Museum Goers Teach Common Sense to Computers

July 25, 2005

ISI researchers have created a system, called LEARNER2, which prompts ordinary, non-expert humans to share their worldly knowledge with silicon-brained learners.

Green-eyed student: LEARNER2 needs help to understand the world of humans. Southern California residents can provide the help at the California Science Center in Exposition Park through the end of August, or on the Internet.

A University of Southern California project is recruiting Internet users and museum visitors to give computers basic knowledge about the everyday human world.

It’s a big job. “Humans know about telephones and refrigerators, about making phone calls and buying food, and details about a million other everyday situations,” said Tim Chklovski, a researcher at USC’s Information Sciences Institute working in ISI’s Interactive Knowledge Capture group.

“Giving computers a similarly broad-coverage collection of knowledge is a step to making computers a lot more capable in thinking about common situations and in making them more helpful.”

Words are only the cues. Real objects in the everyday world fit together in a bewildering variety of configurations that are quite hard for computer intelligences to intuit correctly without help, notes the researcher. “It’s like children without a good teacher.”

And so Chklovski and his collaborator, ISI Project Leader Yolanda Gil, have created a system, called LEARNER2, which prompts ordinary, non-expert humans to share their worldly knowledge with silicon-brained learners.

Visitors to the California Science Center in Exposition Park near USC are doing so now, and will continue through the end of August, as part of a traveling presentation called “Robots and Us” that began in St. Paul last year and has previously been in Columbus and Philadelphia. Visitors to other museums in Ft. Worth, Portland, and Boston will get their chances later this year.

Internet users at home can increase and clarify the system’s knowledge by visiting http://learner.isi.edu/

Chklovski says the approach allows the computer itself to learn by posing questions designed to systematically fill the holes in its understanding. Chklovski and Gil presented a paper on their work at the central venue for sharing artificial intelligence research, the 20th National Conference on Artificial Intelligence (AAAI) held this year in Pittsburgh.

The computer systems learning the information are designed to interface directly with people as “virtual assistants,” or electronic aides. The idea is that they will be able to perform tasks like scheduling meetings, booking travel, or shopping online, based on oral instructions from their human bosses.

“One important thing that we have learned in fifty years of work in Artificial Intelligence is that any intelligent system needs to be able to learn new things all the time,” explained Gil. “You cannot predict in advance all the things they need to know in order to perform a task, nor can you count on a set of knowledge engineers or programmers to be able to describe all the things they know about the objects being used in an application.”

The problem, according to Gil, is often called the “knowledge acquisition” bottleneck, and it makes intelligent systems “brittle” because they cannot reason even slightly beyond the knowledge they start with.

“Having a system that continuously learns new things about the world, and that learns them from volunteers who have a lot of time on their hands and feel they are being useful to science by contributing, is a very promising approach to address brittleness,” she said.

In scheduling and running a videoconference, for example, the assistant needs to understand that a camera needs to be switched on and a microphone set up, tested, and adjusted. If a glitch occurs, say there is no projector, the system has to understand what action to take (for example, locate another projector).

To do so, Chklovski explains, machines need to acquire a base of common sense knowledge. One method of doing this is to have experts prepare what they think is needed, and simply pour it in. But this has led to uneven results, because it’s hard to anticipate where gaps will remain.

A better approach, Gil and Chklovski believe, is to have the computer system itself accumulate the information by asking people systematically. That is what the LEARNER2 online and museum systems do.

The system is a follow-on to an earlier, less directed version for collecting and validating knowledge, which Chklovski developed at his previous academic home, M.I.T.


A user signs in, encouraged by a green-eyed, blinking onscreen robot who is looking for help with words. The robot presents a word (in the illustration, “pulley”) and asks what it’s used for.

New uses for the pulley lead to new questions about what other things serve similar purposes, what else might connect to or be part of a pulley, and so forth, in an open-ended process.

The system keeps pushing the knowledge acquisition into new territory, by prompting users to supply only information that expands on what previous users have supplied.

The knowledge acquired is tested by the same method: “I’ve been told that a pulley is used to hoist,” says the system. The user is asked to agree or disagree. This step verifies knowledge that the system has previously been given, guarding against erroneous or even malicious misinformation.
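In rough code terms, the ask-and-verify loop described above might look something like the minimal Python sketch below. The names used here (KnowledgeStore, next_prompt, and so on) are hypothetical and not drawn from LEARNER2 itself; the sketch only illustrates the flow of asking an open question, having later volunteers verify unchecked statements, and then prompting for knowledge that expands on what previous users have supplied.

```python
import random

# A minimal, hypothetical sketch of the ask-and-verify loop described above.
# Names like KnowledgeStore and next_prompt are illustrative only; they are
# not taken from the actual LEARNER2 code.

class KnowledgeStore:
    def __init__(self):
        # (topic, relation, value) -> net number of volunteers who confirmed it
        self.confirmations = {}

    def add_statement(self, topic, relation, value):
        """Record a brand-new statement contributed by a volunteer."""
        self.confirmations.setdefault((topic, relation, value), 0)

    def confirm(self, statement, agrees):
        """Adjust confidence when a later volunteer agrees or disagrees."""
        self.confirmations[statement] += 1 if agrees else -1

    def unverified(self):
        """Statements no one has checked yet; candidates for verification."""
        return [s for s, votes in self.confirmations.items() if votes == 0]

    def known_values(self, topic, relation):
        """Values already collected for a topic, used to push into new territory."""
        return {v for (t, r, v) in self.confirmations if t == topic and r == relation}


def next_prompt(store, topic):
    """Verify unchecked knowledge first; otherwise ask for something
    that expands on what previous users have already supplied."""
    pending = store.unverified()
    if pending:
        t, r, v = random.choice(pending)
        return f"I've been told that a {t} is used to {v}. Do you agree?"
    known = store.known_values(topic, "used for")
    if known:
        return f"Besides {', '.join(sorted(known))}, what else is a {topic} used for?"
    return f"What is a {topic} used for?"


if __name__ == "__main__":
    store = KnowledgeStore()
    print(next_prompt(store, "pulley"))                 # open question
    store.add_statement("pulley", "used for", "hoist")
    print(next_prompt(store, "pulley"))                 # verification question
    store.confirm(("pulley", "used for", "hoist"), agrees=True)
    print(next_prompt(store, "pulley"))                 # expansion question
```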

So far more than 2,500 people have used the system, in museum settings and online, contributing and verifying hundreds of thousands of statements. In addition to teaching useful knowledge, contributors can compete with others in the amount and quality of their contributions. Perhaps like solving crossword puzzles, answering the questions the computer wants answered highlights subtleties of the many facts and nuances we take for granted, both about the everyday world and about language.

Meanwhile, progressive improvements have made the interface more interesting to humans, so that where initially people became bored after relatively few interchanges, “now they stay on for hours,” Chklovski said, up to five hours at a time in the Internet version.

“In future work, we are exploring additional applications for knowledge collected from thousands of volunteers,” says Chklovski. “We have learned a lot about what works and what does not in collecting knowledge. We have worked on and published about two projects in which people help make computers more facile at manipulating language and understanding the meaning of what people say.”

In the first of these, he explained, volunteer contributors teach computers about possible paraphrases of identical ideas so that computers can better get at the meaning behind the verbiage. In the second, volunteers teach computers to put words with several meanings, such as “plant,” into proper context (i.e., a factory versus something that grows in the garden).
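To make the second idea concrete, the toy Python sketch below shows one way volunteer-labeled sentences could be used to pick the right sense of an ambiguous word. The labeled data and the pick_sense helper are purely illustrative assumptions, not part of the actual projects described.

```python
# A toy, hypothetical sketch of volunteer-driven word-sense disambiguation.
# Volunteers label which sense of "plant" each example sentence uses.
LABELED = [
    ("the plant shut down its assembly line", "factory"),
    ("workers at the plant went on strike", "factory"),
    ("water the plant twice a week", "vegetation"),
    ("the plant grew toward the sunlight", "vegetation"),
]

def pick_sense(sentence):
    """Choose the sense whose labeled examples share the most words with the
    new sentence, using a crude bag-of-words overlap score."""
    words = set(sentence.lower().split())
    scores = {}
    for example, sense in LABELED:
        overlap = len(words & set(example.split()))
        scores[sense] = scores.get(sense, 0) + overlap
    return max(scores, key=scores.get)

print(pick_sense("the plant hired two hundred new workers"))  # -> factory
print(pick_sense("sunlight helps the plant grow"))             # -> vegetation
```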

The California Science Center is located in Exposition Park, near the corner of Vermont and Exposition Boulevards in Los Angeles, just south of USC’s University Park campus. Admission to the main galleries is free. Museum hours are 10 a.m. to 5 p.m.

The paper presented at the AAAI 2005 conference by Chklovski and Gil, “An Analysis of Knowledge Collected from Volunteer Contributors,” is at http://www.isi.edu/~timc/papers/KCAP05-Chklovski-Gil-

Published on July 25th, 2005

Last updated on August 9th, 2021
