Here Comes the Holodeck

April 12, 2001

Artificial intelligence agents are used to create challenging and unpredictable VR training exercises.

The use of virtual reality or arcade-style games to practice hand-eye coordination and quick reactions, or even to teach factual information, is easy to understand and widely accepted. But can such techniques also teach sound judgment and clear thinking in an emergency?

New programs developed by ISI and two cooperating USC institutes are designed to do just that by melding advances in artificial intelligence with state-of-the-art work in rendering virtual environments through animation and sound.

The Mission Rehearsal Exercise shows both how desirable the holodeck would be and how far away it remains.

A “Mission Rehearsal Exercise” developed for the U.S. Army by ISI, the USC Institute for Creative Technologies (ICT), and the USC Integrated Media Systems Center (IMSC) takes soldier-trainees on a virtual reality mission in a troubled town in Bosnia. There, they must deal with a situation threatening to spin out of control.

The system uses a movie-theater-sized curved screen (8 feet tall, 31 feet wide) that wraps around trainees. The screen image is combined with highly directional, lifelike “immersive sound,” creating a convincing illusion of being present at the scene rather than watching a show.

The scene is populated with animated figures who exist only as computer programs, but are nevertheless autonomous agents who can interact with human trainees in real time.

An article about the Mission Rehearsal Exercise (MRE) program appeared in the proceedings of the AAAI Spring Symposium on Artificial Intelligence and Interactive Entertainment, held at Stanford University, March 26-28, 2001. Another paper, along with a partial demonstration of the system, is scheduled for the Agents 2001 conference in Montreal, Quebec, May 28-June 1, 2001.

The army simulation is an early attempt to reach toward the “holodeck,” the virtual reality training and recreation facility seen in “Star Trek: The Next Generation,” according to project leader Jeff Rickel of ISI.

In one simulation scenario, a lieutenant enters the village to deal with one problem — a weapons inspection team being threatened by an angry crowd — and finds another one as well: an American jeep has accidentally struck and injured a local child.

Should the lieutenant split his forces to deal with both situations? If so, how? Meanwhile, a TV camera crew arrives, further complicating the situation.

The village scene uses 3-D computer modeling to create basic shapes visible from any angle, enhanced by texture mapping. The group used commercial software from Boston Dynamics to animate the people. An extremely sophisticated sound system, created by Chris Kyriakakis of the IMSC, plays multiple sound tracks (up to 64 tracks for some effects) through no fewer than 12 speaker channels.
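
To give a rough sense of how such multichannel playback works (a hypothetical sketch, not the actual IMSC design), each source track can be assigned a per-speaker gain, so that every speaker channel becomes a weighted mix of all the tracks:

```python
import numpy as np

# Hypothetical sketch of a multitrack-to-multichannel mixdown; the real
# IMSC immersive-audio system is far more sophisticated than this.
N_TRACKS = 64      # source sound tracks (crowd, vehicles, voices, ...)
N_SPEAKERS = 12    # speaker channels surrounding the trainees
N_SAMPLES = 48000  # one second of audio at 48 kHz

# Each track gets a gain per speaker; panning a sound toward one side
# of the screen means raising its gains on that side's speakers.
gains = np.random.rand(N_SPEAKERS, N_TRACKS)          # placeholder gains
tracks = np.random.randn(N_TRACKS, N_SAMPLES) * 0.01  # placeholder audio

# Mix: every speaker channel is a weighted sum of all source tracks.
speaker_feeds = gains @ tracks   # shape: (N_SPEAKERS, N_SAMPLES)
```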

Two kinds of autonomous software agents inhabit this complex and convincing environment. Most are basic robotic programs that carry out a limited range of pre-scripted, routinized behaviors: milling in the background, standing around, and so on.
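
A scripted background character can be pictured as a small loop over canned behaviors, with no perception of the trainee at all. The sketch below is purely illustrative; the article does not describe MRE’s actual implementation:

```python
import random

# Illustrative sketch of a scripted background character: it cycles
# through canned behaviors and never reacts to the trainee.
class BackgroundAgent:
    BEHAVIORS = ["stand_idle", "mill_about", "chat_with_neighbor"]

    def __init__(self, name):
        self.name = name

    def update(self):
        # Pick the next canned behavior; no perception, no reasoning.
        behavior = random.choice(self.BEHAVIORS)
        print(f"{self.name}: {behavior}")

crowd = [BackgroundAgent(f"villager_{i}") for i in range(3)]
for _ in range(2):            # two simulation ticks
    for agent in crowd:
        agent.update()
```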

Three characters are more complex. These software actors have a substantial ability to react to what the trainee does. Their faces change expression, thanks to software from the Santa Cruz, Calif.-based Haptek Corporation. They move, and, most strikingly, they can respond to speech.

Scripted characters are relatively easy to create, explains Rickel, “but have limited flexibility, making them well suited for bit parts.” AI characters are more difficult to program, but can interact with people and with their environment in more flexible ways, making them well suited for key roles such as the mother, sergeant, and medic, who all have to interact with the human lieutenant.
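
By contrast, an AI character must choose its behavior at run time based on what it perceives. The toy sketch below illustrates the difference; its event-to-response table is invented, whereas the real MRE agents reason over tasks, plans, and spoken dialogue:

```python
# Illustrative sketch of a reactive character: unlike the scripted
# background agents, it perceives events and picks responses at run time.
class ReactiveAgent:
    def __init__(self, name):
        self.name = name

    def perceive(self, event):
        # Map what just happened to a response; a real agent would
        # reason over goals, plans, and dialogue state instead.
        responses = {
            "trainee_asks_status": "report_injury_details",
            "trainee_orders_medevac": "radio_for_helicopter",
        }
        return responses.get(event, "await_orders")

    def act(self, event):
        print(f"{self.name} -> {self.perceive(event)}")

sergeant = ReactiveAgent("sergeant")
sergeant.act("trainee_asks_status")     # sergeant -> report_injury_details
sergeant.act("trainee_orders_medevac")  # sergeant -> radio_for_helicopter
```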

Research by Rickel and his colleagues, including Jonathan Gratch, Randall Hill, and William Swartout of ICT and Stacy Marsella of ISI, builds on earlier work at ISI in which Rickel and W. Lewis Johnson developed a teaching agent called “Steve.”

Steve instructed Navy recruits in a virtual-reality world presented through VR glasses, responding to their simple questions. The sergeant and the medic are more advanced versions of Steve, whose appearance has been updated from the early version’s legless, floating presence to a more lifelike form.

The third AI character, the Mother, adds another layer: she goes beyond words to the expression of emotions. Gratch and Marsella were responsible for this feature.
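
A common way to model such emotions, and an approach Gratch and Marsella have written about elsewhere, is appraisal: events are scored against the character’s goals, and the score drives the displayed emotion. The following minimal sketch is invented for illustration and is not the MRE model:

```python
# Minimal, invented sketch of appraisal-style emotion: an event is
# scored against the character's goals, and the score selects an
# expression. The actual model used in MRE is much richer.
class EmotionalAgent:
    def __init__(self, name, goals):
        self.name = name
        self.goals = goals          # goal -> importance (0..1)

    def appraise(self, event, effects):
        # Sum how the event helps or hurts each goal it touches.
        score = sum(self.goals.get(goal, 0) * delta
                    for goal, delta in effects.items())
        if score < -0.5:
            return "distress"
        if score > 0.5:
            return "relief"
        return "concern"

mother = EmotionalAgent("mother", {"child_safe": 1.0})
print(mother.appraise("child_hit_by_jeep", {"child_safe": -1.0}))  # distress
print(mother.appraise("medevac_arrives",  {"child_safe": +0.8}))   # relief
```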

Another major addition to the Steve agent is a dramatic story line, with continuing incidents driving the action. As a form of scorekeeping, a television report filed by the news crew on the scene records the results of the trainee’s responses to the situation, chronicling either an abandoned boy in critical condition or a boy out of danger because of timely action.

“The work we have done in one way shows how far away the holodeck is — but in another shows how useful it may be,” Rickel said. “The project represents a grand challenge for both AI and virtual reality, but the potential payoff is a powerful new medium for experiential learning.”

“What makes the Mission Rehearsal Exercise project unique is that we are bringing together for the first time a set of technologies including immersive audio, large scale graphics, and virtual humans and linking them to an interactive story line to create a compelling experience,” added William Swartout of the ICT.

“The synergies that result are powerful. We were surprised to find that even though the system is still at its beginnings, some people came away from the simulation emotionally moved. I don’t think this effect is due to any one element alone, but rather it is due to all of them working together.”
