
Is That Tea Real?

The human-computer relationship gets closer – and more lifelike – through the work of UC Santa Barbara’s Four Eyes Lab.
Picture this: You hold out your hand, flash a “V” sign and a map appears before your eyes, showing your location and waiting for you to point to a nearby restaurant, so that it can show you the menu. Or you walk up to a teapot seemingly suspended in the three-dimensional space in front of you. You take it by the handle and it pours virtual tea into a virtual cup. You’re tempted to take a sip.

Marvels like these are still in the future, but maybe not so far. Computer scientists are advancing on the goal of a truly intuitive interface, where people interact as naturally with computers as they do with other humans. To put it another way, it’s getting harder all the time to tell the virtual from the real.

At the Four Eyes Lab – the name stands for “Imaging, Interaction, and Innovative Interfaces” – the wall between human and computer is being dismantled from both sides. Researchers are developing new ways to make computers more like us in their sensitivity to human gestures and expressions, while coming up with display technologies that mimic 3-D reality.

“We do things that have to do with images,” says computer science Professor Matthew Turk, who started the lab in 2000 after coming to UCSB from Microsoft Corp. He now directs the lab along with computer science Assistant Professor Tobias Höllerer, who came on board in 2003. With the help of PhD students, visiting researchers and alumni, Turk says he and Höllerer are focused on the goal of developing “new technologies that will enable better, more powerful ways for people to interact with computers.”

Eye-Hand Coordination

One example is HandVu, a Four Eyes-developed system that enables the user to guide a computer literally by pointing the way. Turk calls it “a vision-based system for hand-gesture interface.” The user, carrying a computer and wearing goggles with a camera attached, gets the computer’s attention by sticking a hand into the camera’s field of view. The computer, its camera “eye” now locked onto the hand, follows its movements and displays the changing scene inside the goggles.
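
For readers curious about the machinery behind such an interface, here is a minimal sketch of the detect-and-track loop a HandVu-style system runs on every camera frame. It is not the lab’s actual code: HandVu uses a trained hand detector, whereas this illustration substitutes simple skin-color segmentation in OpenCV, and the camera index, color thresholds and blob-size cutoff are assumptions.

```python
# Minimal sketch of a HandVu-style detect-and-track loop using OpenCV.
# Skin-color segmentation stands in for HandVu's learned hand detector;
# the HSV thresholds, blob-size cutoff and camera index are assumptions.
import cv2
import numpy as np

SKIN_LO = np.array([0, 48, 80], dtype=np.uint8)     # assumed HSV lower bound
SKIN_HI = np.array([20, 255, 255], dtype=np.uint8)  # assumed HSV upper bound

def find_hand(frame):
    """Return the bounding box (x, y, w, h) of the largest skin-colored region, or None."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, SKIN_LO, SKIN_HI)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)
    if cv2.contourArea(hand) < 2000:   # ignore blobs too small to be a hand
        return None
    return cv2.boundingRect(hand)

cap = cv2.VideoCapture(0)              # the head-mounted camera, assumed at index 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    box = find_hand(frame)
    if box is not None:                # the camera "eye" has locked onto the hand
        x, y, w, h = box
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("hand tracking sketch", frame)   # stands in for the goggle display
    if cv2.waitKey(1) & 0xFF == 27:    # press Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```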

The system also recognizes different gestures – such as an open hand, a pointing finger or a fist – and takes them as commands. By making a fist followed by a scissors motion with the thumb and forefinger, the user can “grab” a virtual object in the computer’s display, then “let go” of it by opening the fist and pointing the thumb and fingers forward.
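
The mapping from posture to command can be pictured as a small lookup over posture transitions. The sketch below is an illustration only: the posture names, the command set and the fist-to-scissors rule are loose paraphrases of the behavior described above, not HandVu’s actual vocabulary or event model.

```python
# Illustrative mapping of recognized hand postures to interface commands.
# The posture and command names are assumptions made for this sketch.
from enum import Enum, auto

class Posture(Enum):
    OPEN_HAND = auto()
    POINTING = auto()
    FIST = auto()
    SCISSORS = auto()   # thumb-and-forefinger "scissors" shape

class Command(Enum):
    IDLE = auto()
    SELECT = auto()
    GRAB = auto()
    RELEASE = auto()

def interpret(previous: Posture, current: Posture) -> Command:
    """Turn a posture transition into a command, e.g. fist followed by scissors grabs."""
    if previous is Posture.FIST and current is Posture.SCISSORS:
        return Command.GRAB
    if previous is Posture.SCISSORS and current is Posture.OPEN_HAND:
        return Command.RELEASE
    if current is Posture.POINTING:
        return Command.SELECT
    return Command.IDLE

# Example: the grab gesture described above.
print(interpret(Posture.FIST, Posture.SCISSORS))   # Command.GRAB
```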

This ability to read hand signals suggests any number of possibilities. Link the computer to the global positioning system, for instance, and the scenario of the on-demand map that flashes restaurant menus isn’t all that farfetched.

Höllerer says hand-tracking computers are a step toward a merger of virtual reality with the physical world, so that user interfaces “basically consist of physical objects.” With HandVu technology and GPS working together, he says, one could point to an object, such as a building or even a particular window in the building, and thereby mark it for others who are using the same database (he and his students have done an inventory of trees on part of the UCSB campus in this way). Not only does the object have a label, but the label would automatically appear to anyone looking at it through a linked viewing system – such as the camera and goggles now used with HandVu. “The idea is that, wherever you go, without any additional need for calibration or modeling, you should be able to don your glasses and add annotations to the physical world,” says Höllerer.

Soldiers could use such technology to alert each other to potential snipers. Search-and-rescue teams could use it to mark the ground they have covered and show areas yet to be combed. Virtual objects could be inserted into a real scene through the same method. “A group of architects in front of a construction site can see a fully rendered building before it is even started,” says Höllerer. Landscape architects could do the same scene-painting with virtual trees. Something like x-ray vision is possible as well. Using sensors that detect what lies behind an object and then send their data to the hand-tracking computer, viewers can see “through” the object by pointing at it.
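
As a toy example of what one of these shared annotations might look like as data, the sketch below anchors a label to GPS coordinates so that any linked viewer could fetch and overlay it. The field names, the sample coordinates and the in-memory “database” are stand-ins for illustration, not the lab’s actual schema.

```python
# Sketch of a geo-anchored annotation shared through a common store.
# Field names, coordinates and the list-as-database are illustrative only.
from dataclasses import dataclass, asdict
import json

@dataclass
class Annotation:
    label: str         # e.g. a tree species, or "possible sniper position"
    latitude: float    # from GPS
    longitude: float
    altitude_m: float  # lets a label stick to one window rather than the whole building
    author: str

# One user points at an object and posts a label...
note = Annotation("coast live oak #117", 34.4140, -119.8489, 12.0, "survey_team")
shared_db = [json.dumps(asdict(note))]   # stand-in for a shared database

# ...and anyone looking at the same spot through a linked display retrieves it.
for record in shared_db:
    entry = json.loads(record)
    print(f'{entry["label"]} at ({entry["latitude"]}, {entry["longitude"]})')
```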

Waiting for Miniaturization

HandVu-type interfaces have some way to go before hitting the consumer mainstream.

At this point, as Turk admits, “we are experimenting with fairly clunky devices.” Users of hand-tracking systems have to carry laptop-size computers in backpacks, and much of their head is covered with a contraption holding a camera (through which the computer sees) and goggles (through which they see what the computer sees). They’re still easy to spot in a crowd. As Turk says of the systems he and Höllerer are developing, “It’s not like we have a company just waiting to stick this in its pocket.” But they also know how technology progresses – toward making things smaller, more powerful and more portable. Höllerer hopes that “within five years or so we will arrive at something that people won’t reject right away.”

Photo captions – Top: Various non-photorealistic rendering techniques being explored by the Four Eyes Lab. Top left: Illuminating and modeling for augmented reality, as seen by a user wearing a head-worn display. Bottom left: Visiting K-12 students reaching out to interact with a virtual scene. Above: A virtual pointing device manipulating a graphical user interface on the Fogscreen.

One exception to these purely experimental gadgets is the Fogscreen, a display system that is in commercial production and is wowing folks at trade shows and other venues worldwide. Though not developed by the Four Eyes Lab, the Fogscreen has a UCSB link: It was invented by a Four Eyes alumnus, Ismo Rakkolainen, who is now chief technology officer of the company manufacturing it in Finland. As the name suggests, it is a display made of mist, kicked up by zapping ordinary water with ultrasound and then blown downward through a carefully arranged battery of fans into a thin vertical plane 59 inches high and 79 inches wide. This “immaterial display,” as Turk calls it, has unique properties. It can show projected images on both sides, allowing both the front and back of an object to be displayed. And you can walk right through it. You won’t even get wet doing so. The sheet of fog is dry to the touch, give or take a rare drop of condensation.

The Four Eyes Lab has two Fogscreens on loan (they cost $90,000 each), and it has them set up at right angles to create three-dimensional images floating in air. But even a single screen can create a natural-looking interface for virtual objects and human-computer interactions. “The first thing people try to do is reach in and touch things,” says Höllerer. “That is where our lab jumped in and said, ‘Look, we have to develop interactive technologies for this, because clearly that’s what people want to do.’”

The “Holy Grail” – as Seen on Star Trek

Höllerer and his students have devised applications that enable users to manipulate Fogscreen images with handheld devices such as LED-equipped position trackers or wireless joysticks. In one of these, virtual objects such as teapots, balls and cubes can be moved around and made to collide – all seemingly floating in mid-air. In another demonstration, the user steers with a joystick through a “virtual forest,” looking in various directions and even changing the direction of the sunlight. Somewhere around the corner, in the 3-D space created by dual Fogscreens arrayed at right angles, Höllerer sees this technology moving toward “the holy grail of virtual reality in computer graphics, which is basically the science fiction idea of the holodeck from Star Trek, where you can bring up any 3-D environment and it looks lifelike.”
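
Stripped of the fog and the graphics, the loop inside such a demo is simple: read the tracked device’s position, move the grabbed object to match, and check for collisions. The sketch below illustrates that idea with made-up object sizes and a hypothetical tracker callback; it is not the lab’s code.

```python
# Illustrative interaction loop: a tracked handheld device drives a virtual
# object, and a sphere-to-sphere test detects collisions. Object sizes,
# positions and the tracker callback are assumptions for this sketch.
from dataclasses import dataclass
import math

@dataclass
class Body:
    name: str
    x: float
    y: float
    z: float
    radius: float

def colliding(a: Body, b: Body) -> bool:
    """Treat each object as a sphere and test for overlap."""
    return math.dist((a.x, a.y, a.z), (b.x, b.y, b.z)) <= a.radius + b.radius

teapot = Body("teapot", 0.0, 1.2, 0.5, 0.15)   # meters, in screen space
cup = Body("cup", 0.3, 1.0, 0.5, 0.08)

def on_tracker_update(grabbed: Body, px: float, py: float, pz: float) -> None:
    """Called each frame with the handheld tracker's reported position."""
    grabbed.x, grabbed.y, grabbed.z = px, py, pz
    if colliding(grabbed, cup):
        print(f"{grabbed.name} collides with {cup.name}")

# Simulated tracker samples moving the teapot toward the cup.
for step in range(5):
    on_tracker_update(teapot, 0.3 * step / 4, 1.2 - 0.05 * step, 0.5)
```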

The Fogscreen apps work in much the same way as HandVu technology. In both cases, the user motions to the computer, with a hand or a handheld device, and the computer changes the display in response. Outdoors, it may pin a virtual label on a tree. Indoors, on the Fogscreen, it might upend a virtual teapot. In such ways, two distinct research streams come together to create new possibilities. Turk’s focus on computer vision – making computers see and interpret their surroundings more accurately – dovetails with Höllerer’s interest in raising the realism quotient of display technology and 3-D interaction. The result is a new experience of cyber-sociability, with people and machines communicating as never before.