Tuesday, March 25, 2008

A Robotic Helping Hand

Georgia Tech's prototype robot responds to instructions given with an ordinary laser pointer.

Ready to fetch: Georgia Tech's new home-assistance robot stands at about human height, with two camera eyes that can home in on the spot projected by a laser pointer. Its roughly two-foot-long sensor-equipped arm can extend down to the ground or up to tables to pick up lightweight items.
Credit: Rob Felt/Copyright Georgia Institute of Technology 2008

A new robot from Georgia Tech understands commands given using a simple tool: an off-the-shelf laser pointer. In a demonstration video, a person reclining in a chair flicks on a green laser and trains it on a cordless phone on the floor a few feet away. A thin, five-foot-seven-inch robot called Elevated Engagement, or El-E for short, fixes on the phone, wheels over, grips it, and brings it back to the user in a robotic version of fetch.

Companion robots have been making their way into homes for years, from Furbys and Tamagotchi digital pets to Paro, the therapeutic baby-seal robot, but El-E is a step closer to a robot that can, say, clean up an entire house or do the dishes. Many obstacles remain, however, in areas like navigation, grasping, and communication.

El-E, built by the Healthcare Robotics Lab at Georgia Tech, is the first robot to be guided by a laser pointer, a method more precise than the gesture- and speech-based interfaces robotics researchers have tried in the past. According to the project's principal investigator, Charles Kemp, the approach was partly inspired by quadriplegics who direct helper monkeys with laser pointers. "It's a point-and-click interface," says Kemp. Users point the laser at what they want and then at where they want it to go: to themselves, to another person, or onto another surface.

El-E is also the first robot to autonomously retrieve objects from surfaces of varying heights in an unmapped environment. Robots have reached for objects on tables and shelves before, but they needed to know the layout of a static setting in advance. El-E, by contrast, can work in a new room with no map and handle new or moved tables by using its own built-in laser to detect surfaces.

Kemp's lab is developing the robot in collaboration with Julie Jacko, director of the Institute for Health Informatics and a professor at the University of Minnesota, and Jonathan Glass, who directs a center at Emory that researches amyotrophic lateral sclerosis, or ALS (Lou Gehrig's disease). The team used several off-the-shelf components to build most of the bot and added the novel laser-pointer interface at its "head." The first part of the interface is a camera coupled to a hyperbolic mirror that makes it omnidirectional so that it can see any object illuminated by the pointer. The robot swivels its two "eyes"--high-resolution cameras--until they are facing the spot made by the laser pointer. The robot then triangulates information from the cameras to estimate the object's position in three-dimensional space. Once El-E locates an object, it declares its success by saying the word "ding" and wheeling over to it.
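That spot-finding step amounts to classic two-view triangulation: each camera defines a ray toward the bright laser spot, and the object's position is estimated where the rays come closest. The sketch below illustrates the idea only; it is not the lab's code, and the function name and inputs are hypothetical.

```python
import numpy as np

def triangulate_spot(c1, d1, c2, d2):
    """Estimate the 3-D laser-spot position from two camera rays.

    c1, c2 : camera centers (3-vectors)
    d1, d2 : direction vectors from each camera toward the spot
    Returns the midpoint of the shortest segment joining the two rays,
    or None if the rays are too close to parallel to give a stable fix.
    """
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    b = c2 - c1                      # baseline between the cameras
    k = d1 @ d2
    denom = 1.0 - k ** 2
    if denom < 1e-9:                 # nearly parallel rays
        return None
    # Ray parameters that minimize the distance between the two rays.
    t1 = (b @ d1 - (b @ d2) * k) / denom
    t2 = ((b @ d1) * k - b @ d2) / denom
    p1 = c1 + t1 * d1                # closest point on ray 1
    p2 = c2 + t2 * d2                # closest point on ray 2
    return (p1 + p2) / 2.0           # best estimate of the spot's position
```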

"The use of a laser pointer on El-E opens up a brand new way for people to interact with robots," says Andrew Ng, a professor of computer science at Stanford University, who has followed Kemp's work closely. "I think this is a way of interacting with robots that will prove useful on many more applications."

To pick up an object, El-E first uses its laser range finder to determine whether the object is on the floor or on an elevated surface. If it's on the floor, El-E moves toward the object and lowers its range finder to scan across the floor. If the object is elevated, El-E uses the range finder to find the edge of the table or desk on which the object rests. Once it docks with the table, El-E scans the surface and uses a camera on its hand to look down and visually segment the object, on the assumption that the tabletop has a uniform visual texture. So far, El-E can correctly pick out an object among others as long as they are spaced apart; the team hasn't yet tested objects that are clustered or overlapping.
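The segmentation step leans on that uniform-texture assumption: anything that looks different from the tabletop is treated as the object. A minimal sketch of the idea, with a hypothetical threshold and none of El-E's actual code, might look like this:

```python
import numpy as np
from scipy import ndimage

def segment_object(image, diff_thresh=40):
    """Crude figure/ground segmentation for one object on a uniform tabletop.

    image : H x W x 3 uint8 array from a downward-looking hand camera
    Returns a boolean mask of the largest region that differs from the table.
    """
    img = image.astype(np.float32)
    # Estimate the table's appearance from the image border, assumed to
    # show only the tabletop.
    border = np.concatenate([img[0], img[-1], img[:, 0], img[:, -1]])
    table_color = border.mean(axis=0)
    # Pixels far from the table color are candidate object pixels.
    dist = np.linalg.norm(img - table_color, axis=2)
    candidates = dist > diff_thresh
    # Keep only the largest connected blob as the object.
    labels, n = ndimage.label(candidates)
    if n == 0:
        return np.zeros(candidates.shape, dtype=bool)
    sizes = ndimage.sum(candidates, labels, index=range(1, n + 1))
    return labels == (1 + int(np.argmax(sizes)))
```

A simple scheme like this works only when objects are well separated, which is consistent with the team's note that clustered or overlapping objects haven't been tested.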

http://www.technologyreview.com/Infotech/20453/?a=f
