My first robot was an MC68HC11-based RugWarrior Pro I named “Beto.” When I opened the box he came in, I found a couple of motors, tires, a small board, tons of little things to solder, and a few other bits and pieces of robo-guts. Over the next week or so, my kids observed me putting him together on our dinner table. At the end of the week, I put some batteries into him, uploaded some sample code, and watched him make some noises and move around on the floor. The kids giggled and clapped and tried with all their might to get him to come to them as if they were calling a small dog. “Come here Beto! Come here!”
Without ever having touched a mouse, keyboard, or joystick, the girls were able to make Beto go where they wanted him to. They would laughingly place their hand to the right front of him to make him steer left, on his left to make him steer right, and so on. This was a primitive human-robot interaction (HRI) that some researchers are now (and probably were then, too) spending their lives trying to understand and develop.
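The "come here" game works because of a simple avoid behavior: an IR bounce on one side of the robot makes him steer toward the other side. Here's a rough sketch of that idea in C; the sensor and motor helpers (`ir_detect_left`, `ir_detect_right`, `drive`) are made-up names for illustration, not the RugWarrior library's actual calls.

```c
/* Hypothetical hardware helpers -- stand-ins for whatever the
   RugWarrior's real library provides, not its actual API. */
extern int  ir_detect_left(void);               /* 1 if an IR bounce is seen on the left  */
extern int  ir_detect_right(void);              /* 1 if an IR bounce is seen on the right */
extern void drive(int left_pct, int right_pct); /* wheel speeds, -100..100 */

void avoid_loop(void)
{
    for (;;) {
        int left  = ir_detect_left();
        int right = ir_detect_right();

        if (right && !left)
            drive(-50,  50);   /* hand at his right front: steer left */
        else if (left && !right)
            drive( 50, -50);   /* hand at his left front: steer right */
        else if (left && right)
            drive(-50, -50);   /* blocked ahead: back up               */
        else
            drive( 50,  50);   /* nothing nearby: keep rolling         */
    }
}
```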
MIT has been studying this phenomenon under its Sociable Machines Project, with the robot Kismet as the primary research platform. Kismet is an extreme example of a robot that tries to mimic complex human emotive behaviors. His substantial array of sensors and actuators allows him to react to verbal intonations, be they disciplinary or complimentary.
The Infanoid Project has recently spawned a small celebrity that’s bringing it some attention. Keepon is a squishy little yellow “creature” robot that has the appearance of two tennis balls stacked one atop the other. Keepon is part of a study that aims to “relate robotics to human sciences in order to understand the underlying mechanism of social communication specific to humans and some species of primates.”
Keepon sits at the opposite end of the complexity spectrum from Kismet. Although his implementation model is mechanically complex, his manifest model is profoundly simple. He is a physical caricature with just enough features that it’s easy for humans to project a face onto him.
Good robot builders are finally discovering what good application developers know:
- No matter how cool the interface is, less of it is better. A robot with visible cameras, wires, actuators, and other robo-guts that are not necessary for his purpose is less likely to be adopted or to connect with his users. Superfluous stuff just gets in the way.
- The Manifest Model should closely match the user’s Mental Model. The complexity of how the robot is implemented should be abstracted or hidden from the user, so that the user can apply his own mental model to the robot’s usage. For example, the mental model kids and most users project onto Keepon is that he’s a squishy little guy that doesn’t talk (because he has no mouth, duh).
You can see videos of Keepon here, here, and here. He’ll be at NextFest this week.
Resources:
- Rug Warrior Pro Robot, Alan McDonley
- Insights from User Interface Literature
- About Face, Alan Cooper