Intelligent Machines

Tactile Gaming and Telepresence Androids

Highlights from the SIGGRAPH computer graphics conference.

Aug 9, 2011
The SIGGRAPH conference, which takes place in Vancouver, Canada, this year, is a showcase for the latest research in computer graphics, interfaces, and design.

These images were captured using a new kind of video camera, developed by Contrast Optical and the University of New Mexico, that mimics the human eye’s ability to capture detail in both bright and dark features simultaneously. A system of prisms directs light from the lens onto three digital image sensors, each of which captures a slightly different range of brightness; software then combines the three outputs into a single image that preserves detail in both bright sparks and dark shadows. The three smaller images, right, show the output from the individual sensors; the combined image is shown at left.
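The core idea of combining differently exposed sensor outputs can be illustrated with a minimal sketch. This is not Contrast Optical’s actual algorithm; it assumes three linear images already registered to each other, relative exposure factors for each sensor, and a standard "hat" weighting that trusts mid-range pixels over clipped ones. The function name and weighting scheme are illustrative only.

```python
import numpy as np

def merge_exposures(images, exposures):
    """Merge differently exposed linear images into one radiance map.

    images: list of float arrays with values in [0, 1], one per sensor.
    exposures: relative exposure of each sensor, e.g. [4.0, 1.0, 0.25].

    Each pixel is divided by its exposure to estimate scene radiance,
    then the estimates are blended with weights that peak at mid-gray
    and fall to zero at the clipped extremes (0 and 1).
    """
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for img, t in zip(images, exposures):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # hat weight: 1 at 0.5, 0 at 0 or 1
        num += w * (img / t)               # weighted radiance estimate
        den += w
    return num / np.maximum(den, 1e-6)     # avoid division by zero
```

With consistent inputs, all three sensors vote for the same radiance and the merge simply recovers it; the weighting matters when one sensor is saturated or underexposed.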
Software developed by Thibaut Weise and colleagues at the Swiss Federal Institute of Technology in Lausanne makes a digital character mirror your facial expressions. The system uses Microsoft’s Kinect motion-sensing camera to track a user’s face and translate its shape and motion onto a digital character. In the top image, two users show how this could enable new kinds of online interaction.
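Systems like this typically drive a digital character through a blendshape rig: the tracked face is expressed as weights over a set of stored expression offsets, which are then applied to the character’s neutral mesh. The sketch below shows only that final step, under the assumption of a linear blendshape model; the function name and data layout are hypothetical, not Weise’s actual code.

```python
import numpy as np

def apply_blendshapes(base, deltas, weights):
    """Deform a neutral mesh by a weighted sum of expression offsets.

    base: (n_vertices, 3) array, the character's neutral face.
    deltas: list of (n_vertices, 3) arrays, one offset per expression
            (smile, raised brow, open jaw, ...).
    weights: tracked activation of each expression, usually in [0, 1].
    """
    out = base.astype(float).copy()        # never modify the neutral mesh
    for delta, w in zip(deltas, weights):
        out += w * delta                   # linear combination of offsets
    return out
```

Each video frame, the tracker re-estimates the weights and the mesh is rebuilt, which is what makes the character mirror the user’s expressions in real time.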
This handheld device, known as GelSight, uses transparent rubber and a camera to quickly capture the fine three-dimensional structure of a surface, such as the palm of this person’s hand. The device can capture details just two microns across and less than a micron deep. Micah Kimo Johnson and colleagues developed the system at MIT and say it could be used in forensic investigations — for example, to rapidly examine shell casings so investigators can determine which gun fired them.
This blimp gives users a remote physical presence so they can communicate with people elsewhere. A projector inside the blimp puts the operator’s face on its outside, while speakers transmit his or her voice. Microphones and a camera let the person controlling the blimp hear and see what’s happening around it. The system was created by Hiroaki Tobita and Shigeaki Maruyama at Sony Computer Science Laboratories.
A smartphone app called iFace3D, developed by Digiteyezer, can capture a photo-realistic 3-D model of a person’s head. The app records 20 seconds of video as the phone is moved around a person’s head, and an Internet server processes the footage to reconstruct the head’s 3-D shape. The app can then be used to view, manipulate, and edit that model, as shown here.
The chair shown here provides tactile sensations during game play. The player feels something running along his or her skin, as indicated by the red trace. A mat sitting on the chair contains a grid of vibrating devices that are carefully coordinated to re-create the feel of a smooth, moving touch. The Surround Haptics project is the work of Ali Israr and Ivan Poupyrev at Disney’s Pittsburgh research lab.
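The smooth moving touch relies on a perceptual trick: driving two adjacent vibrating devices at once creates a single "phantom" sensation between them, whose position depends on how the intensity is split. A common way to model this (an assumption here, not necessarily Disney’s exact formulation) is an equal-energy pan law, sketched below with an illustrative function name.

```python
import math

def phantom_amplitudes(intensity, position):
    """Split a target vibration intensity between two adjacent actuators.

    intensity: desired strength of the phantom sensation.
    position: where the phantom should appear between the pair,
              0.0 = at the first actuator, 1.0 = at the second.

    Square-root weighting keeps the summed vibration energy constant,
    so the sensation keeps the same strength as it glides across skin.
    """
    a1 = intensity * math.sqrt(1.0 - position)
    a2 = intensity * math.sqrt(position)
    return a1, a2
```

Sweeping `position` from 0 to 1 over time, pair by pair across the grid, is what turns a coarse array of motors into one continuous stroke.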
This android torso can be used to express another person’s facial expressions, body language, and voice as a novel form of telepresence. Taking control of Telenoid, as it is known, requires special software that uses a webcam to track a person’s face and head movements, transferring them to the robot along with the operator’s voice. Telenoid was developed at Japan’s Advanced Telecommunications Research Institute.
A camera hidden inside this mirror allows software to track and display a person’s heart rate. It does this by monitoring tiny changes in the brightness of a person’s face, which betray pulses of blood moving through the skin’s blood vessels. Ming-Zher Poh, pictured, developed the mirror with colleagues at the MIT Media Lab.
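The signal-processing idea can be sketched simply: average the face’s brightness each frame, remove the slow trend, and find the dominant frequency in the plausible heart-rate band. This is a minimal illustration of the general approach, not the Media Lab’s implementation; the band limits and function name are assumptions.

```python
import numpy as np

def estimate_heart_rate(brightness, fps):
    """Estimate pulse in beats per minute from average facial brightness.

    brightness: sequence of per-frame mean brightness values.
    fps: camera frame rate in frames per second.

    The blood-volume pulse appears as a tiny periodic fluctuation, so we
    take the strongest frequency between 0.75 and 4.0 Hz (45-240 bpm).
    """
    x = np.asarray(brightness, dtype=float)
    x = x - x.mean()                              # remove the DC offset
    spectrum = np.abs(np.fft.rfft(x))             # magnitude spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)  # bin frequencies in Hz
    band = (freqs >= 0.75) & (freqs <= 4.0)       # plausible pulse range
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return peak_hz * 60.0                         # Hz -> beats per minute
```

In practice the real system must also locate the face and cope with motion and lighting changes; restricting the search to the heart-rate band is what keeps those slower disturbances from dominating.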