Do you have a lamp with a standard-size lightbulb socket? If so, you’ve already got one piece of the required gear for turning your desk—or fridge, wall, or pretty much any other surface—into an augmented-reality display that you can interact with much as you do with the screen of a smartphone.
That’s the premise behind a project from researchers in the Future Interfaces Group at Carnegie Mellon University. Called Desktopography, it combines a small projector, a depth sensor, and a computer to project images onto surfaces; the projections can shift around to stay out of the way of objects sitting on the same surface. The device screws into a lightbulb socket, and the latest prototype draws its power from the lamp itself, says Robert Xiao, a graduate student who leads the project.
Desktopography can project things like a calculator or map onto a desk, which you can then interact with or move around using multiple fingers. If you place an object—say, a cup—on the spot where an image is projected, the software quickly moves the projection to an open area. Projections can be linked to physical objects, too, so if you slide a book across a table, a calendar projected on it travels along.
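The "move to an open spot" behavior described above can be sketched as a simple search over an occupancy map. This is an illustrative assumption, not Desktopography's actual software: the desk is modeled as a grid where `True` marks a cell blocked by a physical object, and a displaced projection is relocated to the nearest free cell via breadth-first search.

```python
from collections import deque

def nearest_free_cell(occupied, start):
    """Breadth-first search from `start` to the closest unoccupied cell.

    `occupied` is a 2-D list of booleans (True = blocked by an object);
    `start` is the (row, col) where the projection currently sits.
    """
    rows, cols = len(occupied), len(occupied[0])
    seen, queue = {start}, deque([start])
    while queue:
        r, c = queue.popleft()
        if not occupied[r][c]:
            return (r, c)  # first free cell found is also the nearest
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return None  # no open spot anywhere on the surface

# A cup lands on the cell (1, 1) where a calculator was projected:
grid = [
    [False, False, False],
    [False, True,  False],  # (1, 1) is now blocked
    [False, False, False],
]
print(nearest_free_cell(grid, (1, 1)))  # → (2, 1), an adjacent free cell
```

A real system would search over rectangular regions the size of the projected widget rather than single cells, but the idea, relocating to the nearest unobstructed area, is the same.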
While it’s still confined to the lab, Xiao says Desktopography is an attempt to bring augmented reality to everyday life without adding any sensors or electronics to the surfaces on which you want to see images. Unlike Microsoft’s HoloLens and Meta’s Meta 2 (see “The Desktop of the Future Is Coming”), it doesn’t require a headset to produce good-looking images, and unlike apps such as Pokémon Go, it doesn’t use a smartphone to make these virtual images appear in front of you.
“It’s about trying to break interaction out from our screens and our devices, where they’re separated from reality, and a separate world, really … and try to merge those onto our environment,” Xiao says.
There are a lot of hurdles to overcome before Desktopography can be commercialized. For instance, using a camera for precise multi-touch tracking is tricky, especially when the camera sits above your hand: it can't see the underside of your finger, so it has to estimate when your fingertip actually touches the surface. It's also hard to pack everything into a small package while properly dissipating the heat the device generates.
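The touch-estimation problem above can be illustrated with a toy depth-threshold check. This is a hedged sketch under assumed numbers, not the project's actual algorithm: with an overhead depth camera, a fingertip is judged to be touching when its measured depth comes within a small tolerance of the calibrated surface depth, and that tolerance is exactly where the ambiguity lies, since the camera cannot see the gap under the finger.

```python
SURFACE_DEPTH_MM = 900.0   # assumed calibrated camera-to-desk distance
TOUCH_TOLERANCE_MM = 10.0  # slack required because the camera can't see
                           # the underside of the finger

def is_touching(fingertip_depth_mm):
    """Classify a single fingertip depth reading as touch vs. hover."""
    return abs(SURFACE_DEPTH_MM - fingertip_depth_mm) <= TOUCH_TOLERANCE_MM

print(is_touching(895.0))  # True: within 10 mm of the desk
print(is_touching(860.0))  # False: hovering ~40 mm above it
```

The wider the tolerance, the more hovering fingers register as false touches; the narrower it is, the more real touches are missed, which is why precise multi-touch from an overhead camera remains hard.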
Xiao is trying to be realistic, saying it could take about five years to turn the project into a real product.