A typical evening at my house includes at least four screens of various sizes scattered around the living room: a TV, a laptop, an iPad, and an iPhone, plus the occasional smart watch or any other gadget I might be testing. For the most part, each operates on its own, though they share a lot of the same apps and services.
Robert Levy hates this. He’s the chief technology officer for a startup called Conductr, which is building a Web-based platform to let developers create apps that can spread out across multiple devices, automatically determining what parts of the app to feed to which display at any time. Levy thinks this could be useful in any environment in which you have several displays. If the approach takes off, it could also inspire entirely new ways of designing apps.
This kind of multiscreen scenario is becoming increasingly common. Already, smartphones are dominant in cell-phone sales, tablets are de rigueur, and wearable gadgets like smart watches and head-worn computers are tiptoeing toward the mainstream.
Yet while there are already a few ways to share applications among different gadgets, like using your iPhone to control iPad games or receiving smartphone updates on a smart watch, these are limited because it's difficult to get devices running different operating systems to play nicely with each other. Additionally, some operating systems, Android in particular, are fragmented even within their own ecosystem, with different devices running different versions of the OS.
“The problem we see right now is these devices don’t really collaborate with each other,” Levy says.
Conductr lets developers set rules within apps that allow them to work effectively across several devices at once. For instance, within a video-streaming app, the video-watching component might be set to stick with your biggest display, like a laptop or a tablet, while a list of related videos would show up on your smartphone and a remote control would land on your smart watch. Any time you started using the same app on an additional device, Conductr would automatically reconfigure the different pieces of the app, Levy says.
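Conductr's SDK is not yet public, so the shape of these rules is unknown; but the idea Levy describes can be sketched as a simple preference table, where each component of the app lists the device classes it would rather run on. Everything here, the type names, the `rules` table, and the `assign` function, is invented for illustration:

```typescript
// Hypothetical sketch of per-component device rules for a
// video-streaming app. None of these names come from Conductr's SDK.
type Component = "video" | "relatedList" | "remote";
type DeviceClass = "tv" | "laptop" | "tablet" | "phone" | "watch";

// Each app component lists device classes in order of preference.
const rules: Record<Component, DeviceClass[]> = {
  video: ["tv", "laptop", "tablet", "phone"],
  relatedList: ["phone", "tablet", "laptop"],
  remote: ["watch", "phone", "tablet"],
};

// Place each component on the most-preferred device class that is
// currently connected, falling back to the first connected device.
function assign(connected: DeviceClass[]): Record<Component, DeviceClass> {
  const placement = {} as Record<Component, DeviceClass>;
  for (const component of Object.keys(rules) as Component[]) {
    const match = rules[component].find((d) => connected.includes(d));
    placement[component] = match ?? connected[0];
  }
  return placement;
}
```

With only a tablet connected, every component lands on the tablet; once a phone and a watch sign in, `assign` would peel the related-videos list off to the phone and the remote control off to the watch, which is the reconfiguration Levy describes.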
Conductr builds on the work of Daniel Wigdor, an assistant professor of computer science at the University of Toronto who studies multidevice interaction and serves as Conductr's science advisor. He is a co-author of three papers on gadgets working together, all of which inform Conductr's approach and will be presented at the ACM CHI Conference on Human Factors in Computing Systems in Toronto in late April.
Wigdor thinks of complementary interactions between gadgets as a “symphony of devices,” where software allowing them to work together can bring them into harmony. “It’s like apps are running in the cloud and screens are things you look through to see a portion of the application,” he says.
Levy says a developer would use Conductr’s software development kit to enable an app to communicate with Conductr over the Internet. Each gadget running that app would report to Conductr, letting it know, say, how big its screen is, what types of inputs it supports, its orientation, and its operating system. Conductr’s algorithms would then determine which features of that app should run on it. If one device is turned off or otherwise disconnected, Conductr would just reshuffle the app’s features among the remaining devices.
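The reporting-and-reshuffling loop Levy describes can be sketched in a few lines. The report fields, the feature list, and the `distribute` and `onDisconnect` functions below are all assumptions, since the real protocol is not public; the sketch simply hands the most screen-hungry feature to the largest connected display and redistributes whenever a device drops off:

```typescript
// Hypothetical capability report a device might send to Conductr.
// Field names are invented; the actual protocol is not documented.
interface DeviceReport {
  id: string;
  screenInches: number;
  inputs: ("touch" | "keyboard" | "voice")[];
  os: string;
}

// App features, roughly ordered by how much screen they need.
const features = ["mainSlide", "slideList", "timer", "speakerNotes"];

// Sort devices by screen size and deal features out round-robin,
// so the biggest display gets the most demanding feature and a
// lone device simply gets everything.
function distribute(devices: DeviceReport[]): Map<string, string[]> {
  const sorted = [...devices].sort((a, b) => b.screenInches - a.screenInches);
  const placement = new Map<string, string[]>(sorted.map((d) => [d.id, []]));
  features.forEach((feature, i) => {
    placement.get(sorted[i % sorted.length].id)!.push(feature);
  });
  return placement;
}

// When a device disconnects, rerun the assignment over whatever
// remains, so its features reappear on the other screens.
function onDisconnect(devices: DeviceReport[], goneId: string): Map<string, string[]> {
  return distribute(devices.filter((d) => d.id !== goneId));
}
```

Under this toy policy, a laptop, phone, and watch would split the slide, the slide list, and the timer among themselves, and powering off the watch would push the timer back onto one of the remaining screens.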
Levy demonstrates this in a video of a live demo posted on Conductr's site, which shows the company's technology running a simple PowerPoint-like application across several devices at once. Levy starts off using the application on just a laptop, showing a main slide, a slide list in a sidebar on the right side of the screen, a timer, and some speaker notes. When he signs in to the same app on an Android smartphone, every piece of the app except the main slide disappears from the laptop and reappears on the smartphone's display. He adds a Pebble smart watch, which grabs the timer function, and then puts on Google Glass, which snags the speaker notes so he can read them on its above-eye projected display. Each of the devices can also act as a remote control for the slides. In the video, at least, the latency appears to be quite low.
“It’s pretty freaking cool the first time you actually see Google Glass do something useful,” Levy says. “It extends your experience instead of trying to distract you from it.”
Conductr is still far from your living room—or anywhere else. While the demo is fully functional and Conductr’s team is working on another app designed specifically for gaming, Conductr is not yet available for developers and Levy won’t say when that will happen.
Niklas Elmqvist, an assistant professor at Purdue University who studies human-computer interaction and information visualization, is hopeful that these kinds of user experiences will be possible in the next few years, though. Elmqvist and other researchers are working on a somewhat similar project called PolyChrome.
“People have smartphones, they have tablets, there are displays everywhere, you might start getting Google Glass, and smart watches too,” he says. “Most of the time these devices are designed for being worked on in isolation, essentially. It doesn’t really make sense to do that.”