
Inhabit This Teddy Bear’s Body Using Virtual Reality

Japanese startup Adawarp thinks teleporting inside the body of a robotic stuffed animal could be a good way to keep in touch with loved ones.
October 26, 2015

Most companies inventing uses for virtual reality headsets like the Oculus Rift, which launches next year, want to transport you into imaginary worlds. Tatsuki Adaniya has a different idea: teleporting you into the body of a robotic teddy bear.

You take control of this bear by donning a virtual reality headset to see through its eyes and control its head.

Adaniya has built software that lets you strap on an Oculus Rift headset and peer out through the bear’s eyes. You can talk to people near the bear through its speaker and hear them through its microphone, allowing for a two-way conversation with you in the role of a stuffed animal.

When you turn your head, so does the bear, thanks to a motion sensor attached to the headset’s strap. An Xbox controller moves the bear’s arms. “We’re broadcasting human body language,” Adaniya says.
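Adawarp hasn’t released its control code, but the loop Adaniya describes is simple enough to sketch. The toy version below is a hypothetical illustration in Python, not the company’s software: it assumes a pyserial-connected servo board driving the bear’s neck (the port, baud rate, and one-byte-per-axis protocol are invented for illustration) and fakes the headset’s yaw and pitch with a slow sweep where the real system would read the Rift’s tracker.

```python
import math
import time

import serial  # pyserial; a hobby servo board stands in for the bear's neck

# Hypothetical serial port and baud rate for the servo controller.
bear = serial.Serial("/dev/ttyUSB0", 115200)

def to_servo(angle_deg: float) -> int:
    """Clamp a head angle in [-90, 90] degrees onto a servo's 0-180 range."""
    return max(0, min(180, int(angle_deg) + 90))

def head_orientation(t: float) -> tuple[float, float]:
    """Stand-in for the headset tracker: sweep yaw, hold pitch level.
    A real version would query the headset SDK for yaw and pitch."""
    return 45 * math.sin(t), 0.0

start = time.time()
while True:
    yaw, pitch = head_orientation(time.time() - start)
    bear.write(bytes([to_servo(yaw), to_servo(pitch)]))  # one byte per axis
    time.sleep(1 / 30)  # ~30 Hz is smooth enough for a plush robot's neck
```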

Adaniya thinks children and some adults will be interested in taking on the persona of a stuffed animal, whether bear, cat, or dog, for fun, or as an unusual way to stay in touch with distant friends or relatives. His company, Adawarp, just went through River, a startup incubator focused on virtual reality that invests at least $200,000 in each company in its program. Adaniya’s project began after he broke up with a long-distance girlfriend and thought about what could have helped them communicate.

I tried out Adaniya’s creation in a tiny conference room. When I pulled on the Rift headset I was transported across the table into the body of the bear. Fluffed-out fur rimmed the edge of my vision as I peeked out at Adaniya and, to his left, my own body.

The uncanny sense of being outside myself faded surprisingly quickly. My former body now felt like just a passive observer in the room. Being able to turn my robotic head made it possible to maintain a semblance of eye contact, which gave the exchange the feel of a normal conversation. Adaniya helped by focusing his attention on the bear.

Cameras in the bear’s eyes are used to feed stereo imagery to a person wearing a virtual reality headset.
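The article doesn’t spell out how those frames reach the headset, but the basic stereo pipeline is easy to illustrate. The sketch below is a guess at the general shape rather than Adawarp’s code: it uses OpenCV, assumes the two eye cameras enumerate as USB devices 0 and 1 at matching resolutions, and displays the side-by-side frame in a window where the real system would hand it to the Rift.

```python
import cv2  # OpenCV

left_eye = cv2.VideoCapture(0)   # assumed device index for the left camera
right_eye = cv2.VideoCapture(1)  # assumed device index for the right camera

while True:
    ok_left, left = left_eye.read()
    ok_right, right = right_eye.read()
    if not (ok_left and ok_right):
        break  # a camera dropped out
    # A stereo headset expects a side-by-side frame: the left half is shown
    # to the left eye and the right half to the right eye.
    cv2.imshow("bear-eye view", cv2.hconcat([left, right]))
    if cv2.waitKey(1) == 27:  # Esc to quit
        break

left_eye.release()
right_eye.release()
cv2.destroyAllWindows()
```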

“The impression of the word ‘robot’ is scary and big,” Adaniya told me. “I don’t want to feel like this is a robot. I want to feel this is an animal, or a new spirit.” My joystick-controlled arms looked a little bit robotic, but Adawarp plans to eventually capture arm movement directly with a motion sensor.

By the end of 2016, Adaniya aims to ship a version of his robot with a plain plastic body priced at $200 or less, meant to encourage hardware developers to build their own bodies for it. He is also working on making it possible to control the robot without a virtual reality headset by panning a mobile phone around. The consumer version will come later and bring back the fur. Adaniya thinks versions that look like cats, dogs, and bears could all be popular.

Cindy Bethel, director of the Social, Therapeutic and Robotic Systems Laboratory at Mississippi State University, says that Adaniya’s idea has some potential but will also face challenges. Children are likely to prefer seeing the face of a parent via video chat to interacting with them in bear form, she says. But the ability to touch or hug a tangible figure could be beneficial, says Bethel.

Having a person take the form of a robot might be a boon in situations where a child needs to talk with an unfamiliar adult, such as a therapist or tutor, says Bethel. A small, cuddly bear could feel less threatening and be easier to open up to than a stranger.

However, Bethel also notes that having a robot stand in for a person risks the “uncanny valley” effect, in which an artificial creation tries and fails to be humanlike, provoking revulsion instead. “If for some reason it doesn’t move naturally, that could be kind of creepy to people,” she says.
