Google’s New Software Could Bring Slick AR to Android Phones

August 29, 2017

Do not adjust your smartphone: there isn’t really a cartoon lion in front of you; it’s just an impressive augmented-reality trick developed using new tools from Google. The search company has long attempted to popularize AR on smartphones. On Tuesday, the company revealed its latest tactic: a set of software tools called ARCore that it hopes developers will use to make AR apps for Android phones.

In a blog post, Dave Burke, vice president of Android engineering, writes that ARCore can track a phone’s position and the direction it’s facing using the camera and sensors (helpful for keeping virtual objects pinned in place), detect horizontal surfaces (on which an app might place, say, a virtual cup of coffee), and estimate real-world lighting so developers can render virtual objects as realistically as possible.

An early version of ARCore is being released Tuesday, and in his post Burke says it will initially work with the company’s Pixel smartphone and Samsung’s Galaxy S8, as long as they’re running the Nougat version of Android or newer. (He writes that this makes ARCore capable of running on “millions” of devices right off the bat, though that’s still only a small percentage of the more than two billion devices running Android.)

Google’s ARCore follows similar work by Apple and Facebook, which both released developer tools earlier this year to try to make AR more popular among their users.

There have been smartphone augmented reality apps available for Android and iOS for years, but none of them work all that well or look that good: virtual images tend to float awkwardly in space, rather than fitting in with real-world surroundings, and the software doesn’t cope well with things like changing lighting conditions. Even Pokémon Go, a smash hit when it was released in the summer of 2016, doesn’t do a great job of mixing virtual creatures with reality as you see it through your smartphone screen.

However, a handful of short videos and GIFs that Google made available to show off ARCore in action look impressive. In one clip, a life-size, solid-looking cartoon lion stands in a lobby, facing a real dog, with appropriate-looking shadows on the tile floor moving as the lion shifts. In another, a life-size scarecrow stands on a sidewalk in front of a very real taco truck, pondering the menu, fitting in quite well with the people standing behind him.

Burke writes that ARCore builds on mobile augmented-reality technologies developed for Tango, a project Google first showed off in 2014 that uses a combination of sensors and computer vision to help phones figure out precisely where they are in 3-D space, even in the absence of GPS. But Tango-capable devices require specialized hardware, like a depth sensor, and few phones support it, so it hasn’t become widely used.

ARCore is meant to work without such hardware additions, which means it could be added to apps that a lot more people will be able to use in the near future. Burke says the goal is to have ARCore working on 100 million devices by the end of the developer preview, which Google anticipates will be sometime this winter.
