Your Future Toyota May Know Where You’re Going Before You’ve Told It

Toyota’s new subsidiary will manage the troves of data collected from its increasingly connected cars.
April 4, 2016

Whether the car of the future runs on electricity, hydrogen, or old-fashioned gasoline, it will emit billions of bytes of data. And the battle to control and exploit that data is just getting started.

On Monday, the Japanese carmaker Toyota announced a new subsidiary, called Toyota Connected, that will manage and mine the data collected from its vehicles, and the company said it would collaborate with Microsoft on the venture. The data collected and delivered might include mapping data, engine statistics, and records of driver behavior. Most immediately, this could mean updating vehicle features or patching bugs remotely. But the goal is also to develop new kinds of interfaces that predict a driver’s intention.

Over the past decade, cars have become vastly more computerized and connected. Tesla epitomizes this trend, pushing software updates over 3G to its customers' cars to refresh the interface, add new apps, and even tweak the performance of the engine or brakes.

“Consumers’ preferences are being shaped by consumer technology,” Zach Hicks, CEO of Toyota Connected, said during a briefing to announce the new company. “The experience they have with mobile devices is what they want in their vehicle, and it’s our job to live up to that expectation.”

Toyota did not disclose precisely what data it plans to collect, or how it will collect it, but the company will likely fit its models with more connectivity and with computer systems that allow for more comprehensive software control.

The technology could introduce new security risks and raise drivers' ire over the collection and use of their personal data. Hicks said that data would never be collected without a driver's permission. He also said that the new company would aim to launch its first services later this year.

In its research lab, Toyota has shown that tracking driver location and driving behavior—and combining that data with other sources of information—can predict where someone is headed. “When people drive outside of their normal driving patterns,” Hicks said, “we can guess with 80 percent accuracy where they’re likely to go, based on their likes or dislikes.”

This could mean, for example, a car that realizes when its driver is headed to a football game, and then offers to automatically map out the route and prepay parking. “These are the types of services we’ve already got working in our R&D lab,” Hicks said.
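Toyota has not described how its prediction system works, but the idea of matching a driver's current context against past trips can be illustrated with a toy sketch. The minimal Python example below uses a purely hypothetical trip log and a simple frequency count per (origin, day, time) context; a real system would presumably combine far more signals, such as calendar data and the "likes or dislikes" Hicks mentions.

```python
# Illustrative only: a toy frequency-based destination predictor.
# The trip records, feature choices, and threshold are hypothetical,
# not Toyota's actual method.
from collections import Counter, defaultdict

# Each trip record: (origin_zone, weekday, hour_bucket, destination)
trip_history = [
    ("home", "sat", "morning", "stadium"),
    ("home", "sat", "morning", "stadium"),
    ("home", "mon", "morning", "office"),
    ("home", "mon", "evening", "gym"),
    ("office", "mon", "evening", "home"),
]

# Count destinations seen for each (origin, weekday, hour) context.
counts = defaultdict(Counter)
for origin, weekday, hour, dest in trip_history:
    counts[(origin, weekday, hour)][dest] += 1

def predict_destination(origin, weekday, hour, min_confidence=0.6):
    """Return (destination, confidence) if the most common destination
    for this context clears the confidence threshold, else None."""
    seen = counts.get((origin, weekday, hour))
    if not seen:
        return None
    dest, n = seen.most_common(1)[0]
    confidence = n / sum(seen.values())
    return (dest, confidence) if confidence >= min_confidence else None

# A Saturday-morning departure from home suggests the stadium.
print(predict_destination("home", "sat", "morning"))
```

With enough history, a predictor of this general shape could trigger the kind of proactive offers described above, such as mapping the route and prepaying parking once the car is reasonably confident about the destination.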

As cars become more data-guzzling, Toyota and other carmakers will face increased competition. Companies including Google and Apple have spied an opportunity to tap into the data coming from cars, and they are already making forays into the vehicle interface using dashboard systems that mirror a customized iPhone or Android device.

Via Android Auto, for instance, Google Now will already try to predict the destination for a journey, based on messages found in Gmail or recent Google searches, and then automatically offer directions (see “Rebooting the Automobile”). Meanwhile cloud computing providers, including Microsoft, are keen to provide the on-demand computer power for services such as high-resolution maps often used for automated driving (see “Tech Companies’ Foray into Public Infrastructure Will Magnify Their Power”).
