Toyota Investing $50M with Stanford, MIT for Autonomous-Car Research

The automaker has lured the organizer of the DARPA Robotics Challenge to lead its new AI research effort.
September 4, 2015

Toyota is investing $50 million with Stanford and MIT for autonomous-vehicle research that it says will focus on areas such as learning how to drive from humans, anticipating what people and other vehicles will do on the road, and interacting smoothly with people.

The Japanese automaker said Friday that it’s investing the money over five years, and it will be split evenly between the two universities. The project will be led by Gill Pratt, a roboticist and former program manager at DARPA who had organized the DARPA Robotics Challenge.

Pratt said safety and autonomy—of people, more than cars—are the overall goals of the artificial-intelligence research. Stanford plans to study topics such as decision making, reasoning, sensing, and perception, while MIT researchers will work on things like smart user interfaces and collecting and analyzing data from humans in hopes of figuring out how we drive.

Daniela Rus, director of MIT’s Computer Science and Artificial Intelligence Lab, known as CSAIL, said her group’s priority is “building a car that is never responsible for a collision.”

Toyota reiterated on Friday that even as cars get smarter and more capable, it wants to keep drivers involved in the act of piloting them—a different tack from the one Google is taking with its fully autonomous vehicles that are roaming the streets in Silicon Valley (see “Toyota Unveils an Autonomous Car, But Says It’ll Keep Drivers in Control”).

Kiyotaka Ise, senior managing officer at Toyota and chief officer of the company’s research and development program, said Friday through a translator that he thinks it will “take quite a long time to have a driverless car.” But he also said that the company will continue to pursue the goal of an autonomous vehicle and, along the way, apply technologies developed for cars to help people drive.
