Rising to the AI challenge

MIT must prepare students for a world in which computation is as fundamental as math—and make sure AI is developed responsibly, for the good of society.

To make sure that the US retains its competitiveness in fields like AI—and that this technology is developed and used in ways that serve our whole society—I’m convinced we must push for bold action from industry, government, and certainly higher education.

The Institute’s answer is the MIT Stephen A. Schwarzman College of Computing; we are reshaping ourselves to prepare our students to shape the future. This experiment starts on our campus, yet its potential impact extends far beyond, touching not only MIT students and faculty but higher education, the nation, and the world.

Our students understand that computation is now as fundamental as math. Many want and need to be “bilingual”—as fluent in computing as they may be in biology, urban planning, or economics. At last, we’ve rearranged the Institute to reflect that wisdom and accelerate that reality.

For our faculty, the College will lead to fascinating conversations all over campus about new challenges, tools, and possibilities. And the faculty’s excitement signals wider implications for higher education. History shows that what MIT does inspires new approaches around the world, from our founding commitment to learning by doing—and the grounding of engineering in science—to UROP, Project Athena, and edX. We should aim high this time, too!

The scale of our commitment to the College also shows the federal government that we are serious about advocating for an intense national focus on, and investment in, artificial intelligence, so we can produce graduates prepared to bring the power of computing to every sector of our society.

In the end, the MIT Schwarzman College could have a great impact on the world—if we get it right. A key test will be how well we succeed in making ethics and societal impact a central focus.

Pushing the limits of new technologies can be so intoxicating that it’s hard to think of how a tool might be misused. But on the road to the algorithmic future, there is no “designated driver” who can keep society safe. We must all stay alert and sober, and actively build the policy guardrails that will keep us, together, out of the ditch.

It’s time to educate a new generation of technologists in the public interest. And I am optimistic that—with the leadership of founding dean Dan Huttenlocher, SM ’84, PhD ’88—the MIT Schwarzman College of Computing will rise to the challenge.
