
The US Air Force is enlisting MIT to help sharpen its AI skills

The Air Force Artificial Intelligence Accelerator aims to develop technologies that serve the “public good,” not weapons development.
An F-35A Lightning II aircraft. Credit: United States Air Force

The US Air Force is one of the most advanced fighting forces in the world—and yet it’s worried about losing that edge in the age of artificial intelligence.

To address that, today it is announcing a collaboration with MIT that will focus on developing and harnessing AI. The Air Force Artificial Intelligence Accelerator will focus on uses of AI “for the public good,” meaning applications relevant to the humanitarian work done by the Air Force and not directly connected to the development of weapons. That caveat might be key in preventing a backlash from students and the community—although that’s far from certain.

In an interview with MIT Technology Review, US Air Force Secretary Heather Wilson said that artificial intelligence would be a key component of the Air Force’s Science and Technology Strategy. That strategic plan, released in April, cites the need to harness emerging technologies more quickly and effectively.

The Air Force already funds a huge amount of research and development; it has contracts or agreements with over 10,000 different entities. It spends $2.5 billion a year on basic and early-stage research and $25 billion on research and development of applied technologies. Under the new relationship, the Air Force will contribute $15 million a year toward cooperative research with MIT. Eleven Air Force members will work alongside MIT professors and students on a range of projects. The US Department of Defense already operates a research center at MIT, Lincoln Laboratory.

It’s not yet clear how this collaboration will be received, especially since the military’s previous efforts to collaborate with industry have proved problematic. Most notably, Google’s participation in a Pentagon program known as Project Maven sparked a backlash among the company’s employees. The project used Google’s cloud AI tools to identify objects in aerial imagery, and some employees worried that the work could eventually lead to AI being used to target weapons. As a result, Google chose not to renew its Pentagon contract and issued a new set of AI principles, which preclude working on technology that could be weaponized.

Daniela Rus, director of MIT’s Computer Science and Artificial Intelligence Lab (CSAIL), where the accelerator will be embedded, argues that the types of problems it will tackle are of broad interest to academic researchers. “These are extraordinarily important problems,” Rus says. “In disaster relief, you are in an environment where you cannot anticipate that the maps work, that things are where they are supposed to be. All of these applications have a great deal of uncertainty and complexity.”

Maria Zuber, vice president of research at MIT, says the collaboration will only involve those who have shown an interest in helping the Air Force with its objectives. “No one will be forced to collaborate,” she says. Zuber is also keen to reassure anyone worried that the collaboration could involve development of weapons technology. “MIT does not do weapons research,” she says.

The accelerator reflects the potential for the algorithms and techniques emerging from academic and industry research labs to transform the military. Machine learning could optimize many mundane things, from payroll to logistics. It will also be vital to a critical aspect of missions: gathering intelligence and extracting useful insights. This is far broader than the use of autonomy in weapons systems, a topic that often comes up when people think about the military applications of AI.

The move is part of an effort to improve the AI capabilities of the Air Force. In February, the Pentagon posted an unclassified document (PDF) outlining its plan for embracing artificial intelligence, making it clear that the technology is crucial to the military’s preeminence. Military adoption of AI in other countries, especially Russia and China, is also a key driver, says Wilson.

Key to the overall strategy is the Joint Artificial Intelligence Center, or JAIC, which will serve as a source of AI expertise across the Defense Department. Colonel Jason Brown, who leads the humanitarian mission at JAIC, says the new MIT accelerator is part of an effort to completely rethink how the Air Force does things.

“The one thing that we know is we need to transform ourselves through AI in total,” Brown says. “We have a lot to learn.”

