
Why business is booming for military AI startups 

The invasion of Ukraine has prompted militaries to update their arsenals—and Silicon Valley stands to capitalize.


Exactly two weeks after Russia invaded Ukraine in February, Alexander Karp, the CEO of data analytics company Palantir, made his pitch to European leaders. With war on their doorstep, Europeans ought to modernize their arsenals with Silicon Valley’s help, he argued in an open letter.

For Europe to “remain strong enough to defeat the threat of foreign occupation,” Karp wrote, countries need to embrace “the relationship between technology and the state, between disruptive companies that seek to dislodge the grip of entrenched contractors and the federal government ministries with funding.”

Militaries are responding to the call. NATO announced on June 30 that it is creating a $1 billion innovation fund that will invest in early-stage startups and venture capital funds developing “priority” technologies such as artificial intelligence, big-data processing, and automation.

Since the war started, the UK has launched a new AI strategy specifically for defense, and Germany has earmarked just under half a billion dollars for research and artificial intelligence within a $100 billion cash injection for its military.

“War is a catalyst for change,” says Kenneth Payne, who leads defense studies research at King’s College London and is the author of the book I, Warbot: The Dawn of Artificially Intelligent Conflict.

The war in Ukraine has added urgency to the drive to push more AI tools onto the battlefield. Those with the most to gain are startups such as Palantir, which are hoping to cash in as militaries race to update their arsenals with the latest technologies. But long-standing ethical concerns over the use of AI in warfare have become more urgent as the technology becomes more and more advanced, while the prospect of restrictions and regulations governing its use looks as remote as ever.

The relationship between tech and the military wasn’t always so amicable. In 2018, following employee protests and outrage, Google pulled out of the Pentagon’s Project Maven, an attempt to build image recognition systems to improve drone strikes. The episode caused heated debate about human rights and the morality of developing AI for autonomous weapons. 

It also led high-profile AI researchers such as Yoshua Bengio, a winner of the Turing Award, and Demis Hassabis, Shane Legg, and Mustafa Suleyman, the founders of leading AI lab DeepMind, to pledge not to work on lethal AI.

But four years later, Silicon Valley is closer to the world’s militaries than ever. And it’s not just big companies, either—startups are finally getting a look-in, says Yll Bajraktari, who was previously executive director of the US National Security Commission on AI (NSCAI) and now works for the Special Competitive Studies Project, a group that lobbies for more adoption of AI across the US.

Why AI

Companies that sell military AI make expansive claims for what their technology can do. They say it can help with everything from the mundane to the lethal: screening résumés, processing data from satellites, and recognizing patterns in data to help soldiers make quicker decisions on the battlefield. Image recognition software can help with identifying targets. Autonomous drones can be used for surveillance or attacks on land, air, or water, or to help soldiers deliver supplies more safely than is possible by land.

These technologies are still in their infancy on the battlefield, and militaries are going through a period of experimentation, says Payne, sometimes without much success. There are countless examples of AI companies’ tendency to make grand promises about technologies that turn out not to work as advertised, and combat zones are perhaps among the most technically challenging areas in which to deploy AI because there is little relevant training data. This could cause autonomous systems to fail in a “complex and unpredictable manner,” argued Arthur Holland Michel, an expert on drones and other surveillance technologies, in a paper for the United Nations Institute for Disarmament Research.

Nevertheless, many militaries are pressing forward. In a vaguely worded press release in 2021, the British army proudly announced it had used AI in a military operation for the first time, to provide information on the surrounding environment and terrain. The US is working with startups to develop autonomous military vehicles. In the future, swarms of hundreds or even thousands of autonomous drones that the US and British militaries are developing could prove to be powerful and lethal weapons. 

Many experts are worried. Meredith Whittaker, a senior advisor on AI at the Federal Trade Commission and a faculty director at the AI Now Institute, says this push is really more about enriching tech companies than improving military operations. 

In a piece for Prospect magazine co-written with Lucy Suchman, a sociology professor at Lancaster University, she argued that AI boosters are stoking Cold War rhetoric and trying to create a narrative that positions Big Tech as “critical national infrastructure,” too big and important to break up or regulate. They warn that AI adoption by the military is being presented as an inevitability rather than what it really is: an active choice that involves ethical complexities and trade-offs. 

AI war chests

With the controversy around Maven receding into the past, the voices calling for more AI in defense have become louder and louder in the last couple of years. 

One of the loudest has been Google’s former CEO Eric Schmidt, who chaired the NSCAI and has called for the US to take a more aggressive approach to adopting military AI.

In a report last year outlining steps the United States should take to be up to speed on AI by 2025, the NSCAI called on the US military to invest $8 billion a year in these technologies or risk falling behind China.

The Chinese military likely spends at least $1.6 billion a year on AI, according to a report by the Georgetown Center for Security and Emerging Technologies, and in the US there is already a significant push underway to reach parity, says Lauren Kahn, a research fellow at the Council on Foreign Relations. The US Department of Defense requested $874 million for artificial intelligence for 2022, although that figure does not reflect the total of the department’s AI investments, it said in a March 2022 report.

It’s not just the US military that’s convinced of the need. European countries, which tend to be more cautious about adopting new technologies, are also spending more money on AI, says Heiko Borchert, co-director of the Defense AI Observatory at the Helmut Schmidt University in Hamburg, Germany. 

The French and the British have identified AI as a key defense technology, and the European Commission, the EU’s executive arm, has earmarked $1 billion to develop new defense technologies. 

Good hoops, bad hoops

Building demand for AI is one thing. Getting militaries to adopt it is entirely another. 

A lot of countries are pushing the AI narrative, but they’re struggling to move from concept to deployment, says Arnaud Guérin, the CEO of Preligens, a French startup that sells AI surveillance. That’s partly because the defense industry in most countries is still usually dominated by a clutch of large contractors, which tend to have more expertise in military hardware than AI software, he says. 

It’s also because clunky military vetting processes move slowly compared with the breakneck speed we’re used to seeing in AI development: military contracts can span decades, but in the fast-paced startup cycle, companies have just a year or so to get off the ground.

Startups and venture capitalists have expressed frustration that the process is moving so slowly. The risk, argues Katherine Boyle, a general partner at venture capital firm Andreessen Horowitz, is that talented engineers will leave in frustration for jobs at Facebook and Google, and startups will go bankrupt waiting for defense contracts. 

“Some of those hoops are totally critical, particularly in this sector where security concerns are very real,” says Mark Warner, who founded FacultyAI, a data analytics company that works with the British military. “But others are not … and in some ways have enshrined the position of incumbents.”

AI companies with military ambitions have to “stay in business for a long time,” says Ngor Luong, a research analyst who has studied AI investment trends at the Georgetown Center for Security and Emerging Technologies. 

Militaries are in a bind, says Kahn: go too fast, and risk deploying dangerous and broken systems, or go too slow and miss out on technological advancement. The US wants to go faster, and the DoD has enlisted the help of Craig Martell, the former AI chief at ride-hailing company Lyft. 

In June 2022, Martell took the helm of the Pentagon’s new Chief Digital and Artificial Intelligence Office, which aims to coordinate the US military’s AI efforts. Martell’s mission, he told Bloomberg, is to change the culture of the department and boost the military’s use of AI despite “bureaucratic inertia.”

He may be pushing at an open door, as AI companies are already starting to snap up lucrative military contracts. In February, Anduril, a five-year-old startup that develops autonomous defense systems such as sophisticated underwater drones, won a $1 billion defense contract with the US. In January, Scale AI, a startup that provides data labeling services for AI, won a $250 million contract with the US Department of Defense.

Beware the hype

Despite the steady march of AI into the field of battle, the ethical concerns that prompted the protests around Project Maven haven’t gone away. 

There have been some efforts to assuage those concerns. Aware it has a trust issue, the US Department of Defense has rolled out “responsible artificial intelligence” guidelines for AI developers, and it has its own ethical guidelines for the use of AI. NATO has an AI strategy that sets out voluntary ethical guidelines for its member nations. 

All these guidelines call on militaries to use AI in a way that is lawful, responsible, reliable, and traceable and seeks to mitigate biases embedded in the algorithms. 

One of their key concepts is that humans must always retain control of AI systems. But as the technology develops, that won’t really be possible, says Payne.  

“The whole point of an autonomous [system] is to allow it to make a decision faster and more accurately than a human could do and at a scale that a human can’t do,” he says. “You’re effectively hamstringing yourself if you say ‘No, we’re going to lawyer each and every decision.’”  

Still, critics say stronger rules are needed. There is a global campaign called Stop Killer Robots that seeks to ban lethal autonomous weapons, such as drone swarms. Activists, high-profile officials such as UN chief António Guterres, and governments such as New Zealand’s argue that autonomous weapons are deeply unethical, because they give machines control over life-and-death decisions and could disproportionately harm marginalized communities through algorithmic biases. 

Swarms of thousands of autonomous drones, for example, could essentially become weapons of mass destruction. Restricting these technologies will be an uphill battle because the idea of a global ban has faced opposition from big military spenders, such as the US, France, and the UK.

Ultimately, the new era of military AI raises a slew of difficult ethical questions that we don’t have answers to yet. 

One of those questions is how automated we want armed forces to be in the first place, says Payne. On one hand, AI systems might reduce casualties by making war more targeted, but on the other, you’re “effectively creating a robot mercenary force to fight on your behalf,” he says. “It distances your society from the consequences of violence.” 
