Artificial intelligence

Never mind killer robots—here are six real AI dangers to watch out for in 2019

Last year a string of controversies revealed a darker (and dumber) side to artificial intelligence.

Once it was fashionable to fret about the prospect of super-intelligent machines taking over the world. The past year showed that AI may cause all sorts of hazards long before that happens.

The latest AI methods excel at perceptual tasks such as classifying images and transcribing speech, but the hype and excitement over these skills have disguised how far we really are from building machines as clever as we are. Six controversies from 2018 stand out as warnings that even the smartest AI algorithms can misbehave, or that carelessly applying them can have dire consequences.

1. Self-crashing cars

After a fatal accident involving one of Uber’s self-driving cars in March, investigators found that the company’s technology had failed catastrophically, in a way that could easily have been prevented.

Carmakers like Ford and General Motors, newcomers like Uber, and a horde of startups are hurrying to commercialize a technology that, despite its immaturity, has already seen billions of dollars in investment. Waymo, a subsidiary of Alphabet, has made the most progress; it rolled out the first fully autonomous taxi service in Arizona last year. But even Waymo’s technology is limited, and autonomous cars cannot drive everywhere in all conditions.

What to watch for in 2019: Regulators in the US and elsewhere have so far taken a hands-off approach for fear of stifling innovation. The US National Highway Traffic Safety Administration has even signaled that existing safety rules may be relaxed. But pedestrians and human drivers haven’t signed up to be guinea pigs. Another serious accident in 2019 might shift the regulators’ attitudes.

2. Political manipulation bots

In March, news broke that Cambridge Analytica, a political consulting company, had exploited Facebook’s data sharing practices to influence the 2016 US presidential election. The resulting uproar showed how the algorithms that decide what news and information to surface on social media can be gamed to amplify misinformation, undermine healthy debate, and isolate citizens with different views from one another.

During a congressional hearing, Facebook CEO Mark Zuckerberg promised that AI itself could be trained to spot and block malicious content, even though AI is still far from being able to understand the meaning of text, images, or video.

What to watch for in 2019: Zuckerberg’s promise will be tested in elections held in two of Africa’s biggest countries: South Africa and Nigeria. The long run-up to the 2020 US election has also begun, and it could inspire new kinds of misinformation technology powered by AI, including malicious chatbots. 

3. Algorithms for peace

Last year, an AI peace movement took shape when Google employees learned that their employer was supplying technology to the US Air Force for classifying drone imagery. The workers feared this could be a fateful step towards supplying technology for automating deadly drone strikes. In response, the company abandoned Project Maven, as it was called, and created an AI code of ethics.

Academics and industry heavyweights have backed a campaign to ban the use of autonomous weapons. Military use of AI is only gaining momentum, however, and other companies, like Microsoft and Amazon, have shown no reservations about helping out.

What to watch for in 2019: Although Pentagon spending on AI projects is increasing, activists hope a preemptive treaty banning autonomous weapons will emerge from a series of UN meetings slated for this year.

4. A surveillance face-off

AI’s superhuman ability to identify faces has led countries to deploy surveillance technology at a remarkable rate. Face recognition also lets you unlock your phone and automatically tags photos for you on social media.

Civil liberties groups warn of a dystopian future. The technology is a formidable way to invade people’s privacy, and biases in training data make it likely to automate discrimination.

In many countries, China especially, face recognition is being widely used for policing and government surveillance. Amazon is selling the technology to US immigration and law enforcement agencies.

What to watch for in 2019: Face recognition will spread to vehicles and webcams, and it will be used to track your emotions as well as your identity. But we may also see some preliminary regulation of it this year.

5. Fake it till you break it

A proliferation of “deepfake” videos last year showed how easy it is becoming to make convincing fake clips using AI. This means fake celebrity porn, lots of weird movie mashups, and, potentially, virulent political smear campaigns.

Generative adversarial networks (GANs), which involve two dueling neural networks, can conjure extraordinarily realistic but completely made-up images and video. Nvidia recently showed how GANs can generate photorealistic faces of whatever race, gender, and age you want.
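For the curious, here is a minimal sketch of that "dueling networks" idea in PyTorch. It is a toy illustration under simplifying assumptions, not Nvidia's system: a generator learns to mimic samples from a simple 2-D Gaussian while a discriminator learns to tell real samples from generated ones.

    import torch
    import torch.nn as nn

    # Generator: maps 8-D random noise to a fake 2-D "sample".
    G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
    # Discriminator: maps a 2-D sample to a real-vs-fake logit.
    D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()

    for step in range(2000):
        # "Real" data: a Gaussian blob centered at (2, -1).
        real = torch.randn(64, 2) * 0.5 + torch.tensor([2.0, -1.0])
        fake = G(torch.randn(64, 8))

        # Discriminator step: label real samples 1 and generated samples 0.
        d_loss = loss_fn(D(real), torch.ones(64, 1)) + \
                 loss_fn(D(fake.detach()), torch.zeros(64, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator step: try to make the discriminator call fakes real.
        g_loss = loss_fn(D(fake), torch.ones(64, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

Image-generating GANs work the same way in principle, just with much larger convolutional networks, far more data, and far more compute.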

What to watch for in 2019: As deepfakes improve, people will probably start being duped by them this year. DARPA will test new methods for detecting deepfakes. But since this also relies on AI, it’ll be a game of cat and mouse.

6. Algorithmic discrimination

Bias was discovered in numerous commercial tools last year. Vision algorithms trained on unbalanced data sets failed to recognize women or people of color; hiring programs fed historical data were shown to perpetuate discrimination that already exists.

Tied to the issue of bias, and harder to fix, is the lack of diversity across the AI field itself. Women occupy, at most, 30% of industry jobs and fewer than 25% of teaching roles at top universities. There are comparatively few Black and Latino researchers as well.

What to watch for in 2019: We'll see methods for detecting and mitigating bias, and algorithms that can produce unbiased results from biased data. The International Conference on Learning Representations, a major AI conference, will be held in Ethiopia in 2020, partly because African scientists researching problems of bias can have trouble getting visas to travel to other regions. Other events could also move.
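As one concrete example of what "detecting bias" can mean in practice, here is a hedged sketch, using made-up data, of demographic parity difference, a simple fairness metric that compares a model's rate of favorable decisions across demographic groups:

    import numpy as np

    # Hypothetical model decisions (1 = favorable outcome, e.g. "interview").
    preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])
    # Hypothetical protected-group membership for each decision.
    group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

    rate_a = preds[group == "a"].mean()  # 0.75
    rate_b = preds[group == "b"].mean()  # 0.25
    # A large gap suggests the model treats the two groups differently.
    print(f"demographic parity difference: {abs(rate_a - rate_b):.2f}")  # 0.50

Real audits compute several such metrics at once, since satisfying one fairness criterion can violate another.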
