
Sponsored

Artificial intelligence

The battle of algorithms: Uncovering offensive AI

Learn about current and emerging applications of offensive AI, defensive AI, and the ongoing battle of algorithms between the two.

In association with Darktrace

As machine-learning applications move into the mainstream, a new era of cyber threat is emerging—one that uses offensive artificial intelligence (AI) to supercharge attack campaigns. Offensive AI allows attackers to automate reconnaissance, craft tailored impersonation attacks, and even launch attacks that self-propagate to avoid detection. Security teams can prepare by turning to defensive AI to fight back—using autonomous cyber defense that learns on the job to detect and respond to even the most subtle indicators of an attack, no matter where it appears.

Marcy Rizzo, of MIT Technology Review, interviews Darktrace's Marcus Fowler and Max Heinemeyer in January 2021.

MIT Technology Review recently sat down with experts from Darktrace—Marcus Fowler, director of strategic threat, and Max Heinemeyer, director of threat hunting—to discuss the current and emerging applications of offensive AI, defensive AI, and the ongoing battle of algorithms between the two.

Sign up to watch the webcast.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

