
An innovation war: Cybersecurity vs. cybercrime

IT security tools are becoming increasingly sophisticated thanks to artificial intelligence, but advances in the cybercriminal world are close behind.

Produced in association with Enterprise.nxt, a digital publication from Hewlett Packard Enterprise


EBSCO Industries started using a cybersecurity tool that uses artificial intelligence (AI) to hunt down and help eliminate breaches. Soon after, security analysts at the information services company found failed login attempts the product had ignored. Thinking the unsuccessful sign-ons might signal a cyberattack, the security team launched a manual investigation.

“It was an employee who put his password in wrong,” says John W. Graham, global chief information security officer at EBSCO, a $2.8 billion conglomerate. It took the team two hours to research the issue; they won’t waste time on that again. Instead, they’ll trust the tool.

Graham’s experience is typical of how AI technology buys back security analysts’ time and resources. Some 61% of corporations cannot detect breaches without AI-driven cybersecurity technology, according to a study from Capgemini. But for every cybersecurity advance that puts organizations ahead, new cybercrime enhancements set them back again.

Cybercrime tools that incorporate AI are outstripping their cybersecurity counterparts: today’s malware can pinpoint its targets among millions, generate convincing spam, and infect computer networks without being detected. All this raises a tough question: can cybersecurity innovation keep pace with cybercrime? It can, if companies apply the same originality and invention that sustains the war, turning not only to technology but also to communication with government agencies and to new ways of thinking about cyber defense.

Download the full report.

