What is AI, exactly? The question may seem basic, but the answer is surprisingly complicated.
In the broadest sense, AI refers to machines that can learn, reason, and act for themselves. They can make their own decisions when faced with new situations, in the same way that humans and animals can.
As it currently stands, the vast majority of the AI advancements and applications you hear about refer to a category of algorithms known as machine learning (see "What is machine learning?"). These algorithms use statistics to find patterns in massive amounts of data. They then use those patterns to make predictions about things like what shows you might like on Netflix, what you’re saying when you speak to Alexa, or whether you have cancer based on your MRI.
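To make that idea concrete, here is a deliberately tiny sketch of "learning a pattern from data, then predicting." It is not how Netflix's recommender actually works — it's a toy nearest-neighbor rule over two made-up viewing features — but it shows the core shape of machine learning: labeled examples in, prediction out, no hand-written rules for the decision itself.

```python
# Toy machine-learning sketch: a 1-nearest-neighbor predictor.
# It guesses whether a viewer will like a show by finding the most
# similar past viewer and copying their verdict. The features
# (weekly hours of sci-fi and comedy watched) are invented for
# illustration.

def nearest_neighbor_predict(training_data, point):
    """Return the label of the training example closest to `point`."""
    def squared_distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    # "Learning" here is just remembering the examples; prediction
    # finds the closest remembered example and reuses its label.
    closest = min(training_data, key=lambda ex: squared_distance(ex[0], point))
    return closest[1]

# Labeled examples: ((sci-fi hours, comedy hours), liked_the_show)
history = [
    ((9, 1), True),   # heavy sci-fi viewers liked the show
    ((8, 2), True),
    ((1, 9), False),  # comedy fans did not
    ((2, 8), False),
]

# A new viewer whose habits resemble the sci-fi fans:
print(nearest_neighbor_predict(history, (7, 3)))  # prints True
```

Real systems use vastly more data, more features, and far more sophisticated models, but the statistical pattern-matching at the heart of it is the same idea.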
Machine learning, and its subset deep learning (basically machine learning on steroids), is incredibly powerful. It is the basis of many major breakthroughs, including facial recognition, hyper-realistic photo and voice synthesis, and AlphaGo, the program that beat the best human player in the complex game of Go. But it is also just a tiny fraction of what AI could be.
The grand idea is to develop something resembling human intelligence, which is often referred to as “artificial general intelligence,” or “AGI.” Some experts believe that machine learning and deep learning will eventually get us to AGI with enough data, but most would agree there are big missing pieces and it’s still a long way off. AI may have mastered Go, but in other ways it is still much dumber than a toddler.
In that sense, AI is also aspirational, and its definition is constantly evolving. What would have been considered AI in the past may not be considered AI today.
Because of this, the boundaries of AI can get really confusing, and the term often gets mangled to include any kind of algorithm or computer program. We can thank Silicon Valley for constantly inflating the capabilities of AI for its own convenience. (Cough, Mark Zuckerberg, cough.)
To clear things up, I drew you this flowchart on the back of an envelope so you can work out whether something is using AI or not.
This originally appeared in our AI newsletter The Algorithm.