Amazon is trying to make Alexa more chatty—but it’s very, very difficult
The online retailer held a competition to have graduate students make its personal assistant into a more conversational bot. Nobody won.
Getting to know you: In a contest dubbed Alexa Prize, Amazon challenged 15 teams to build “a socialbot that can converse coherently and engagingly with humans on popular topics for 20 minutes.” Three made it to the final round. The prize? Besides bragging rights, there was a cool $1 million on offer.
But: Building a chatbot is difficult. Machine learning isn’t advanced enough to handle the task on its own, which means much of the software needs to be hand-coded. Even then, no team managed the 20-minute conversations Amazon was looking for. So the contest will be held again.
Why it matters: Companies like Amazon and Google are betting big on voice interfaces becoming as mainstream as search. For Amazon, a friendly voice assistant could help convince customers to make more purchases. But the unclaimed $1 million suggests that goal is still out of reach.