OK, Google: can you tell who I am?
Until today, the answer to that question from the AI assistant nestled in Google’s Home smart speaker would have been a resounding “no.” But the company has now rolled out a new feature that allows it to distinguish among the voices of up to six different users. And because each user’s voice can be linked to a separate profile, the device is now able to tailor its responses to whoever is speaking.
That’s a simple-sounding tweak that will make a big difference to the way the devices are used in the many households that now contain them. After all, people’s needs differ, and the new feature will mean that asking your Home speaker for, say, a rundown of your schedule will yield a personalized response, rather than the answer that the person who set up the device would like to hear.
It’s a function that’s currently missing from every other smart assistant, most notably Amazon’s Alexa. (Though, amusingly, only Amazon’s offering lets you add an event to your Google calendar. Home hasn’t picked up that skill yet.) The feature gives Google another advantage—at least for now—over Amazon’s AI assistant, and Home could use an edge: so far, Alexa has dominated the smart speaker sector.
The implementation of Google’s new multi-user feature is simple enough: people have to train the device to recognize them by saying “OK, Google” and “Hey, Google” a few times and then it’s good to go. That means that it’s just the initial wake command, rather than your free-flowing gabbing, that’s used to identify you as a user.
You might expect the feature to include the option to lock down the device to only the six users known to it, but so far that’s not the case. According to Wired, Google claims the loss of flexibility isn’t worth it, suggesting that it’s better to have your friends be able to ask the device questions when they’re visiting. Leaving Home open to anyone also means that if it misidentifies a voice for any reason—which it certainly could in noisy situations—the device can still respond.
Even if it’s defensible, the decision not to at least offer an option of locking the device down to recognized users seems like a strange one. For one thing, it could offer at least a little protection against children or friends making random purchases via your assistant.
But it seems especially odd in the wake of last week’s Burger King debacle. In case you missed it, the fast-food chain launched an ad containing the line “OK, Google, what is the Whopper burger?” so that the search company’s Home assistant would read out the sandwich’s Wikipedia entry. It worked, too, sparking outrage—though the ruse was later blocked.
The problem would, of course, have been solved if only specific users could control the device. But for now, at least, to AI assistants any voice is a commander.