OK, Google: can you tell who I am?
Until today, the answer to that question from the AI assistant nestled in Google’s Home smart speaker would have been a resounding “no.” But the company has now rolled out a new feature that allows it to distinguish among the voices of up to six different users. And because each user’s voice can be linked to a separate profile, the speaker is now able to tailor responses to whoever is asking.
That’s a simple-sounding tweak that will make a big difference to the way the devices are used in the many households that now contain them. After all, people’s needs differ, and the new feature will mean that asking your Home speaker for, say, a rundown of your schedule will yield a personalized response, rather than the answer that the person who set up the device would like to hear.
It’s a function that’s currently lacking in any other smart assistant, most notably Amazon’s Alexa. (Though, amusingly, only Amazon’s offering lets you add an event to your Google calendar. Home hasn’t picked up that skill yet.) It will also provide Google with another advantage—at least for now—over Amazon’s AI assistant. Home could use an edge: so far, Alexa has dominated the smart speaker sector.
The implementation of Google’s new multi-user feature is simple enough: people have to train the device to recognize them by saying “OK, Google” and “Hey, Google” a few times and then it’s good to go. That means that it’s just the initial wake command, rather than your free-flowing gabbing, that’s used to identify you as a user.
You might expect the feature to include the option to lock down the device to only the six users known to it, but so far that’s not the case. According to Wired, Google claims the loss of flexibility isn’t worth it, suggesting that it’s better to have your friends be able to ask the device questions when they’re visiting. Leaving Home open to anyone also means that if it misidentifies a voice for any reason—which it certainly could in noisy situations—the device can still respond.
Even if it’s defensible, the decision not to offer even the option of locking the device down to recognized users seems like a strange one. For one thing, it could provide some protection against children or friends making random purchases via your assistant.
But it seems especially odd in the wake of last week’s Burger King debacle. In case you missed it, the fast-food chain launched an ad containing the line “OK, Google, what is the Whopper burger?” so that the search company’s Home assistant would read out the sandwich’s Wikipedia entry. It worked, too, sparking outrage—though the ruse was later blocked.
The problem would, of course, have been solved if only specific users could control the device. But for now, at least, to AI assistants any voice is a commander.