I recently got my own personal assistant, called Amy. My new helper is amazingly attentive and diligent, but also a bit strange. For one thing, she seems completely obsessed with organizing meetings and pretty much refuses to talk about anything else.
Amy isn’t a real person but a software agent that exists somewhere in the cloud and communicates with my contacts and me by e-mail, helping set up and reschedule meetings and other appointments. The software is being developed by a company called X.ai, which hopes to create something that seems virtually indistinguishable from a real human, juggling calendars with care, tact, and consummate attention to detail.
When I first contacted Dennis Mortensen, the founder and CEO of X.ai, he asked Amy to set up a phone call. It took a couple of e-mails for me to realize that Amy wasn’t his real assistant but the technology his company had been developing for the past year.
“There are about 10 billion meetings in the U.S. every year,” Mortensen told me when I visited his company’s offices in downtown New York recently. “That’s the number I’m attacking. I simply can’t play out a future in 15 years when meetings are set up in exactly the same way. That must disappear.”
The company’s approach is similar to the one used by Apple’s Siri or Microsoft’s Cortana, but it’s capable of parsing more complex language within a specific area. “From a technology point of view there might be an opening to create these ‘vertical’ AIs that certainly aren’t replacements for humans, but they are very good workers that can do one thing,” Mortensen says.
The other big difference, of course, is that when Siri misunderstands you, the consequences aren’t as bad as sending an important new client to a meeting place at the wrong time.
Still, it’s an interesting approach to developing a useful form of artificial intelligence. It remains incredibly hard to enable a machine to converse with people in a convincing manner; the meaning of a statement can change dramatically depending on subtle shifts in grammar, previous information, and contextual understanding. So X.ai has chosen to narrow the scope of Amy’s conversation to scheduling meetings and nothing more, hoping that this will make the challenge more manageable, although Mortensen admits that he’s not sure the problem is really solvable. This narrow focus can certainly make Amy seem a little, well, single-minded.
I recently started using Amy, which is currently free, to help schedule some of my own meetings, and it works quite well for basic calendar logistics. I can include Amy on an e-mail to someone and ask that she help figure out when might be the best time to meet up, or I can send a quick e-mail asking Amy to change the time or location of a meeting. I’ve tried all sorts of conversational styles, and Amy isn’t fazed. And if the software isn’t sure about the location of a venue or something else, it will politely ask for clarification.
I tried asking Amy to remind a colleague of mine, Mike, to bring his laptop to a meeting we’d already set up. After a few minutes, she politely told me to go ask him myself. Amy’s reply read:
It doesn’t look like this message is related to scheduling a meeting. I try to only send messages that relate to scheduling your meetings so your guests feel the urgency and importance when an email comes from me. I think your message would have a stronger impact if you sent it directly to the guest.
Fair enough, but is such a basic assistant all that useful? Setting up meetings is clearly a big headache for certain people, but not for everyone. X.ai says it plans to include more sophisticated capabilities in the future, but the inherent difficulty of mastering conversation through software may work against that.
Chris Dyer, an assistant professor of computer science specializing in natural language processing at Carnegie Mellon University, says researchers have tried for some time to build software agents capable of conversing about a narrow subject. “There’s a general feeling in the field that by finding well-circumscribed domains, with rules that can be captured fairly effectively, we might make some real progress,” he told me.
But Dyer noted that narrowing the topic of conversation too much can significantly lower an AI’s apparent IQ. “The risk is that it’s hard to find problems in natural language that really are simple enough to make progress on but not too simple, such that they’re kind of ‘toy’ problems,” he said. “And I think there really is a feeling that we haven’t found those quite yet.”
Although I’m far from convinced that Amy will ever be an indispensable tool, I plan to keep using the software for a while longer. Perhaps the remaining limitations of such software agents could at least provide a nice excuse for my own tardiness: if I ever stand you up, you’ll know whom to blame.