Don’t Make Siri a “Character,” Apple

Apple is seeking a writer to punch up Siri’s dialogue. Bad idea.
January 18, 2013

It certainly sounds like a cool job: Apple wants to hire a writer/editor “to help us evolve and enrich Siri, our virtual personal assistant” and “evolve Siri as a distinct, recognizable character.” Or maybe they don’t, because the job listing has already disappeared. Let’s hope that, instead of hiring someone, they realized that “punching up” a voice command utility is probably detrimental to its user experience. 

Siri belongs on the right. (from Understanding Comics)

When Siri was first unveiled, it was magical and entertaining. Its cute canned one-liners served a practical purpose: highlighting Siri’s distinguishing feature, namely its ability to “understand” natural language. It was theory of mind as marketing: if something talks like us, we’re quick to assume it understands us. Apple had to show us two things: that Siri was different from other voice command interfaces, and that it worked better than them, too.

We all know what happened next: the backlash. Siri didn’t work as well as its classy banter (and Apple’s TV ads) primed us to expect. Its superficial charm suddenly felt like a con: wouldn’t it be better if the #$*%ing thing just did what I asked? 

Apple is undoubtedly working overtime to improve Siri’s basic functionality and performance. But now that the honeymoon with her is long over, “more personality” is the last thing Siri needs. She’s not new. She’s not cute. She’s not unique. She’s an input method, nothing more. And “more personality” will just get in the way. 

I doubt many iPhone users (or potential iPhone users) fire up Siri just for fun anymore. If they invoke her at all, it’s most likely to do something that she’s actually good at, like setting an alarm or a reminder–anything simple enough to be barked out unambiguously in two seconds or less. (Using iOS’s Reminders app by hand is actually a pretty tedious experience in comparison.) Asking Siri about the weather? Swiping iOS’s Notification Center tray down is faster and simpler. Doing simple conversions, like “what is three feet in centimeters”? Siri may still misunderstand you (I just tried this), and even if she doesn’t, the overstuffed data sheet from Wolfram Alpha that she coughs up is a lot harder to read at a glance than the large-type, boldface result you get by just punching “3ft in cm” into Google.  

In short, if Siri can’t be smarter, she must be faster–and padding out her patter isn’t going to accomplish that. “Micro mobile interactions”, according to mobile interface design guru Luke Wroblewski, are the key to a satisfying user experience on a handheld device. Get in, get out, fast. I usually shut Siri up in mid-sentence as soon as I get visual confirmation that the reminder I asked her to set was parsed correctly. I got what I wanted–I don’t want any more interaction, much less a canned soundbite, no matter how droll. 

The other pitfall is more psychological. The more detail you add to a representation of a human–that is, the more you make it a “distinct, recognizable character”–the less we are able to relate to it emotionally and project ourselves onto it. Scott McCloud eloquently explains this effect with regard to visual media in Understanding Comics (see the image at the top of this post). It may apply to aural representations like Siri, too. Apple probably assumes that giving Siri “more personality” is a logical extension of its successful skeuomorphic visual designs. But in fact, the more “personality” Siri assumes, the less broadly relatable and appealing she may become. So what good is it?

Apple should forget about turning Siri into a poor man’s Jarvis (the urbane, artificially intelligent virtual butler that tends to Tony Stark’s every whim in the Iron Man movies). Anything short of a full, actual, real personality is just useless noise. Siri isn’t an assistant–she’s just a Big Red Button we press by talking. We don’t need her to talk back all that much.
