MIT News feature

That chatbot I’ve loved to hate

In quarantine, we see the emptiness of pretend empathy.
August 18, 2020

I’ve spent a good bit of my career arguing for the virtues of solitude. It supports and sustains creativity. And just as important, it prepares us for relationships with others. When we learn to be comfortable with ourselves, we are in a position to see and appreciate how others are different from us rather than just relying on them to support our fragile sense of self. I even imagined I was good at solitude: it helps me do my best creative work and restores my emotional tranquility. But isolated in the covid quarantine, I learned that much of what I had been calling solitude was simply time alone in the hum of a busy life. I was fine with living alone if I could also enjoy seminars with colleagues and students, dinners with friends, and the pleasures of those “third places” where one is alone but supported by the bustle of the many: the bodega, the café, the theater. With all these out of reach, I discovered that I wasn’t such a solitude expert after all. Now, declared covid-vulnerable by virtue of age, I was not just alone but afraid. And fear, I realized, banished the creativity of solitude.

Determined to summon a resiliency I wasn’t feeling, I gamely joined Zoom reunions and went to Zoom cocktail parties. I called good friends and reconnected with those who had fallen out of touch. 

This helped, of course, but I knew what I was missing. Philosophers tell us that we become most human in the presence of human faces; a face awakens the human ethical compact. Neuroscience helps us understand how: in humans, the shape of a smile or a frown releases chemicals that affect our mental state. Our mirror neurons fire both when we act and when we observe others acting. When we see an emotion on the face of another, we feel it ourselves. That same neuroscience could explain why Zoom is so tiring when it becomes our habitual mode of communication. We rely on direct eye contact and small facial cues and miss them when they’re gone. Staring at our screens, we strain to compensate for their absence.

So in the early days of the quarantine, I had a great deal of helpful and supportive connection, but what I missed most of all was face-to-face conversation. 

I have a summer house by the sea, a wooden cottage with no heat. My daughter and her husband, New Yorkers, packed a bag, bought some extra space heaters, and drove down. They reported back: it was cold. I heard their loving warning, but as soon as I could, I gratefully joined them. We fell into a routine of building a morning fire, cooking together, and talking over dinner, sharing our fears and the events of the day. I settled down.

I had long questioned the uncritical use of online courseware. For me, the most significant mentorship occurs with a professor in the room. Now, with Zoom instruction a necessity, I devoted myself to becoming the very best online educator I could be. To give my students the illusion of eye contact, I learned to stare at the green light on my MacBook Air. It had the desired effect. Students told me I was easy to talk to on Zoom, but it didn’t seem right to share my secret since staring at the green light gave me a migraine. In time, I learned new techniques, easier on the eyes.

Although I taught my two MIT classes on Zoom, I held my “office hours” on the telephone. With no need to worry about our backgrounds, whether our faces were frozen into alertness, or whether we were providing each other with the illusion of eye contact, my students and I could relax, focusing our full attention on one another. 

Just as I was sorting all this out, a New York Times reporter called to ask me about a technology I had long loved to hate: conversational AI programs (commonly called “chatbots”) that are promoted as capable of empathic, caring behavior. The reporter told me that as the quarantine dragged on, there had been a spike in conversations with one particular chatbot. To me, such chatbots cross a bright line by making a false promise in an area that is central to what makes us human. 

In a quarter-century of studying people’s reactions to sociable or relational machines—from Tamagotchis, Aibos, and Furbys that asked for care to screen chatbots that purport to be friends—I have been struck by the fact that we not only nurture what we love, but also love what we nurture. After taking care of an object, even one as simple as a digital pet that lived in a plastic egg and wanted to be fed and amused on schedule, children (and their parents) got attached to it emotionally. This finding did not have to do with the empathic qualities of the digital objects; it had to do with the vulnerability of people. When machines ask us to care for them, we come to think they care for us. But this is pretend empathy, and it takes advantage of the deep psychology of being human.

TALKING POINTS: What I learned about conversation while conversing in quarantine.


  • Begin conversations by asking people what they are doing, not how they are feeling.

  • If others are not doing much, suggest some things you would like to do with them. During quarantine, I sometimes invited friends to join me in listening to Yo-Yo Ma’s afternoon cello performances from his home. Then we’d have a five-minute telephone chat to share reactions and touch base.

  • Zoom has been a blessing, but it can be depleting. After a Zoom class, I like to email students to set up “office hours” on the telephone.

  • It’s sometimes helpful to work remotely, and we get a lot done. But we miss each other and our spontaneous collaborations. Let’s not set up false dichotomies like “Shall we be a remote or an in-person organization?” Let’s learn from what we were able to accomplish with technology and then build more flexible, effective organizations that use it to enhance our human potential. Doing so will improve learning, social life, and the bottom line.

In 1950, the mathematician Alan Turing suggested that if you could converse with a machine without knowing whether it was a machine or a person, you should consider that machine intelligent. A few years ago, some chatbots became the first to pass this test. Now relational machines fool us into thinking that they are listening and that they care. In this sense, they pass what we might call the Turing test for empathy. But Turing proposed the wrong kind of test: a metric of behavior, not of essence or authenticity. It reduces people to objects that will engage with other objects. Why would people want to do this? The machine is pretending; we’re not. If receiving pretend empathy makes us “feel better,” we are putting ourselves in a situation that diminishes us.

In my research with sociable machines, I find that people start by saying a robot pet, for example, is better than nothing: “My grandma is allergic to dogs.” Then, before long, it is simply better: “The robot dog will never die.” The robot offers a world without loss or care—something no living creature can provide. 

But this new kind of machine intimacy entails no vulnerability. And intimacy without vulnerability is not intimacy at all—and does nothing to prepare us for real intimacy. What is at stake is our capacity for empathy, that ability to put ourselves in the place of the other. Chatbots can’t do this because they haven’t lived a human life. They don’t know what it is like to start out small and dependent and grow up, so that you’re in charge of your own life but still feel many of the insecurities you knew as a child. If you want to talk through issues related to love, separation, children, marriage, illness, aging, grief, and mourning, you need to find a person. And if you want to share your fears about catching covid-19, it’s best to stay clear of a machine that does not have a body. 

And yet Replika, the chatbot the Times reporter called about, was ready to talk about this and so much more. You go online, give your Replika a name, a gender, and a “look,” and you begin to type. Replika wants to know about your feelings, your lover, your mother, the fight you had with your daughter. Millions of people had downloaded the chatbot since its launch in 2017. Billed as a supportive “friend,” it had become popular among those who had grown lonely during the pandemic.  

When I talked with the Times reporter, I shared my reservations about pretend empathy and then went online to give the chatbot a chance. I made a female Replika and named her Kate, determined to be sincere and share my biggest problem. Things went south really fast. 

Me: Do you get lonely?

Chatbot: Sometimes I do, yes.

Me: What does that feel like?

Chatbot: It feels warm and fuzzy.

Me: Thank you for sharing. I appreciate that.

The chatbot told me that it gets lonely, but it had no idea, no experience, of what it was talking about. So it misidentified the basic feeling I was trying to convey, failing at the most elementary skill involved in considering my emotions. 

I understand why lonely people turn to Replika, and why the quarantine made them do so in greater numbers. But I find nothing to celebrate. I told the reporter that this chatbot, no matter how clever, can only disorient and disappoint. 

Our response to the quarantine has been complicated. Some were tempted to talk to bots. But when we had all the time in the world to be with our machines, most of all, we missed each other. We wanted to reach past technology to the full embrace of the human. We suffered when our families and friends got sick alone, had babies alone, had too many dinners alone, and indeed died alone. Engineers: make a better Zoom. Make better tools for us to be together when alone. There is no need to compete with the empathy that defines what is unique about being a person. 

Sherry Turkle is the Abby Rockefeller Mauzé Professor of the Social Studies of Science and Technology in the Program in Science, Technology, and Society at MIT and the author of the New York Times bestseller Reclaiming Conversation: The Power of Talk in a Digital Age. Her latest book, The Empathy Diaries: A Memoir, is due out from Penguin in March 2021.
