Artificial intelligence

A food delivery robot burst into flames—and now people have made a candlelight vigil for it

December 17, 2018

A food delivery robot caught fire on the campus of UC Berkeley on Friday, prompting an outpouring of grief online and leading students to set up a candlelit memorial to the “KiwiBot.”

What happened? The “KiwiBot,” one of over 100 robots that deliver food throughout Berkeley, suddenly burst into flames. The manufacturer blamed human error, saying someone had inserted a defective battery that had caused thermal runaway (the same issue that made Samsung’s Galaxy Note 7 phones catch fire in 2016). It promised that new software will monitor the batteries inside its bots, to avoid a repeat.

The response: Students took to the “Overheard at UC Berkeley” Facebook page to pay tribute, describing the robot as a “hero” and a “legend,” according to the student paper. The video of the smoldering bot had nearly 100 comments just an hour after it was uploaded. Some students called for a moment of silence, and others even went as far as to create a candlelit memorial.

Umm … It might sound ridiculous (finals week can get to the best of us), but it’s just another example of humans empathizing with robots. We’ve held funerals for them. There was genuine outrage when a Canadian hitchhiking robot called hitchBOT was decapitated in Philadelphia. People even claimed a robot had “committed suicide” last year when it fell into a fountain (in fact, its algorithm simply failed to detect an uneven surface). People were upset when Google released a video showing researchers kicking its robot dog. Some studies have found that people are reluctant to “hurt” robots, and that seeing one in “pain” affects us in much the same way as seeing a human being suffer.

