How to hack a smart fridge
Your smart home devices know more about you than you might think—and they’re less secure than you’d hope.
This article is from The Technocrat, MIT Technology Review's weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.
Do you know how many internet-connected devices there are inside your home? I certainly don’t. These days, it could be almost anything: a thermostat, a TV, a lightbulb, an air conditioner, or a refrigerator. But what I do know, thanks to some of the conversations I’ve had over the past few weeks, is just how much data they’re producing, and how many people can access that data if they want to. Hint: it’s a lot.
I’ve been speaking to people who work in a field called IoT forensics, which is essentially about snooping around these devices to find data and, ultimately, clues. Although law enforcement bodies and courts in the US don’t often explicitly refer to data from IoT devices, those devices are becoming an increasingly important part of building cases. That’s because, when they’re present at a crime scene, they hold secrets that might be invisible to the naked eye. Secrets like when someone switched a light off, brewed a pot of coffee, or turned on a TV can be pivotal in an investigation.
Mattia Epifani is one such person. He doesn’t call himself a hacker, but he is someone the police turn to when they need help investigating whether data can be extracted from an item. He’s a digital forensic analyst and instructor at the SANS Institute, and he’s worked with lawyers, police, and private clients around the world.
“I’m like … obsessed. Every time I see a device, I think, How could I extract data from there? I always do it on test devices or under authorization, of course,” says Epifani.
Smartphones and computers are the most common sorts of devices police seize to assist an investigation, but Epifani says evidence of a crime can come from all sorts of places: “It can be a location. It can be a message. It can be a picture. It can be anything. Maybe it can also be the heart rate of a user or how many steps the user took. And all these things are basically stored on electronic devices.”
Take, for example, a Samsung refrigerator. Epifani used data from VTO Labs, a digital forensics lab in the US, to investigate just how much information a smart fridge keeps about its owners.
VTO Labs, which had primed the appliance with test data, reverse-engineered the fridge's data storage system, extracted that data, and posted a copy of its databases publicly on its website for use by researchers. Steve Watson, the lab's CEO, explained that this involves finding all the places where the fridge could store data, both within the unit itself and outside it, in apps or cloud storage. Once that was done, Epifani got to work gaining access to the files and analyzing and organizing the data.
What he found was a treasure trove of personal details. Epifani found information about Bluetooth devices near the fridge, Samsung user account details like email addresses and home Wi-Fi networks, temperature and geolocation data, and hourly statistics on energy usage. The fridge stored data about when a user was playing music through an iHeartRadio app. Epifani could even access photos of the Diet Coke and Snapple on the fridge’s shelves, thanks to the small camera that’s embedded inside it. What’s more, he found that the fridge could hold much more data if a user connected the fridge to other Samsung devices through a centralized personal or shared family account.
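Epifani's specific tooling isn't described here, but much of what smart devices and their companion apps store ends up in ordinary SQLite database files, and a first forensic pass is often just an inventory of what those files contain. Here is a minimal sketch of that step in Python; the file name and table schema in the usage example are hypothetical, not taken from the Samsung fridge:

```python
import sqlite3


def list_tables_and_counts(db_path):
    """Inventory an extracted SQLite database: table names and row counts.

    Many IoT devices and their companion apps keep state in SQLite files,
    so simply enumerating tables is a common first step in an analysis.
    """
    con = sqlite3.connect(db_path)
    try:
        cur = con.execute(
            "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
        )
        inventory = {}
        for (table,) in cur.fetchall():
            # Quote the identifier in case a table has an unusual name.
            (count,) = con.execute(f'SELECT COUNT(*) FROM "{table}"').fetchone()
            inventory[table] = count
        return inventory
    finally:
        con.close()
```

Running this against a copied database (never the original evidence file) gives an at-a-glance map of where the interesting records — energy logs, account details, geolocation entries — are likely to live.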
None of this is necessarily secret or undisclosed to people when they buy this model of refrigerator, but I certainly wouldn’t have expected that if I were under investigation, a police officer—with a warrant, of course—could see my hungry face each time I opened my fridge hunting for cheese. Samsung didn’t reply to our request for comment, but it’s following pretty standard practices within the world of IoT. Many of these sorts of devices access and store similar types of data.
Devices don’t even have to be particularly sophisticated to prove helpful in criminal investigations, according to Watson and Epifani.
Both have worked on devices more discreet than smart fridges. Once, VTO Labs examined a circuit board from an ocean buoy in an effort to find out whether it contained any data about the shipping movements of drug traffickers. Watson says that the circuit board revealed a satellite communications provider and, ultimately, the account number associated with a smuggler.
Just to compound the plentiful security and privacy risks, many IoT devices also run on out-of-date, and thus less secure, operating systems, because users rarely remember to update them. “Can you imagine people updating their fridge? No, they don’t,” says Epifani.
This problem is only going to grow as we stuff our homes with more and more things that connect to the internet. Recently, The Atlantic wrote a great piece about the data that smart TVs collect on their couch-bound watchers. My colleague Eileen Guo showed how Roomba vacuums can take invasive pictures, in an investigation about how data was collected on people who were testing the products.
Watson is not especially worried about the government or the tech companies spying on you through your thermostat, per se. He’s more worried about all the ways your data is being sold and accumulated by data brokers.
“That’s where the risks are that people don’t understand: if my bed tracks my sleep and tracks my heart rate, and that company is selling off this information to an insurance company that realizes you have a near cardiac event every time you go to sleep, or that you have sleep apnea or whatever,” he says.
“The more technology encroaches into our lives in every facet … we lose the ability to have any measure of control over where it’s going, how much is collected, who’s getting their hands on it, and what they are doing with it.”
What I am reading this week
- The Kids Online Safety Act, a federal bill that would require social media platforms to offer features to disable algorithmic recommendations and increase data protections for minors, was reintroduced by a bipartisan group in the Senate after it met with heavy criticism in the last congressional session. The bill has a lot of momentum, and you’ll be hearing a lot about it in the coming weeks. (I wrote about the push to pass online child safety bills in the US a few weeks ago.)
- Artificial-intelligence pioneer Geoffrey Hinton resigned from Google this week, in part so he could speak freely about the dangers of the recent advances in AI that he helped usher in. My colleague Will Douglas Heaven interviewed Hinton last week and spoke to him live at our EmTech Digital event. Both conversations were fascinating.
- The White House announced new AI guardrails yesterday, including an initiative to conduct independent public assessments of generative AI models that has been signed by Google, Microsoft, and OpenAI, among others.
What I learned this week
Twitter’s algorithms amplify political tweets that make people feel angrier, according to a new working paper by researchers at Cornell Tech and the University of California, Berkeley, presented at the Knight First Amendment Institute last week.
The researchers compared tweets in users’ chronological Twitter feeds, organized simply by the time tweets are posted, to their personalized feeds, which are sorted by an algorithm designed to prioritize engagement. They found that the political tweets picked by an algorithm made people feel more strongly opposed to groups with views that differ from theirs.
The finding might seem obvious, but this paper is one of the first to demonstrate exactly how algorithms can deepen political divides. Interestingly, the researchers also found that users prefer the personalized timeline for tweets in general, but not for political tweets. That suggests that they’d rather not have a bunch of anger-inducing political posts shoved at them. The study is an important contribution to the ongoing debate about the role that social media plays in political polarization.