This article is from The Technocrat, MIT Technology Review's weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.
We’re only a few weeks into 2024, and violations of people’s privacy are already making some big headlines! First we had the continued drama with the 23andMe data breach; then a major financial software company was shut down for inappropriately using private information; and then this week, the FTC took an unprecedented step and banned a data broker from selling people’s location data.
It’s a major move that could signal more aggressive action from policymakers to curb the corrosive effects that data brokers have on personal privacy.
If you’re not familiar with data brokers, they make up a massive and growing industry that collects, buys, and analyzes personal data and sells it to other companies or groups, which use that information to target messages and advertisements or sell products. I wrote about the sector a few months ago, after researchers found brokers selling data about US military personnel and their families with little discretion and for mere pennies. The researchers told me they were “shocked” at how easy it was to buy sensitive data about military members.
These companies, though, are often cloaked in extreme secrecy, and given that it’s a pretty new industry, they aren’t bound by a ton of regulations. It’s been really interesting to see how lawmakers and other government actors have responded to them over the past several years.
Notably, these firms have been under particular scrutiny since the Supreme Court eliminated the legal right to abortion in 2022. After the Dobbs decision, a lot of Democrats in particular were concerned that data brokers would track and sell data about visits to sensitive locations, like a doctor’s office or abortion clinic. And in July 2022, President Joe Biden issued an executive order directing federal agencies to increase privacy protections related to reproductive care.
So on Tuesday, the FTC announced that it was banning Outlogic, formerly X-Mode Social, from sharing and selling users’ sensitive information—particularly, precise location data that tracked people’s visits to places like medical clinics—and required that it delete all the previous location data it collected.
X-Mode has been around since 2013, and its software has been integrated into hundreds of different apps to collect location data of millions of users worldwide. The new FTC settlement isn’t the first time the company has gotten into hot water. Back in 2020, an investigation by Vice revealed that data collected by X-Mode on a Muslim social app was shared with a US military intelligence contractor.
The agency now alleges that the company did not protect geolocation data about sensitive locations, violated consumer privacy, and failed to put in place safeguards on the use of sensitive information by third parties. (In a statement reported by Reuters, Outlogic says there was no finding that it misused location data and that it has always prohibited customers from “associating its data with sensitive locations such as healthcare facilities.”) In its settlement announcement, FTC chair Lina Khan said, “By securing a first-ever ban on the use and sale of sensitive location data, the FTC is continuing its critical work to protect Americans from intrusive data brokers and unchecked corporate surveillance.”
One expert I spoke with thinks this could be a sign of what’s to come for the industry.
“The FTC’s action is significant because of the prohibitions—barring the company from selling data about sensitive locations, rather than just paying fines,” says Justin Sherman, an adjunct professor at Duke’s Sanford School of Public Policy. In other words, it’s more than a slap on the wrist.
Sherman, who runs a project at Duke focused on the industry and who was involved in the research about military members, adds that this new move is “also notable because the FTC is focused on how certain locations are more sensitive than others.” The idea that people have different rights to privacy in different contexts is similar to the argument the FTC is making in its ongoing lawsuit against the data broker Kochava, which it’s suing on the grounds that it identifies anonymous users without consent and tracks their sensitive location data.
No matter what the FTC does next, data brokers are likely to continue to draw scrutiny for their sketchy practices. The new settlement is also likely to fuel more calls for forceful legislative action against them.
In a statement, Senator Ron Wyden’s office said, “While the FTC’s action is encouraging, the agency should not have to play data broker whack-a-mole. Congress needs to pass tough privacy legislation to protect Americans’ personal information and prevent government agencies from going around the courts by buying our data from data brokers.”
Maybe something to look forward to in 2024?
What I am reading this week
- If you read one story this week, it should be this feature from the NYT on the collision of disinformation and elections, a theme I’ve been writing about over the past few months. It’s a great analysis of the risks.
- Tech layoffs are ongoing, and this week Google announced it was cutting jobs focused on its voice-activated assistant. Over the past two years, tech workers have gotten hit really hard with layoffs, particularly in the trust and safety departments.
- Microsoft is debating what to do about its research lab in China in light of the country’s escalating tensions with the US. It’s a noteworthy example of the impact geopolitical moves are having on the tech industry and on individual businesses and workers.
What I learned this week
A new study found that deplatforming—kicking someone off social media, usually for spreading misinformation—reduces overall attention to those figures online. I’d recommend Justin Hendrix’s analysis of the study in Tech Policy Press, in which he explains that prior research on the effects of deplatforming has yielded complex and unclear results. This new work from researchers at EPFL in Switzerland and Rutgers University indicates that while deplatforming reduces overall attention across the board, the effect is much stronger for lesser-known people. For the most well-known individuals, like former president Donald Trump, deplatforming has far less impact. It’s important research as tech platforms hone their misinformation tool kits ahead of a big election year.
What’s next for AI regulation in 2024?
The coming year is going to see the first sweeping AI laws enter into force, with global efforts to hold tech companies accountable.
Meet the economist who wants the field to account for nature
Gretchen Daily is working to make the environment more of an element in economic decision-making.
Three technology trends shaping 2024’s elections
The biggest story of this year will be elections in the US and around the globe.
Four lessons from 2023 that tell us where AI regulation is going
What we should expect in the coming 12 months in AI policy