
Child online safety laws will actually hurt kids, critics say

Why child online safety is so complicated

Illustration: a parent and child stand next to a height-requirement sign that points a security camera beam at the adult's face. Stephanie Arnett/MITTR | Getty

This article is from The Technocrat, MIT Technology Review's weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.

This summer, the Senate moved two bills dealing with online privacy for children and teens out of committee. Both have been floating around Congress in various forms over the last few years and are starting to get some real bipartisan support.

At the same time, we’ve also seen many states pick up (and politicize) laws about online safety for kids in recent months. These policies vary quite a bit from state to state, as I wrote back in April. Some focus on children’s data, and others try to limit how much and when kids can get online. 

Supporters say these laws are necessary to mitigate the risks that big tech companies pose to young people—risks that are increasingly well documented. They say it’s well past time to put guardrails in place and limit the collecting and selling of minors’ data.

“What we’re doing here is creating a duty of care that makes the social media platforms accountable for the harms they’ve caused,” said Senator Richard Blumenthal, who is co-sponsoring a child online safety bill in the Senate, in an interview with Slate. “It gives attorneys general and the FTC the power to bring lawsuits based on the product designs that, in effect, drive eating disorders, bullying, suicide, and sex and drug abuse that kids haven’t requested and that can be addictive.”

But—surprise, surprise—as with most things, it’s not really that simple. There are also vocal critics who argue that child safety laws are actually harmful to kids because all these laws, no matter their shape, have to contend with a central tension: in order to implement laws that apply to kids online, companies need to actually identify which users are kids—which requires the collection or estimation of sensitive personal information. 

I was thinking about this when the prominent New York–based civil society organization S.T.O.P. (which stands for the Surveillance Technology Oversight Project) released a report on September 28 that highlights some of these potential harms and makes the case that all bills requiring tech companies to identify underage users, even if well intentioned, will increase online surveillance for everyone. 

“These bills are sold as a way to protect teens, but they do just the opposite,” S.T.O.P. executive director Albert Fox Cahn said in a press release. “Rather than misguided efforts to track every user’s age and identity, we need privacy protections for every American.”  

There’s a wide range of regulations out there, but the report calls out several states that are creating laws imposing stricter—even drastic—restrictions on minors’ internet access, effectively limiting online speech. 

A Utah law that will take effect in March 2024, for instance, will require that parents give consent for their kids to access social media outside the hours of 6:30 a.m. to 10:30 p.m., and that social media companies build features enabling parents to access their kids’ accounts. 

Critics—especially those who advocate for online privacy and free speech, including but not limited to S.T.O.P.—have taken issue with different aspects of these various bills. But beyond the specific regulations, the common complaint is that there’s no privacy-preserving—or easy—way to confirm that an underage user is in fact underage.

There’s not exactly a gold standard for how to do this. Some bills, such as Utah’s, require that users provide official age verification, such as a government-issued ID, before accessing certain websites or products. (Er, would you really want X having a copy of your license?) Others, like a law in California, let companies do their own age estimations based on the data they already have from users. 

I, for one, keep coming back to the argument that these verification systems could have impacts far beyond the intended underage users. Putting the burden of verification on users and on tech companies could, as S.T.O.P. argues, end up blocking adults from certain types of content. If this happens, S.T.O.P. says, it would limit internet freedom, especially for members of marginalized communities who may be more hesitant to share age information, like undocumented migrants. 

As the report argues: “These laws mandate or coerce the use of new, invasive measures that verify users’ legal name, age, and address for nearly every internet service they use. … This change would be invasive and insecure for every user, but it would pose a particularly potent threat to undocumented communities, LGBTQ+ communities, and those seeking reproductive care.”

Honestly, it can be hard to know how to make sense of these laws. On the one hand, the evidence of the harm social media platforms pose to young people in particular is truly overwhelming. But … it’s complicated! I’ve reported on the arguments coming from both sides. And the laws really do differ greatly between states. 

My two cents is that this would all be much easier if there were a comprehensive privacy law in the US that regulated user data and safety for both children and adults. 

What else I’m reading

  • This feature from Gerry Shih in the Washington Post, which uncovers the digital campaign of Hindu nationalists in India. It’s a fascinating look at the growth of disinformation in the country, as well as the impact of private messaging apps, like WhatsApp, on conflicts.
  • Alex Reisner at the Atlantic has a blockbuster investigation into the data used to train one of Meta’s large language models, LLaMA, the company’s ChatGPT competitor. It includes a cool search tool—which readers can use themselves!—documenting over 180,000 books that the model was trained on. Copyright much?
  • Dhruv Mehrotra and Dell Cameron at Wired have a great scoop about how the owners of policing tech company ShotSpotter purchased the company that created PredPol, a predictive policing company. The acquisition could mark a terrifying combination of controversial and fringey police technologies, and I’ll be watching this industry closely as companies race to become the preferred tech vendor for law enforcement.

What I learned this week

Forgive me for taking a policy break, but … I guess I’m gonna need to buy an exoskeleton? My colleague Rhiannon wrote about wearable robots that might make people run faster, according to a new study in Science Robotics. The exoskeleton collects data from sensors about the runner’s gait, and then encourages the athlete to take more steps over the same distance, increasing speed. 
