Opinion

Eric Schmidt has a 6-point plan for fighting election misinformation

The former Google CEO hopes that companies, Congress, and regulators will take his advice on board—before it’s too late.

December 15, 2023
Eric Schmidt seated in a booth at Googleplex
Winni Wintermeyer/Redux

The coming year will be one of seismic political shifts. Over 4 billion people will head to the polls in countries including the United States, Taiwan, India, and Indonesia, making 2024 the biggest election year in history.

And election campaigns are using artificial intelligence in novel ways. Earlier this year in the US, the Republican presidential primary campaign of Florida governor Ron DeSantis posted doctored images of Donald Trump; the Republican National Committee released an AI-created ad depicting a dystopian future in response to Joe Biden’s announcement of his reelection campaign; and just last month, Argentina’s presidential candidates each created an abundance of AI-generated content portraying the other in an unflattering light. This surge in deepfakes heralds a new political playing field. Over the past year, AI was used in at least 16 countries to sow doubt, smear opponents, or influence public debate, according to a report released by Freedom House in October. We’ll need to brace ourselves for more chaos as key votes unfold across the world in 2024.

The year ahead will also bring a paradigm shift for social media platforms. The role of Facebook and others has conditioned our understanding of social media as centralized, global “public town squares” with a never-ending stream of content and frictionless feedback. Yet the mayhem on X (a.k.a. Twitter) and declining use of Facebook among Gen Z—alongside the ascent of apps like TikTok and Discord—indicate that the future of social media may look very different. In pursuit of growth, platforms have embraced the amplification of emotions through attention-driven algorithms and recommendation-fueled feeds.

But that’s taken agency away from users (we don’t control what we see) and has instead left us with conversations full of hate and discord, as well as a growing epidemic of mental-health problems among teens. That’s a far cry from the global, democratized one-world conversation the idealists dreamed of 15 years ago. With many users left adrift and losing faith in these platforms, it’s clear that maximizing revenue has ironically hurt business interests.

Now, with AI starting to make social media much more toxic, platforms and regulators need to act quickly to regain user trust and safeguard our democracy. Here I propose six technical approaches that platforms should double down on to protect their users. Regulations and laws will play a crucial role in incentivizing or mandating many of these actions. And while these reforms won’t solve all the problems of mis- and disinformation, they can help stem the tide ahead of elections next year. 

1. Verify human users. We need to distinguish humans using social media from bots, holding both accountable if laws or policies are violated. This doesn’t mean divulging identities. Think of how we feel safe enough to hop into a stranger’s car because we see user reviews and know that Uber has verified the driver’s identity. Similarly, social media companies need to authenticate the human behind each account and introduce reputation-based functionality to encourage accounts to earn trust from the community.

2. Know every source. Knowing the provenance of the content and the time it entered the network can improve trust and safety. As a first step, using a time stamp and an encrypted (and not removable) IP address would guarantee an identifiable point of origin. Bad actors and their feeds—discoverable through the chain of custody—could be deprioritized or banned instead of being algorithmically amplified. While VPN traffic may complicate detection, platforms can step up efforts to improve identification of VPNs.
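The provenance idea above—a time stamp plus a protected point of origin, packaged so it cannot be silently altered—can be sketched in a few lines. This is a minimal illustration, not any platform’s actual scheme: the signing key, record fields, and use of a keyed hash to obscure the IP address are all assumptions made for the example.

```python
import hashlib
import hmac
import json
import time

# Hypothetical platform-side signing key; a real system would manage this
# in a hardware security module or key-management service.
PLATFORM_KEY = b"example-signing-key"

def make_provenance_record(content: bytes, origin_ip: str) -> dict:
    """Record when and where content entered the network, tamper-evidently.

    The IP address is stored only as a keyed hash, so the record pins down
    a point of origin without exposing the address itself.
    """
    record = {
        "content_hash": hashlib.sha256(content).hexdigest(),
        "timestamp": int(time.time()),
        "origin": hmac.new(PLATFORM_KEY, origin_ip.encode(),
                           hashlib.sha256).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(PLATFORM_KEY, payload,
                                   hashlib.sha256).hexdigest()
    return record

def verify_provenance_record(record: dict) -> bool:
    """Return True only if the record is unchanged since it was signed."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])
```

Because the signature covers the hash, time stamp, and origin together, any attempt to strip or rewrite the point of origin invalidates the record—which is what makes a chain of custody auditable.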

3. Identify deepfakes. In line with President Biden’s sweeping executive order on AI, which requires the Department of Commerce to develop guidance for watermarking AI-generated content, platforms should further develop detection and labeling tools. One way for platforms to start is to scan an existing database of images and tell the user if an image has no history (Google Images, for example, has begun to do this). AI systems can also be trained to detect the signatures of deepfakes, using large sets of truthful images contrasted with images labeled as fake. Such software can tell you when an image has a high likelihood of being a deepfake, similar to the “spam risk” notice you get on your phone when calls come in from certain numbers.
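The two signals described above—“does this image have any prior history?” and “how synthetic does a trained classifier think it looks?”—can be combined into a spam-risk-style label. The sketch below is purely illustrative: the image index uses exact hashes where a real platform would use perceptual hashing, and the classifier score is assumed to come from a model like the one the text describes, trained on labeled real versus fake images.

```python
import hashlib

# Toy stand-in for a platform's index of previously seen images; a real
# system would use perceptual hashes and scale to billions of entries.
KNOWN_IMAGE_HASHES = {
    hashlib.sha256(b"authentic-photo-bytes").hexdigest(),
}

def deepfake_risk(image_bytes: bytes, classifier_score: float) -> str:
    """Label an image, combining a no-history check with a model score.

    classifier_score is assumed to be the probability (0.0-1.0) that the
    image is synthetic, produced by a hypothetical trained detector.
    """
    no_history = hashlib.sha256(image_bytes).hexdigest() not in KNOWN_IMAGE_HASHES
    if classifier_score > 0.8:
        return "high risk"
    if no_history and classifier_score > 0.4:
        return "possible synthetic content"
    return "no warning"
```

The thresholds here are arbitrary; the point is the design: an image with no provenance earns a warning at a lower model score than one the platform has seen before, much as an unknown phone number is flagged more readily than a saved contact.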

4. Filter advertisers. Companies can share a “safe list” of advertisers across platforms, approving those who comply with applicable advertising laws and conform professionally to the platforms’ advertising standards. Platforms also need to ramp up their scrutiny of political ads, adding prominent disclaimers if synthetic content is used. Meta, for example, announced this month that it would require political ads to disclose whether they used AI.

5. Use real humans to help. There will, of course, be mistakes, and some untrustworthy content will slip through the protections. But the case of Wikipedia shows that misinformation can be policed by humans who follow clear and highly detailed content rules. Social media companies, too, should publish quality rules for content and enforce them by further equipping their trust and safety teams, and potentially augmenting those teams by providing tools to volunteers. How humans fend off an avalanche of AI-generated material from chatbots remains to be seen, but the task will be less daunting if trained AI systems are deployed to detect and filter out such content.

6. Invest in research. For all these approaches to work at scale, we’ll require long-term engagement, starting now. My philanthropic group is working to help create free, open-source testing frameworks for many AI trust and safety groups. Researchers, the government, and civil society will also need increased access to critical platform data. One promising bill is the Platform Accountability and Transparency Act, which would, for example, require platforms to comply with data requests from projects approved by the National Science Foundation.

With a concerted effort from companies, regulators, and Congress, we can adopt these proposals in the coming year, in time to make a difference. My worry is that everyone benefits from favorable mis- or disinformation to varying degrees: our citizens are amused by such content, our political leaders may campaign with it, and the media garners traffic by covering sensationalist examples. The existing incentive structures will make misinformation hard to eliminate.  

Social media platforms need to fundamentally rethink their design for the age of AI, especially as democracies face a historic test worldwide. It’s clear to me the future will be one of many decentralized online spaces that cater to every interest, reflect the views of real humans (not bots), and focus on concrete community concerns. But until that day comes, setting these guardrails in place will help ensure that platforms maintain a healthy standard of discourse and do not let opaque, engagement-driven algorithms allow AI-enabled election content to run rampant.

Eric Schmidt was the CEO of Google from 2001 to 2011. He is currently cofounder of Schmidt Futures, a philanthropic initiative that bets early on exceptional people making the world better, applying science and technology, and bringing people together across fields.

