Social media isn’t just for photos of kittens and your uncle’s political memes anymore. It’s increasingly a tool governments use to influence elections and subvert democracy, according to a new report by the democracy advocacy group Freedom House.
The report found that at least 18 countries, including the United States, had their elections manipulated through social media over the last year. The spread of disinformation also contributed to the overall decline of Internet freedom across the world for the seventh year running, and contributed to violent attacks on human-rights activists and journalists, according to the report.
Russian meddling in last year’s U.S. presidential election and the Brexit vote is by now well documented, but the Freedom House report found that it wasn’t just foreign forces trying to sway elections. Turkey, Venezuela, the Philippines, and more than two dozen other countries employed “opinion shapers” who spread government talking points and shut down critics within their own borders. The number of countries trying to shape online discussions in this way has risen every year since Freedom House began tracking it in 2009 (the image above is Freedom House’s graphic of global Internet freedom for this year).
The report points out that while countries like China and Russia have employed online armies to spread propaganda or shut down sites for at least a decade, automated systems like bots and algorithms are increasingly creating new ways of disrupting democracy that are harder to track, and yet to be fully understood.
Several countries seem to be taking note. The EU is asking for help in fighting disinformation and has created an expert group to combat fake news. Germany took special care before its election earlier this year to make sure its systems were safe from meddling.
In a somewhat more drastic measure, Somaliland, the self-declared republic in northwest Somalia, is blocking a dozen social-media sites during its upcoming election. Human Rights Watch has cautioned that social media is necessary for a free and fair election—but Somaliland believes the tactic is necessary after a rash of fake stories spread virally online.