
How Facebook and Google fund global misinformation

The tech giants are paying millions of dollars to the operators of clickbait pages, bankrolling the deterioration of information ecosystems around the world.

November 20, 2021

Myanmar, March 2021.

A month after the fall of the democratic government.

A Facebook Live video showed hundreds of people protesting against the military coup on the streets of Myanmar.

It had nearly 50,000 shares and over 1.5 million views, in a country with a little over 54 million people.

Observers, unable to see the events on the ground, used the footage, along with hundreds of other live feeds, to track and document the unfolding situation. (MIT Technology Review blurred the names and images of the posters to avoid jeopardizing their safety.)

But less than a day later, the same video would be broadcast again multiple times, each copy still claiming to be live.

In the middle of a massive political crisis, there was no longer a way to discern what was real and what wasn’t.

In 2015, six of the 10 websites in Myanmar getting the most engagement on Facebook were from legitimate media, according to data from CrowdTangle, a Facebook-run tool. A year later, Facebook (which recently rebranded to Meta) offered global access to Instant Articles, a program publishers could use to monetize their content.

One year after that rollout, legitimate publishers accounted for only two of the top 10 publishers on Facebook in Myanmar. By 2018, they accounted for zero. All the engagement had instead gone to fake news and clickbait websites. In a country where Facebook is synonymous with the internet, the low-grade content overwhelmed other information sources.

It was during this rapid degradation of Myanmar’s digital environment that, in August 2017, a militant group of Rohingya—a predominantly Muslim ethnic minority—attacked and killed a dozen members of the security forces. As police and military began to crack down on the Rohingya and push out anti-Muslim propaganda, fake news articles capitalizing on the sentiment went viral. They claimed that Muslims were armed, that they were gathering in mobs 1,000 strong, that they were around the corner coming to kill you.

It’s still not clear today whether the fake news came primarily from political actors or from financially motivated ones. But either way, the sheer volume of fake news and clickbait acted like fuel on the flames of already dangerously high ethnic and religious tensions. It shifted public opinion and escalated the conflict, which ultimately led to the death of 10,000 Rohingya, by conservative estimates, and the displacement of 700,000 more.

In 2018, a United Nations investigation determined that the violence against the Rohingya constituted a genocide and that Facebook had played a “determining role” in the atrocities. Months later, Facebook admitted it hadn’t done enough “to help prevent our platform from being used to foment division and incite offline violence.”

Over the last few weeks, the revelations from the Facebook Papers, a collection of internal documents provided to Congress and a consortium of news organizations by whistleblower Frances Haugen, have reaffirmed what civil society groups have been saying for years: Facebook’s algorithmic amplification of inflammatory content, combined with its failure to prioritize content moderation outside the US and Europe, has fueled the spread of hate speech and misinformation, dangerously destabilizing countries around the world.

But there’s a crucial piece missing from the story. Facebook isn’t just amplifying misinformation.

The company is also funding it.

An MIT Technology Review investigation, based on expert interviews, data analyses, and documents that were not included in the Facebook Papers, has found that Facebook and Google are paying millions of ad dollars to bankroll clickbait actors, fueling the deterioration of information ecosystems around the world.

The anatomy of a clickbait farm

Facebook launched its Instant Articles program in 2015 with a handful of US and European publishers. The company billed the program as a way to improve article load times and create a slicker user experience.

That was the public sell. But the move also conveniently captured advertising dollars from Google. Before Instant Articles, articles posted on Facebook would redirect to a browser, where they’d open up on the publisher’s own website. The ad provider, usually Google, would then cash in on any ad views or clicks. With the new scheme, articles would open up directly within the Facebook app, and Facebook would own the ad space. If a participating publisher had also opted in to monetizing with Facebook’s advertising network, called Audience Network, Facebook could insert ads into the publisher’s stories and take a 30% cut of the revenue. 

Instant Articles quickly fell out of favor with its original cohort of big mainstream publishers. For them, the payouts weren’t high enough compared with other available forms of monetization. But that was not true for publishers in the Global South, which Facebook began accepting into the program in 2016. In 2018, the company reported paying out $1.5 billion to publishers and app developers (who can also participate in Audience Network). By 2019, that figure had reached multiple billions.

Early on, Facebook performed little quality control on the types of publishers joining the program. The platform’s design also didn’t sufficiently penalize users for posting identical content across Facebook pages—in fact, it rewarded the behavior. Posting the same article on multiple pages could as much as double the number of users who clicked on it, and thus the ad revenue it generated.

Clickbait farms around the world seized on this flaw as a strategy—one they still use today.

A farm will create a website, or multiple websites, for publishing predominantly plagiarized content. It registers them with Instant Articles and Audience Network, which inserts ads into their articles. Then it posts those articles across a cluster of as many as dozens of Facebook pages at a time.

Clickbait actors cropped up in Myanmar overnight. With the right recipe for producing engaging and evocative content, they could generate thousands of US dollars a month in ad revenue, or 10 times the average monthly salary—paid to them directly by Facebook.

An internal company document, first reported by MIT Technology Review in October, shows that Facebook was aware of the problem as early as 2019. The author, former Facebook data scientist Jeff Allen, found that these exact tactics had allowed clickbait farms in Macedonia and Kosovo to reach nearly half a million Americans a year before the 2020 election. The farms had also made their way into Instant Articles and Ad Breaks, a similar monetization program for inserting ads into Facebook videos. At one point, as many as 60% of the domains enrolled in Instant Articles were using the spammy writing tactics employed by clickbait farms, the report said. Allen, bound by a nondisclosure agreement with Facebook, did not comment on the report.

Despite pressure from both internal and external researchers, Facebook struggled to stem the abuse. Meanwhile, the company was rolling out more monetization programs to open up new streams of revenue. Besides Ad Breaks for videos, there was IGTV Monetization for Instagram and In-Stream Ads for Live videos. “That reckless push for user growth we saw—now we are seeing a reckless push for publisher growth,” says Victoire Rio, a digital rights researcher fighting platform-induced harms in Myanmar and other countries in the Global South.

MIT Technology Review has found that the problem is now happening on a global scale. Thousands of clickbait operations have sprung up, primarily in countries where Facebook’s payouts provide a larger and steadier source of income than other forms of available work. Some are teams of people while others are individuals, abetted by cheap automated tools that help them create and distribute articles at mass scale. They’re no longer limited to publishing articles, either. They push out Live videos and run Instagram accounts, which they monetize directly or use to drive more traffic to their sites.

Google is also culpable. Its AdSense program fueled the Macedonia- and Kosovo-based farms that targeted American audiences in the lead-up to the 2016 presidential election. And it’s AdSense that is incentivizing new clickbait actors on YouTube to post outrageous content and viral misinformation.

Many clickbait farms today now monetize with both Instant Articles and AdSense, receiving payouts from both companies. And because Facebook’s and YouTube’s algorithms boost whatever is engaging to users, they’ve created an information ecosystem where content that goes viral on one platform will often be recycled on the other to maximize distribution and revenue.

“These actors wouldn’t exist if it wasn’t for the platforms,” Rio says.

In response to the detailed evidence of this behavior that we provided to each company, Meta spokesperson Joe Osborne disputed our core findings, saying we’d misunderstood the issue. “Regardless, we’ve invested in building new expert-driven and scalable solutions to these complex issues for many years, and will continue doing so,” he said.

Google confirmed that the behavior violated its policies and terminated all of the YouTube channels MIT Technology Review identified as spreading misinformation. “We work hard to protect viewers from clickbait or misleading content across our platforms and have invested heavily in systems that are designed to elevate authoritative information,” YouTube spokesperson Ivy Choi said.

Clickbait farms are not just targeting their home countries. Following the example of actors from Macedonia and Kosovo, the newest operators have realized they need to understand neither a country’s local context nor its language to turn political outrage into income.

MIT Technology Review partnered with Allen, who now leads a nonprofit called the Integrity Institute that conducts research on platform abuse, to identify possible clickbait actors on Facebook. We focused on pages run out of Cambodia and Vietnam—two of the countries where clickbait operations are now cashing in on the situation in Myanmar.

We obtained data from CrowdTangle, whose development team the company broke up earlier this year, and from Facebook’s Publisher Lists, which record which publishers are registered in monetization programs. Allen wrote a custom clustering algorithm to find pages posting content in a highly coordinated manner and targeting speakers of languages used primarily outside the countries where the operations are based. We then analyzed which clusters had at least one page registered in a monetization program or were heavily promoting content from a page registered with a program.
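
To make the shape of that analysis concrete, here is a minimal sketch of one way such coordination clustering could work, written in Python. It is an illustration only, not Allen’s algorithm: the record fields (page ID, admin country, shared link, detected language) and the thresholds are assumptions.

```python
# Illustrative sketch: cluster pages that post the same links in a
# coordinated way, then flag clusters posting mostly in languages that are
# not primarily spoken where the pages are administered.
# Field names and thresholds are assumptions, not Facebook's or Allen's.
from collections import defaultdict
from itertools import combinations

MIN_SHARED_LINKS = 20  # hypothetical threshold for "highly coordinated"

def coordinated_clusters(posts, local_languages):
    """posts: dicts with page_id, admin_country, link, language (assumed fields).
    local_languages: country -> set of languages primarily spoken there."""
    # Which pages posted each exact link?
    pages_by_link = defaultdict(set)
    for p in posts:
        pages_by_link[p["link"]].add(p["page_id"])

    # Count identical links shared by each pair of pages.
    shared = defaultdict(int)
    for pages in pages_by_link.values():
        for a, b in combinations(sorted(pages), 2):
            shared[a, b] += 1

    # Connect pairs that clear the threshold, then take connected components.
    adj = defaultdict(set)
    for (a, b), n in shared.items():
        if n >= MIN_SHARED_LINKS:
            adj[a].add(b)
            adj[b].add(a)
    seen, clusters = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, component = [start], set()
        while stack:
            node = stack.pop()
            if node not in component:
                component.add(node)
                stack.extend(adj[node] - component)
        seen |= component
        clusters.append(component)

    # Keep clusters that mostly post in languages foreign to where they're run.
    langs = defaultdict(lambda: defaultdict(int))
    country = {}
    for p in posts:
        langs[p["page_id"]][p["language"]] += 1
        country[p["page_id"]] = p["admin_country"]
    flagged = []
    for component in clusters:
        foreign = sum(
            1 for page in component
            if max(langs[page], key=langs[page].get)
            not in local_languages.get(country[page], set())
        )
        if foreign / len(component) > 0.5:  # hypothetical cutoff
            flagged.append(component)
    return flagged
```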

We found over 2,000 pages in both countries engaged in this clickbait-like behavior. (That could be an undercount, because not all Facebook pages are tracked by CrowdTangle.) Many have millions of followers and likely reach even more users. In his 2019 report, Allen found that 75% of users who were exposed to clickbait content from farms run in Macedonia and Kosovo had never followed any of the pages. Facebook’s content-recommendation system had instead pushed it into their news feeds.

When MIT Technology Review sent Facebook a list of these pages and a detailed explanation of our methodology, Osborne called the analysis “flawed.” “While some Pages here may have been on our publisher lists, many of them didn’t actually monetize on Facebook,” he said. 

Indeed, these numbers do not indicate that all of these pages generated ad revenue. Instead, it is an estimate, based on data Facebook has made publicly available, of the number of pages associated with clickbait actors in Cambodia and Vietnam that Facebook has made eligible to monetize on the platform.

Osborne also confirmed that more of the Cambodia-run clickbait-like pages we found had directly registered with one of Facebook’s monetization programs than we previously believed. In our analysis, we found 35% of the pages in our clusters had done so in the last two years. The other 65% would have indirectly generated ad revenue by heavily promoting content from the registered page to a wider audience. Osborne said that in fact about half of the pages we found, or roughly 150 more pages, had directly registered at one point with a monetization program, primarily Instant Articles.

Shortly after we approached Facebook, operators of clickbait pages in Myanmar began complaining in online forums that their pages had been booted out of Instant Articles. Osborne declined to respond to our questions about the latest enforcement actions the company has taken.

Facebook has continuously sought to weed these actors out of its programs. For example, only 30 of the Cambodia-run pages are still monetizing, Osborne said. But our data from Facebook’s Publisher Lists shows enforcement is often delayed and incomplete—clickbait pages can stay within monetization programs for hundreds of days before they are taken down. The same actors will also spin up new pages once their old ones have been demonetized.
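
The delay itself is straightforward to estimate once you have dated snapshots of the lists. Here is a minimal sketch, assuming each snapshot maps a date to the set of page IDs it contains; the snapshot format and page names are assumptions for illustration, not Facebook’s data schema.

```python
# Illustrative sketch: estimate how long flagged pages stayed in a
# monetization program from dated Publisher List snapshots.
from datetime import date

def days_in_program(snapshots, flagged_pages):
    """snapshots: dict mapping a date to the set of page IDs present that day."""
    first_seen, last_seen = {}, {}
    for day in sorted(snapshots):
        for page in snapshots[day] & flagged_pages:
            first_seen.setdefault(page, day)
            last_seen[page] = day
    return {page: (last_seen[page] - first_seen[page]).days
            for page in first_seen}

# Hypothetical example: a flagged page still listed ~300 days later.
snapshots = {
    date(2020, 1, 1): {"page_a", "page_b"},
    date(2020, 10, 27): {"page_a"},
}
print(days_in_program(snapshots, {"page_a"}))  # {'page_a': 300}
```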

Allen is now open-sourcing the code we used, to encourage other independent researchers to refine and build on our work.

Using the same methodology, we also found more than 400 foreign-run pages targeting predominantly US audiences in clusters that appeared in Facebook’s Publisher Lists over the last two years. (We did not include pages from countries whose primary language is English.) The set includes a monetizing cluster run in part out of Macedonia aimed at women and the LGBTQ community. It has eight Facebook pages, including two verified ones with over 1.7 million and 1.5 million followers, respectively, and posts content from five websites, each registered with Google AdSense and Audience Network. It also has three Instagram accounts, which monetize through gift shops and collaborations and by directing users to the same largely plagiarized websites. Admins of the Facebook pages and Instagram accounts did not respond to our requests for comment.

The LGBT News and Women’s Rights News pages on Facebook post identical content from five of the cluster’s affiliated sites monetizing with Instant Articles and Google AdSense, as well as from other news outlets with which they appear to have paid partnerships.

Osborne said Facebook is now investigating the accounts after we brought them to the company’s attention. Choi said Google has removed AdSense ads from hundreds of pages on these sites in the past because of policy violations but that the sites themselves are still allowed to monetize based on the company’s regular reviews.

While it’s possible that the Macedonians who run the pages do indeed care about US politics and about women’s and LGBTQ rights, the content is undeniably generating revenue. This means what they promote is most likely guided by what wins and loses with Facebook’s news feed algorithm.

The activity of a single page or cluster of pages may not feel significant, says Camille François, a researcher at Columbia University who studies organized disinformation campaigns on social media. But when hundreds or thousands of actors are doing the same thing, amplifying the same content, and reaching millions of audience members, it can affect the public conversation. “What people see as the domestic conversation on a topic can actually be something completely different,” François says. “It’s a bunch of paid people pretending to not have any relationship with one another, optimizing what to post.”

Osborne said Facebook has created several new policies and enforcement protocols in the last two years to address this issue, including penalizing pages run out of one country that behave as if they are local to another, as well as penalizing pages that build an audience on the basis of one topic and then pivot to another. But both Allen and Rio say the company’s actions have failed to close fundamental loopholes in the platform’s policies and designs—vulnerabilities that are fueling a global information crisis.

“It’s affecting countries first and foremost outside the US but presents a massive risk to the US long term as well,” Rio says. “It’s going to affect pretty much anywhere in the world when there are heightened events like an election.”

Disinformation for hire

In response to MIT Technology Review’s initial reporting on Allen’s 2019 internal report, which we published in full, David Agranovich, the director of global threat disruption at Facebook, tweeted, “The pages referenced here, based on our own 2019 research, are financially motivated spammers, not overt influence ops. Both of these are serious challenges, but they’re different. Conflating them doesn’t help anyone.” Osborne repeated that we were conflating the two groups in response to our findings.

But disinformation experts say it’s misleading to draw a hard line between financially motivated spammers and political influence operations.

There is a distinction in intent: financially motivated spammers are agnostic about the content they publish. They go wherever the clicks and money are, letting Facebook’s news feed algorithm dictate which topics they’ll cover next. Political operations are instead targeted toward pushing a specific agenda.

But in practice it doesn’t matter: in their tactics and impact, they often look the same. On an average day, a financially motivated clickbait site might be populated with celebrity news, cute animals, or highly emotional stories—all reliable drivers of traffic. Then, when political turmoil strikes, they drift toward hyperpartisan news, misinformation, and outrage bait because it gets more engagement.

The Macedonian page cluster is a prime example. Most of the time the content promotes women’s and LGBTQ rights. But around the time of events like the 2020 election, the January 6 insurrection, and the passage of Texas’s antiabortion “heartbeat bill,” the cluster amplified particularly pointed political content. Many of its articles have been widely circulated by legitimate pages with huge followings, including those run by Occupy Democrats, the Union of Concerned Scientists, and Women’s March Global.

An example of a highly political article that was ultimately deleted from one of the cluster's five affiliated sites. Clickbait sites often scrub old articles from their pages.

Political influence operations, meanwhile, might post celebrity and animal content to build out Facebook pages with large followings. They then also pivot to politics during sensitive political events, capitalizing on the huge audiences already at their disposal.

Political operatives will sometimes also pay financially motivated spammers to broadcast propaganda on their Facebook pages, or buy pages to repurpose them for influence campaigns. Rio has already seen evidence of a black market where clickbait actors can sell their large Facebook audiences.

In other words, pages look innocuous until they don’t. “We have empowered inauthentic actors to accumulate huge followings for largely unknown purposes,” Allen wrote in the report.

This shift has happened many times in Myanmar since the rise of clickbait farms, in particular during the Rohingya crisis and again in the lead-up to and aftermath of this year’s military coup. (The latter was precipitated by events much like those leading to the US January 6 insurrection, including widespread fake claims of a stolen election.)

In October 2020, Facebook took down a number of pages and groups engaged in coordinated clickbait behavior in Myanmar. In an analysis of those assets, Graphika, a research firm that studies the spread of information online, found that the pages focused predominantly on celebrity news and gossip but pushed out political propaganda, dangerous anti-Muslim rhetoric, and covid-19 misinformation during key moments of crisis. Dozens of pages had more than 1 million followers each, with the largest reaching over 5 million.

The same phenomenon played out in the Philippines in the lead-up to president Rodrigo Duterte’s 2016 election. Duterte has been compared to Donald Trump for his populist politics, bombastic rhetoric, and authoritarian leanings. During his campaign, a clickbait farm, registered formally as the company Twinmark Media, shifted from covering celebrities and entertainment to promoting him and his ideology.

At the time, it was widely believed that politicians had hired Twinmark to conduct an influence campaign. But in interviews with journalists and researchers, former Twinmark employees admitted they were simply chasing profit. Through experimentation, the employees discovered that pro-Duterte content excelled during a heated election. They even paid other celebrities and influencers to share their articles to get more clicks and generate more ad revenue, according to research from media and communication scholars Jonathan Ong and Jason Vincent A. Cabañes.

In the final months of the campaign, Duterte dominated the political discourse on social media. Facebook itself named him the “undisputed king of Facebook conversations” when it found he was the subject of 68% of all election-related discussions, compared with 46% for his next closest rival.

Three months after the election, Maria Ressa, CEO of the media company Rappler, who won the Nobel Peace Prize this year for her work fighting disinformation, published a piece describing how a concert of coordinated clickbait and propaganda on Facebook “shift[ed] public opinion on key issues.”

“It’s a strategy of ‘death by a thousand cuts’—a chipping away at facts, using half-truths that fabricate an alternative reality by merging the power of bots and fake accounts on social media to manipulate real people,” she wrote. 

In 2019, Facebook finally took down 220 Facebook pages, 73 Facebook accounts, and 29 Instagram accounts linked to Twinmark Media. By then, Facebook and Google had already paid the farm as much as $8 million (400 million Philippine pesos).

Neither Facebook nor Google confirmed this amount. Meta’s Osborne disputed the characterization that Facebook had influenced the election.

An evolving threat

Facebook made a major effort to weed clickbait farms out of Instant Articles and Ad Breaks in the first half of 2019, according to Allen’s internal report. Specifically, it began checking publishers for content originality and demonetizing those who posted largely unoriginal content.

But these automated checks are limited. They primarily focus on assessing the originality of videos, and not, for example, on whether an article has been plagiarized. Even if they did, such systems would only be as good as the company’s artificial-intelligence capabilities in a given language. Countries with languages not prioritized by the AI research community receive far less attention, if any at all. “In the case of Ethiopia there are 100 million people and six languages. Facebook only supports two of those languages for integrity systems,” Haugen said during her testimony to Congress.
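
An article-level originality check of the kind described as missing need not depend on language-specific AI at all. As a minimal sketch, and only a sketch, near-duplicate articles can be flagged with character-shingle overlap, which works regardless of language (including languages like Burmese that do not separate words with spaces). The shingle size and cutoff below are assumptions.

```python
# Illustrative sketch: flag largely unoriginal articles by comparing
# character shingles against previously seen articles. Parameters are
# assumptions for illustration, not Facebook's actual checks.
def shingles(text, k=8):
    text = " ".join(text.lower().split())  # normalize whitespace
    return {text[i:i + k] for i in range(max(len(text) - k + 1, 1))}

def max_similarity(article, known_articles, k=8):
    """Highest Jaccard similarity between the article and any known article."""
    s = shingles(article, k)
    best = 0.0
    for known in known_articles:
        t = shingles(known, k)
        if s | t:
            best = max(best, len(s & t) / len(s | t))
    return best

def looks_unoriginal(article, known_articles, cutoff=0.6):  # hypothetical cutoff
    return max_similarity(article, known_articles) >= cutoff
```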

Rio says there are also loopholes in enforcement. Violators are taken out of the program but not off the platform, and they can appeal to be reinstated. The appeals are processed by a team separate from the one that does the enforcing, which performs only basic topical checks before reinstating the actor. (Facebook did not respond to questions about what these checks actually look for.) As a result, it can take a clickbait operator mere hours to rejoin after being removed—again and again. “Somehow all of the teams don’t talk to each other,” she says.

This is how Rio found herself in a state of panic in March of this year. A month after the military had arrested former democratic leader Aung San Suu Kyi and seized control of the government, protesters were still violently clashing with the new regime. The military was sporadically cutting access to the internet and broadcast networks, and Rio was terrified for the safety of her friends in the country.

She began looking for them in Facebook Live videos. “People were really actively watching these videos because this is how you keep track of your loved ones,” she says. She wasn’t concerned to see that the videos were coming from pages with credibility issues; she believed that the streamers were using fake pages to protect their anonymity.

Then the impossible happened: she saw the same Live video twice. She remembered it because it was horrifying: hundreds of kids, who looked as young as 10, in a line with their hands on their heads, being loaded into military trucks.

When she dug into it, she discovered that the videos were not live at all. Live videos are meant to indicate a real-time broadcast and include important metadata about the time and place of the activity. These videos had been downloaded from elsewhere and rebroadcast on Facebook using third-party tools to make them look like livestreams.

There were hundreds of them, racking up tens of thousands of engagements and hundreds of thousands of views. As of early November, MIT Technology Review found dozens of duplicate fake Live videos from this time frame still up. One duplicate pair with over 200,000 and 160,000 views, respectively, proclaimed in Burmese, “I am the only one who broadcasts live from all over the country in real time.” Facebook took several of them down after we brought them to its attention, but dozens more, as well as the pages that posted them, still remain. Osborne said the company is aware of the issue and has significantly reduced these fake Lives and their distribution over the past year.
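
Spotting such rebroadcasts is, at bottom, a duplicate-detection problem. Here is a minimal sketch of one way it could be approached, assuming frames have already been sampled from each suspect video to image files; the libraries (Pillow and imagehash) are real, but the workflow and thresholds are assumptions, not Facebook’s system.

```python
# Illustrative sketch: two "Live" videos whose sampled frames hash nearly
# identically are probably the same footage rebroadcast, not two
# independent real-time streams. Thresholds are assumptions.
from PIL import Image
import imagehash

def fingerprint(frame_paths):
    # Perceptual-hash each sampled frame; together they fingerprint the video.
    return [imagehash.phash(Image.open(p)) for p in frame_paths]

def same_footage(fp_a, fp_b, max_avg_distance=8):
    # Compare fingerprints frame by frame using Hamming distance.
    if len(fp_a) != len(fp_b) or not fp_a:
        return False
    avg = sum(a - b for a, b in zip(fp_a, fp_b)) / len(fp_a)
    return avg <= max_avg_distance
```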

Ironically, Rio believes, the videos were likely ripped from footage of the crisis uploaded to YouTube as human rights evidence. The scenes, in other words, are indeed from Myanmar—but they were all being posted from Vietnam and Cambodia.

Over the past half-year, Rio has tracked and identified several page clusters run out of Vietnam and Cambodia. Many used fake Live videos to rapidly build their follower numbers and drive viewers to join Facebook groups disguised as pro-democracy communities. Rio now worries that Facebook’s latest rollout of in-stream ads in Live videos will further incentivize clickbait actors to fake them. One Cambodian cluster with 18 pages began posting highly damaging political misinformation, reaching a total of 16 million engagements and an audience of 1.6 million in four months. Facebook took all 18 pages down in March, but new clusters continue to spin up while others remain.

For all Rio knows, these Vietnamese and Cambodian actors do not speak Burmese. They likely do not understand Burmese culture or the country’s politics. The bottom line is they don’t need to. Not when they’re stealing their content.

Rio has since found several of the Cambodians’ private Facebook and Telegram groups (one with upward of 3,000 individuals), where they trade tools and tips about the best money-making strategies. MIT Technology Review reviewed the documents, images, and videos she gathered, and hired a Khmer translator to interpret a tutorial video that walks viewers step by step through a clickbait workflow.

The materials show how the Cambodian operators gather research on the best-performing content in each country and plagiarize it for their clickbait websites. One Google Drive folder shared within the community has two dozen spreadsheets of links to the most popular Facebook groups in 20 countries, including the US, the UK, Australia, India, France, Germany, Mexico, and Brazil.

The tutorial video also shows how they find the most viral YouTube videos in different languages and use an automated tool to convert each one into an article for their site. We found 29 YouTube channels spreading misinformation about the current political situation in Myanmar, for example, whose videos were being converted into clickbait articles and redistributed to new audiences on Facebook.

One of the YouTube channels spreading political misinformation in Myanmar. Google ultimately took it down.

After we brought the channels to its attention, YouTube terminated all of them for violating its community guidelines, including seven that it determined were part of coordinated influence operations linked to Myanmar. Choi noted that YouTube had previously also stopped serving ads on nearly 2,000 videos across these channels. “We continue to actively monitor our platforms to prevent bad actors looking to abuse our network for profit,” she said.

Then there are other tools, including one that allows prerecorded videos to appear as fake Facebook Live videos. Another randomly generates profile details for US men, including image, name, birthday, Social Security number, phone number, and address, so yet another tool can mass-produce fake Facebook accounts using some of that information.

It’s now so easy to do that many Cambodian actors operate solo. Rio calls them micro-entrepreneurs. In the most extreme scenario, she’s seen individuals manage as many as 11,000 Facebook accounts on their own.

Successful micro-entrepreneurs are also training others to do this work in their community. “It’s going to get worse,” she says. “Any Joe in the world could be affecting your information environment without you realizing.”

Profit over safety

During her Senate testimony in October of this year, Haugen highlighted the fundamental flaws of Facebook’s content-based approach to platform abuse. The current strategy, focused on what can and cannot appear on the platform, can only be reactive and never comprehensive, she said. Not only does it require Facebook to enumerate every possible form of abuse, but it also requires the company to be proficient at moderating in every language. Facebook has failed on both counts—and the most vulnerable people in the world have paid the greatest price, she said.

The main culprit, Haugen said, is Facebook’s desire to maximize engagement, which has turned its algorithm and platform design into a giant bullhorn for hate speech and misinformation. An MIT Technology Review investigation from earlier this year, based on dozens of interviews with Facebook executives, current and former employees, industry peers, and external experts, corroborates this characterization.

Her testimony also echoed what Allen wrote in his report—and what Rio and other disinformation experts have repeatedly seen through their research. For clickbait farms, getting into the monetization programs is the first step, but how much they cash in depends on how far Facebook’s content-recommendation systems boost their articles. They would not thrive, nor would they plagiarize such damaging content, if their shady tactics didn’t do so well on the platform.

As a result, weeding out the farms themselves isn’t the solution: highly motivated actors will always be able to spin up new websites and new pages to get more money. Instead, it’s the algorithms and content reward mechanisms that need addressing.

In his report, Allen proposed one possible way Facebook could do this: by using what’s known as a graph-based authority measure to rank content. This would amplify higher-quality pages like news and media and diminish lower-quality pages like clickbait, reversing the current trend.
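
To give a flavor of what that could mean in practice, here is a minimal sketch of a graph-based authority measure in the spirit of PageRank. It illustrates the general idea rather than the specific design in Allen’s report; the toy graph and parameters are assumptions.

```python
# Illustrative sketch: pages endorsed (shared, linked to) by many other
# pages, which are themselves endorsed, score higher than pages that are not.
def authority_scores(edges, iterations=50, damping=0.85):
    """edges: (endorser, endorsed) pairs, e.g. page A shares page B's content."""
    nodes = {n for edge in edges for n in edge}
    out_links = {n: [] for n in nodes}
    for src, dst in edges:
        out_links[src].append(dst)

    score = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for src in nodes:
            targets = out_links[src] or list(nodes)  # dangling nodes spread evenly
            for dst in targets:
                new[dst] += damping * score[src] / len(targets)
        score = new
    return score

# Toy graph: an outlet endorsed by several independent pages outranks a
# clickbait page endorsed only by a single sock-puppet page.
edges = [("page1", "news_outlet"), ("page2", "news_outlet"),
         ("page3", "news_outlet"), ("sock1", "clickbait")]
scores = authority_scores(edges)
print(sorted(scores, key=scores.get, reverse=True)[:2])  # news_outlet ranks first
```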

Haugen emphasized that Facebook’s failure to fix its platform was not for want of solutions, tools, or capacity. “Facebook can change but is clearly not going to do so on its own,” she said. “My fear is that without action, the divisive and extremist behaviors we see today are only the beginning. What we saw in Myanmar and are now seeing in Ethiopia are only the opening chapters of a story so terrifying no one wants to read the end of it.”

(Osborne said Facebook has a fundamentally different approach to Myanmar today with greater expertise in the country’s human rights issues and a dedicated team and technology to detect violating content, like hate speech, in Burmese.)

In October, the outgoing UN special envoy on Myanmar said the country had deteriorated into civil war. Thousands of people have since fled to neighboring countries like Thailand and India. As of mid-November, clickbait actors were continuing to post fake news hourly. In one post, the democratic leader, “Mother Suu,” had been assassinated. In another, she had finally been freed.

Special thanks to our team. Design and development by Rachel Stein and Andre Vitorio. Art direction and production by Emily Luong and Stephanie Arnett. Editing by Niall Firth and Mat Honan. Fact checking by Matt Mahoney. Copy editing by Linda Lowenthal.

Correction: A previous version of the article incorrectly stated that after we reached out to Facebook, clickbait actors in Cambodia began complaining in online forums about being booted out of Instant Articles. The actors were actually in Myanmar.
