Tomorrow, key figures at leading Silicon Valley technology companies will appear on Capitol Hill. Twitter CEO Jack Dorsey, Facebook COO Sheryl Sandberg, and a Google representative (lawmakers want Alphabet CEO Larry Page, the company wants to send lawyer Kent Walker) will answer questions from various members of Congress about bias, artificial intelligence, and, we suspect, whether these mammoth companies should continue to be the dominant communication platforms for most Americans. Here are some of the main themes to look out for in the discussions—and one overarching question that really should frame the debate.
Theme: Antitrust
Key question: Why shouldn’t you be broken up?
Background: The economic clout of the tech giants—and its implications for other areas of their influence—has sparked considerable debate already this year. Critics say the firms’ vise-like grip on markets such as online advertising and online search harms competition. President Trump recently weighed in on the debate, saying that the big firms are in “a very antitrust situation.” Some think tanks like the Open Markets Institute have called for Facebook to be broken up, spinning out its Messenger, Instagram, and WhatsApp services into independent firms.
The tech companies are likely to point out that the services they provide are free or incredibly cheap, and they will argue this is evidence they aren’t harming consumers—a key test in US antitrust policy. However, both the US Department of Justice and the Federal Trade Commission have already signaled that they will scrutinize the giants more closely, and they will be listening to what the firms’ executives say in Congress this week. Europe has already taken a tougher position, fining Google $5 billion earlier this year in an antitrust ruling the company is appealing.
Deeper analysis: see “It’s time to rein in the data barons”
Theme: Political bias
Key question: How can social-media platforms ensure they’re distributing accurate and truthful information instead of stories slanted toward a particular ideology?
Background: This is about getting social-media users out of the “filter bubble” and ensuring that insular communities aren’t reinforcing skewed viewpoints by seeing only news that conforms to their current beliefs. But here’s where it gets tricky, as one person’s “accurate and truthful” is another person’s “fake news.” Look for conservative members of Congress to address allegations of anti-conservative bias directly with both Sandberg and Dorsey.
Deeper analysis: see “This is what filter bubbles actually look like”
Theme: Black-box algorithms
Key question: How can we be sure your algorithms aren’t unfair?
Background: Artificial intelligence is being used to help make ever more decisions, from identifying potential medical treatments for patients to helping police determine where to deploy officers. And the big tech companies are in the vanguard of firms developing the algorithms that are going to have a huge impact on our lives. The danger is that they could embed hidden biases that influence the results served up.
This issue will be fresh in politicians’ minds following a recent episode in which some lawmakers were mistakenly matched to criminal mugshots in a test conducted by the American Civil Liberties Union using Amazon’s Rekognition image-analysis AI. Expect tough questions at this week’s hearings about how the big tech firms intend to guard against bias, and to what extent they will allow their algorithms to be inspected for evidence of unfairness. Separate from the bias debate, Google could also face questions about its stance on working with the military on AI applications—a subject that has stirred considerable controversy inside the company.
Deeper analysis: see “Inspecting algorithms for bias”
Theme: Artificial intelligence and fake news
Key question: AI-created “deepfakes” can trick your eyes and ears into thinking politicians did or said something that never happened. How can your company help users identify what is real and what is not?
Background: Generative adversarial networks, or GANs, and other advanced AI techniques can create simulated video and audio that seem eerily real. Done well, the resulting fakes can put words into politicians’ mouths or “show” them doing deeds they never did.
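The adversarial idea behind these fakes can be sketched in a toy, one-dimensional form—purely illustrative, and nothing like the deep convolutional models used for actual deepfakes. A generator learns to produce numbers resembling “real” data while a discriminator learns to tell real from fake; each improves by trying to beat the other:

```python
# Toy 1-D GAN sketch (illustrative assumption, not a production model):
# the generator learns to mimic samples drawn from a normal distribution.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def real_batch(n):
    # "Real" data: samples centred at 4.0
    return rng.normal(4.0, 1.0, n)

# Generator: linear map from noise z to a fake sample.
g_w, g_b = 0.1, 0.0
# Discriminator: logistic regression estimating P(sample is real).
d_w, d_b = 0.0, 0.0

lr = 0.01
for step in range(2000):
    n = 32
    z = rng.normal(0.0, 1.0, n)
    fake = g_w * z + g_b
    real = real_batch(n)

    # Discriminator step: push d(real) toward 1, d(fake) toward 0.
    p_real = sigmoid(d_w * real + d_b)
    p_fake = sigmoid(d_w * fake + d_b)
    grad_w = np.mean((p_real - 1.0) * real) + np.mean(p_fake * fake)
    grad_b = np.mean(p_real - 1.0) + np.mean(p_fake)
    d_w -= lr * grad_w
    d_b -= lr * grad_b

    # Generator step: push d(fake) toward 1 (fool the discriminator).
    p_fake = sigmoid(d_w * fake + d_b)
    common = (p_fake - 1.0) * d_w  # chain rule through fake = g_w*z + g_b
    g_w -= lr * np.mean(common * z)
    g_b -= lr * np.mean(common)

# After training, the generator's output distribution has drifted
# toward the real data.
samples = g_w * rng.normal(0.0, 1.0, 1000) + g_b
print(float(samples.mean()))
```

The same tug-of-war, scaled up to millions of parameters and applied to pixels and audio waveforms instead of single numbers, is what makes convincing fake video possible.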
Deeper analysis: see “Fake America great again”
While the cut and thrust of tomorrow’s hearings will focus on these and other specific issues, the fundamental question here is how to address the broader threat that technology poses to democracy. As we pointed out in our most recent issue, AI—and especially techniques like GANs and deep learning—is changing politics in unpredictable and potentially disastrous ways.
Technology—and the companies that create it—must be part of the solution, but what’s really needed is a deeper societal discussion of how we should prepare ourselves for a world in which truth and freedom aren’t guaranteed to triumph. Repeatedly grilling the giants of Silicon Valley won’t be enough to solve democracy’s ills.