Facebook wants AI-powered chips to stop people from streaming suicides and murders
The social-media giant is working on hardware that can analyze and filter live video.
Background: After Facebook rolled out its live video feature in 2016, the company was criticized for a rash of suicides streamed to audiences on the platform. In response, it built AI tools to spot dangerous behavior and increased the number of human reviewers, cutting the time to remove flagged footage to under 10 minutes after it was posted.
Improvement: Custom chips running AI software that can recognize self-harm, sexual acts, or other activities Facebook wants to ban would reduce the need for human moderators to watch suspect videos.
Why it matters: Mark Zuckerberg has big plans for these kinds of custom-built systems. By designing and making its own hardware, Facebook could not only improve its platform but also save a lot of money by reducing its reliance on chipmakers like Nvidia and Intel.