Machine learning could check if you’re social distancing properly at work
Andrew Ng’s startup Landing AI has created a new workplace monitoring tool that issues an alert whenever anyone gets closer to a colleague than the required distance.
Six feet apart: On Thursday, the startup released a blog post with a demo video showing off its new social-distancing detector. On the left is a feed of people walking around on the street. On the right, a bird’s-eye diagram represents each one as a dot and turns them bright red when they move too close to someone else. The company says the tool is meant to be used in work settings like factory floors and was developed at the request of its customers (which include Foxconn). It also says the tool can easily be integrated into existing security camera systems, but that it is still exploring how best to notify people when they break social distancing. One possible method is an alarm that sounds when workers pass too close to one another. A report could also be generated overnight to help managers rearrange the workspace, the company says.
Under the hood: The detector must first be calibrated to map the security footage onto real-world dimensions. A trained neural network then picks out the people in the video, and a second algorithm computes the ground distances between them.
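Landing AI hasn’t published its code, but the pipeline it describes (calibrate the camera to the ground plane, detect people, then measure pairwise distances) is straightforward to sketch. Below is a minimal illustration using OpenCV’s homography utilities. The calibration points are invented for the example, and the bounding boxes are assumed to come from whatever trained person detector the system uses.

```python
import itertools
import numpy as np
import cv2

# Calibration: four reference points in the camera image (pixels) and their
# known positions on the ground plane (feet). These coordinates are made up
# for illustration; in practice they come from measuring the actual scene.
image_pts = np.float32([[400, 300], [900, 310], [1100, 700], [200, 690]])
world_pts = np.float32([[0, 0], [20, 0], [20, 15], [0, 15]])
homography = cv2.getPerspectiveTransform(image_pts, world_pts)

def ground_positions(boxes):
    """Map each person's bounding box to ground-plane coordinates (feet).

    `boxes` is a list of (x, y, w, h) pixel boxes from any person detector;
    the bottom-center of a box approximates where the person is standing.
    """
    feet_px = np.float32([[[x + w / 2, y + h] for (x, y, w, h) in boxes]])
    return cv2.perspectiveTransform(feet_px, homography)[0]

def too_close(boxes, min_distance_ft=6.0):
    """Return index pairs of people standing closer than `min_distance_ft`."""
    pos = ground_positions(boxes)
    return [
        (i, j)
        for i, j in itertools.combinations(range(len(pos)), 2)
        if np.linalg.norm(pos[i] - pos[j]) < min_distance_ft
    ]

# Example: feed the flagged pairs to whatever alert mechanism is in place.
# boxes = [(410, 280, 60, 160), (470, 285, 55, 150), (900, 300, 60, 155)]
# print(too_close(boxes))  # pairs of people under six feet apart, if any
```

Mapping the bottom-center of each bounding box through the homography is what lets the system measure distances in feet rather than pixels, which is why the calibration step has to come first.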
Workplace surveillance: The concept is not new. Earlier this month, Reuters reported that Amazon is using similar software to monitor the distances between its warehouse staff. The tool also joins a growing suite of technologies that companies are increasingly using to surveil their workers. There are now myriad cheap off-the-shelf AI systems that firms can buy to watch every employee in a store, or listen to every customer service representative on a call. Like Landing AI’s detector, these systems flag warnings in real time when behaviors deviate from a certain standard. The coronavirus pandemic has only accelerated this trend.
Dicey territory: In its blog post, Landing AI emphasizes that the tool is meant to keep “employees and communities safe,” and should be used “with transparency and only with informed consent.” But the same technology can also be abused or used to normalize more harmful surveillance measures. When examining the growing use of workplace surveillance in its annual report last December, the AI Now research institute also pointed out that in most cases, workers have little power to contest such technologies. “The use of these systems,” it wrote, “pools power and control in the hands of employers and harms mainly low-wage workers (who are disproportionately people of color).” Put another way, it makes an existing power imbalance even worse.