Machine learning could check if you’re social distancing properly at work
Andrew Ng’s startup Landing AI has created a new workplace monitoring tool that issues an alert when anyone is less than the desired distance from a colleague.
Six feet apart: On Thursday, the startup released a blog post and demo video showing off a new social distancing detector. On the left is a feed of people walking around on the street. On the right, a bird’s-eye diagram represents each one as a dot and turns them bright red when they move too close to someone else. The company says the tool is meant for work settings like factory floors and was developed in response to customer requests (its clients include Foxconn). It also says the tool can easily be integrated into existing security camera systems, but that it is still exploring how to notify people when they break social distancing. One possibility is an alarm that sounds when workers pass too close to one another; another is an overnight report that helps managers rearrange the workspace.
Under the hood: The detector must first be calibrated to map the camera’s view onto real-world dimensions. A trained neural network then picks out the people in each frame, and another algorithm computes the distances between them.
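Landing AI has not published its code, but a minimal sketch of the pipeline it describes might look like the following, assuming OpenCV and NumPy. The calibration points, the bounding boxes, and the six-foot threshold are all illustrative stand-ins; a real system would take the boxes from a person detector running on each video frame.

```python
import itertools

import cv2
import numpy as np

# --- Calibration: map four known points in the camera image (pixels)
# to their positions on the ground plane (feet). These coordinates are
# made up; in practice they come from measuring the real scene.
image_pts = np.float32([[100, 400], [540, 380], [620, 700], [60, 720]])
ground_pts = np.float32([[0, 0], [20, 0], [20, 15], [0, 15]])
homography = cv2.getPerspectiveTransform(image_pts, ground_pts)

def feet_positions(boxes, H):
    """Project the bottom-center of each bounding box (roughly where a
    person's feet touch the ground) onto the ground plane."""
    feet = np.float32([[(x1 + x2) / 2, y2] for x1, _y1, x2, y2 in boxes])
    return cv2.perspectiveTransform(feet.reshape(-1, 1, 2), H).reshape(-1, 2)

def too_close(positions, threshold_ft=6.0):
    """Return index pairs of people closer than the threshold."""
    return [
        (i, j)
        for i, j in itertools.combinations(range(len(positions)), 2)
        if np.linalg.norm(positions[i] - positions[j]) < threshold_ft
    ]

# Stand-in detections as (x1, y1, x2, y2) boxes; a real system would
# get these from a neural-network person detector.
boxes = [(110, 300, 160, 420), (150, 310, 200, 430), (500, 280, 550, 400)]
positions = feet_positions(boxes, homography)
print(too_close(positions))  # e.g. [(0, 1)] if the first two are under 6 ft apart
```

Projecting the bottom-center of each box is the standard trick here: that point sits on the ground plane, where the homography is valid, so distances computed between the projected points are in real-world units rather than distorted pixel distances.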
Workplace surveillance: The concept is not new. Earlier this month, Reuters reported that Amazon is using similar software to monitor the distances between its warehouse staff. The tool also joins a growing suite of technologies that companies are increasingly using to surveil their workers. There are now myriad cheap off-the-shelf AI systems that firms can buy to watch every employee in a store or listen to every customer service representative on a call. Like Landing AI’s detector, these systems flag warnings in real time when behaviors deviate from a certain standard. The coronavirus pandemic has only accelerated this trend.
Dicey territory: In its blog post, Landing AI emphasizes that the tool is meant to keep “employees and communities safe,” and should be used “with transparency and only with informed consent.” But the same technology can also be abused or used to normalize more harmful surveillance measures. Examining the growing use of workplace surveillance in its annual report last December, the AI Now Institute pointed out that in most cases, workers have little power to contest such technologies. “The use of these systems,” it wrote, “pools power and control in the hands of employers and harms mainly low-wage workers (who are disproportionately people of color).” Put another way, it makes an existing power imbalance even worse.