
Artificial intelligence sees construction site accidents before they happen

Construction companies are developing an AI system that predicts worksite injuries—an example of the growing use of workplace surveillance.
June 14, 2019

A construction site is a dangerous place to work, with a fatal accident rate five times higher than that of any other industry.

Now a number of big construction companies are testing technology that could save lives, and money, by predicting when accidents will happen.

Suffolk, a construction giant based in Boston, has been developing the system for more than a year in collaboration with SmartVid, a computer vision company in the same city. Earlier this year, Suffolk persuaded several of its competitors to join a consortium that would share data to improve the technology.

Jit Kee Chin, chief data officer and an executive vice president at Suffolk, discussed the project and the collaboration this week at EmTech Next, a conference hosted by MIT Technology Review.

Jit Kee Chin
Justin Saglio

The system uses a deep-learning algorithm trained on construction site images and accident records. It can then be put to work monitoring a new construction site, flagging situations that seem likely to lead to an accident, such as a worker not wearing gloves or working too close to a dangerous piece of machinery.
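
The article does not describe the model itself; as a rough illustration of the flagging step it describes, the sketch below assumes an image classifier has already been fine-tuned on labeled site photos. The class labels, weights file, and threshold are placeholders, not details from Suffolk or SmartVid.

```python
# Illustrative sketch only, not the Suffolk/SmartVid system.
# Assumes a ResNet classifier fine-tuned on hypothetical risk labels
# and saved to disk as "site_safety.pt".
import torch
from torchvision import models, transforms
from PIL import Image

RISK_LABELS = ["no_gloves", "no_hard_hat", "near_machinery", "safe"]  # placeholder classes
THRESHOLD = 0.8  # flag a frame when any risk class exceeds this probability

def load_model(weights_path: str) -> torch.nn.Module:
    # ResNet-50 backbone with a small head sized to the placeholder labels.
    model = models.resnet50(weights=None)
    model.fc = torch.nn.Linear(model.fc.in_features, len(RISK_LABELS))
    model.load_state_dict(torch.load(weights_path, map_location="cpu"))
    model.eval()
    return model

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def flag_risks(model: torch.nn.Module, image_path: str) -> list[tuple[str, float]]:
    """Return (label, probability) pairs for risk classes above the threshold."""
    img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(img), dim=1).squeeze(0)
    return [
        (label, float(p))
        for label, p in zip(RISK_LABELS, probs)
        if label != "safe" and p >= THRESHOLD
    ]
```

In practice a system like this would run over a stream of site-camera frames, with flagged frames routed to a safety manager for review.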

“Safety is a huge problem for construction,” said Chin on stage at EmTech. “The standard way safety is managed today is you try to change behavior.”

The project demonstrates the potential for AI-enabled computer vision to track and predict workplace activity. This is especially important for the construction industry, which also suffers from poor productivity and severe cost overruns. Indeed, the construction world has been relatively slow to adopt computer vision, machine learning, and other advanced technology.

Suffolk and SmartVid created the Predictive Analytics Strategic Council this March as a way for companies to contribute data that might improve the system’s performance.

Chin says it made sense for competitors to hand over their information, since many companies wouldn’t have enough data on their own. Deep-learning algorithms typically need huge amounts of data to improve their models. Improving safety is an incentive as well. “Safety was a good place to start,” she said. “Most companies don’t have this in-house.”

But while the project is primarily designed to improve safety for workers, it is also another example of a much wider trend: using AI to monitor, quantify, and optimize work life. Increasingly, companies are finding ways to track the work that people do and are using algorithms to optimize their performance.

This is now a fundamental part of some jobs, such as driving for a ride-sharing company or working for tech firms like Amazon. And it is unlikely to stop there—we may all find ourselves working for algorithms eventually.

Mary Gray, an anthropologist at Microsoft who studies the labor behind many tech products, told the EmTech audience that a growing number of workers spend their time supporting and responding to algorithms. “It’s more than the work we tend to have in mind when we talk about automation,” Gray said.
