When Jason Nichols joined GE Global Research in 2011, soon after completing postdoctoral work in organic chemistry at the University of California, Berkeley, he anticipated a long career in chemical research. But after four years creating materials and systems to treat industrial wastewater, Nichols moved to the company’s machine-learning lab. This year he began working with augmented reality. Part chemist, part data scientist, Nichols is now exactly the type of hybrid employee crucial to the future of a company working to inject artificial intelligence into its machines and industrial processes.
Fifteen years ago, GE’s machine operators and technicians monitored its aircraft engines, locomotives, and gas turbines by listening to their clanks and whirs and checking their gauges. Today, the company uses AI to do the equivalent, even predicting failures in advance (see “50 Smartest Companies 2017”). By marshaling this technology, GE hopes to become one of the world’s top software providers by 2020, a quest that amped up in 2011 with a $1 billion initiative to collect and analyze sensor data from machines. Creating smarter models via AI is the next step in the company’s strategy—one that it hopes will give it an advantage over longtime rivals like Siemens and software giants, such as IBM, that are now expanding into industrial analytics.
Of course, integrating artificial intelligence into an organization founded in 1892 is a difficult task. It starts with training the technical brains behind the company, which employs 300,000 people across all its businesses worldwide. GE Global Research, where Jason Nichols works, is setting up online programs that teach machine learning and symposia where scientists can explore new roles. So far, nearly 400 employees from across the company have completed GE’s certification program for data analytics, and about 50 scientists have moved into digital analytics jobs of the kind Nichols has taken on.
Many of these dual scientists help build cloud-hosted software models of GE’s machines that can be used to save money and improve safety for its customers. GE builds these “digital twins” using information it gathers from sensors on the machines, supplemented with physics-based models, AI, data analytics, and knowledge from its scientists and engineers. Though digital twins are primarily lines of software code, the most elaborate versions look like 3-D computer-aided design drawings full of interactive charts, diagrams, and data points. They enable GE to track wear and tear on its aircraft engines, locomotives, gas turbines, and wind turbines using sensor data instead of assumptions or estimates, making it easier to predict when they will need maintenance. An aircraft engine flying over the U.S. could, for instance, rely on a digital twin running on a GE server in California to help determine the best service schedule for its parts.
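The core idea behind a digital twin, pairing a physics-based baseline with live sensor data to predict maintenance, can be sketched in a few lines. This is a toy illustration only; the class, numbers, and wear model here are hypothetical and are not GE’s actual implementation.

```python
# Illustrative sketch only: a toy "digital twin" that blends a simple
# physics-style wear model with incoming sensor readings to estimate
# when a part will need maintenance. All names and numbers are
# hypothetical, not GE's system.

class EngineTwin:
    WEAR_LIMIT = 1.0  # wear level at which maintenance is due

    def __init__(self, wear_per_hour):
        self.wear = 0.0                      # current wear estimate (0 = new)
        self.wear_per_hour = wear_per_hour   # baseline physics model

    def ingest(self, hours, vibration):
        """Update the wear estimate from one sensor reading.

        High vibration accelerates wear, so the physics baseline is
        scaled up by the observed vibration level.
        """
        self.wear += hours * self.wear_per_hour * (1.0 + vibration)

    def hours_until_maintenance(self):
        """Project remaining operating hours, assuming nominal conditions."""
        remaining = max(self.WEAR_LIMIT - self.wear, 0.0)
        return remaining / self.wear_per_hour


twin = EngineTwin(wear_per_hour=0.001)
for vib in [0.1, 0.3, 0.2]:  # three 100-hour flight blocks, varying vibration
    twin.ingest(hours=100, vibration=vib)
print(round(twin.hours_until_maintenance()))  # projected hours to next service
```

The point of the sketch is the division of labor the article describes: the physics model supplies the baseline, and sensor data continually corrects it.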
Besides forecasting a machine’s life expectancy, the virtual models allow GE to optimize the operation of its products. GE says digital twins are increasing the amount of electricity wind farms produce by as much as 20 percent, and that they cut fuel consumption and carbon emissions for one of its locomotive models by 32,000 gallons and 174,000 tons a year, respectively. More than 700,000 models have been delivered to clients, a number that could exceed one million by the end of this year.
The technology depends on artificial intelligence to continually update itself. What’s more, if data is corrupted or missing, the company fills in the gaps with the aid of machine learning, a type of AI that lets computers learn without being explicitly programmed, says Colin Parris, GE Global Research’s vice president for software research. To let cameras find minute cracks on metal turbine blades, even when the blades are dirty and dusty, Parris says GE pairs computer vision with deep learning, a type of AI particularly adept at recognizing patterns, and reinforcement learning, a more recent advance that enables machines to optimize operations.
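Filling gaps in corrupted or missing sensor data can be as simple as learning the relationship between two correlated sensors and using one to reconstruct the other. A minimal sketch of that idea follows; the linear fit and the sensor names are hypothetical stand-ins for whatever models GE actually uses.

```python
# Illustrative sketch only: reconstruct a dropped sensor reading from a
# correlated sensor, using a model learned from historical data. The
# ordinary least-squares line here stands in for a real learned model.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs)
    return a, my - a * mx


# Historical intervals where both (hypothetical) sensors reported.
temps = [300, 310, 320, 330]           # turbine inlet temperature
pressures = [10.0, 10.5, 11.0, 11.5]   # correlated pressure readings

a, b = fit_line(temps, pressures)

# The pressure sensor dropped out at temp = 325; estimate the gap.
estimate = a * 325 + b
print(round(estimate, 2))
```

A production system would use far richer models and many more signals, but the shape of the task, learn from intervals where data is complete, then infer across the gaps, is the same.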
Take the tiny robot, a little bigger than a Matchbox car, used to inspect working engines. Using computer vision and a variety of AI techniques, the bot can look for cracks inside plane engines by riding on top of a slowly moving fan blade.
Similar technology can be attached to a drone to find corrosion on the 200-foot-high flare stacks that burn off excess gas released at oil and gas production sites.
To develop and work with these systems, GE researchers need to understand both the physics of the machines and the AI algorithms.
“This is a place where you will have a molecular biologist sitting with a machine-learning expert or a controls-theory person sitting with someone who knows about materials science,” says Mark Grabb, GE Global Research’s technology director for analytics. “That type of collaboration is very powerful, but there is nothing more powerful than having that same information in the same brain; it’s just hyper-efficient.”
Consider the brain of Matt Nielsen, who joined GE Global Research in 1998 after earning a PhD in physics. Nielsen worked on photonics and electric-vehicle software before moving fully to the company’s digital side in 2015. Today, he leads a team of digital-twin developers and helps build physics-based models that can be combined with machine-learning algorithms.
Sahika Genc, another dual scientist, developed systems for ICU alarms before moving to GE’s machine-learning lab in 2014. Now a machine-learning scientist, Genc uses deep learning and reinforcement learning to make GE’s energy management systems more efficient. One of her recent projects applied machine learning and heat-transfer theory to model how energy is dissipated and stored in buildings; the resulting forecasts will help GE customers reduce their energy consumption.
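The combination of heat-transfer theory with data that this kind of work involves can be illustrated with Newton’s law of cooling: estimate a building’s heat-loss coefficient from observed temperature decay, then use it to forecast heating demand. Every value and function name below is hypothetical; this is a sketch of the general approach, not of Genc’s project.

```python
# Illustrative sketch only: fit a building's heat-loss coefficient k
# from one observed cooling interval (Newton's law of cooling says the
# indoor-outdoor temperature gap decays as exp(-k*t)), then use k to
# forecast steady-state heating demand. All numbers are hypothetical.
import math


def estimate_k(t0, t1, outside, dt):
    """Infer the heat-loss coefficient from one cooling interval."""
    return -math.log((t1 - outside) / (t0 - outside)) / dt


# Indoor temperature fell from 21 C to 19 C over 2 hours with the
# heating off and 5 C outside.
k = estimate_k(21.0, 19.0, outside=5.0, dt=2.0)

# Forecast: power needed to hold 21 C is proportional to k * (T_in - T_out).
heat_demand = k * (21.0 - 5.0)
print(round(k, 4), round(heat_demand, 3))
```

The physics supplies the model’s form; the data pins down its parameters, which is the hybrid skill set the article says GE is cultivating.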
These hybrid researchers could be GE’s best shot at remaining relevant for another century as the company looks for growth opportunities in such competitive and mature industries as turbines, jet engines, and locomotives.
Parris, the software research leader, admits that some of GE’s 2,000 researchers still regard certain aspects of the new approach as a “passing fad.”
But scientists who don’t make the leap may get left behind. In January, the company laid off researchers in areas deemed peripheral to GE’s “digital industrial” strategy. That’s after creating 100 new research jobs related to AI and robotics in 2016.