
Does Watson Know the Answer to IBM’s Woes?

IBM is betting that research on more human-like artificial intelligence will help it turn things around.
November 5, 2014

As cheap cloud computing services erode IBM’s traditional hardware business with alarming speed, the company finds itself facing an uncertain future. If only there were some clever machine it could turn to for advice.

A scientist from the Spanish oil company Repsol and an IBM researcher use a visualization tool developed in IBM’s Cognitive Environments lab in Yorktown Heights, New York.

Appropriately enough, that’s what a large part of IBM’s research division is trying to create, by building on the research effort that led to Watson, the computer that won the game show Jeopardy! in 2011. The hope is that this effort will lead to software and hardware that can answer complex questions by looking through vast amounts of information containing subtle and disparate clues.

“We’re betting billions of dollars, and a third of this division now is working on it,” John Kelly, director of IBM Research, said of cognitive computing, a term the company uses to refer to artificial intelligence techniques related to Watson.

The stakes are looking higher by the day. IBM has delivered a string of disappointing quarters, and announced recently that it would take a multibillion-dollar hit to offload its struggling chip business.

The company’s vast research department is already a big part of the turnaround plan. Earlier this year the division was reorganized to ramp up efforts related to cognitive computing. The push began with the development of the original Watson, but it has expanded to include other areas of software and hardware research aimed at helping machines provide useful insights from huge quantities of often-messy data. The research efforts are far-ranging, and include software that can suggest new recipes by analyzing thousands of ingredients and popular meals, and electronic components, known as neurosynaptic chips, that have features modeled on the workings of biological brains and are more efficient at processing sensory information.

Speaking at an event held at IBM’s Thomas J. Watson Research Center in Yorktown Heights, New York, last week, Kelly said other parts of the company’s research division are being reorganized to increase this focus. Some materials and hardware research has either been scaled back or redeployed to support the cognitive computing effort. “The underlying physics, materials, and devices to power this next generation of [cognitive] systems is of great interest to us,” Kelly said.

But the question-and-answer software that descended from the original Watson and runs on conventional computer hardware remains the centerpiece of IBM’s cognitive crusade. It’s also key to its evolving business plan.

The hope is that the technology will be able to answer more complicated questions in all sorts of industries, including health care, financial investment, and oil discovery; and that it will help IBM build a lucrative new computer-driven consulting business. At the M.D. Anderson Cancer Center in Houston, a version of Watson is helping doctors develop treatment regimens from a patient’s symptoms based on an analysis of thousands of pages of medical papers and doctors’ notes.

There is good reason for IBM’s management to hope that the technology might provide the spark for its reinvention. Watson demonstrated an unprecedented ability to find answers to very tricky human questions in vast amounts of data on everything from 1960s pop music to obscure hereditary disorders. At the same time, there is a growing belief that machine-learning techniques may provide a powerful way to mine the rising tide of big data, with companies including Google, Facebook, and Amazon developing their own methods for hunting through vast quantities of data for useful insights.

Even so, Watson is still a work in progress. Some companies and researchers testing Watson systems have reported difficulties in adapting the technology to work with their data sets. IBM’s CEO, Virginia Rometty, said in October last year that she expects Watson to bring in $10 billion in annual revenue within 10 years, even though that figure then stood at around $100 million.

“It’s not taking off as quickly as they would like,” says Robert Austin, a professor of management at Copenhagen Business School who has studied IBM’s strategy over the years. “This is one of those areas where turning demos into real business value depends on the devils in the details. I think there’s a bold new world coming, but not as fast as some people think.”

Whether out of necessity or sincere belief in the technology’s potential, IBM is moving aggressively to commercialize it. Last week the company announced it had teamed up with Twitter and the Chinese social network Tencent to offer a service that will try to find useful insights in the torrent of messages sent through these services every day. Using the technology, a company that sells kitchen equipment might, for example, learn about a possible problem with one of its products from comments made by restaurant patrons.

IBM also needs software developers to embrace its vision and build services and apps that use its cognitive computing technology. In May of this year it announced that seven universities would offer computer science classes featuring Watson technology. And last month IBM revealed a list of partners that have developed applications by tapping into application programming interfaces that access versions of Watson running in the cloud.

IBM’s push to commercialize its cognitive computing research programs may ultimately shape the achievements made within its research labs.

“I very much admire the end goal,” said Boris Katz, a professor of computer science at MIT and a member of the original Watson team, speaking at the Yorktown event. But he added that business pressures could encourage IBM’s researchers to move more quickly than they would like. “If the management is patient, they will really go far,” he said.
