IBM may not be the sexiest tech giant, compared with Google or Apple or the latest cutting-edge startup. But it’s been around since 1911, so it must be doing something right.
Its secret is its research division, with 3,000 researchers distributed across 12 locations, which the company relies on to stay on top of trends in emerging technology. For decades now, the company has engaged in an annual process to create and adapt business units in light of what’s on the horizon.
The process certainly isn’t perfect. In its heyday, IBM was a powerhouse of AI research, responsible for major milestones like teaching a machine to play checkers and to beat the best human chess player. Now those headlines go to newcomers like OpenAI and DeepMind. Meanwhile, IBM has paid a reputational price for overhyping Watson.
But the company has been eyeing a comeback, especially since striking up a partnership with MIT two years ago to share researchers and IP. At EmTech Next, MIT Technology Review’s event on the future of work, we invited Sophie Vandebroek, IBM’s VP of emerging-technology partnerships, to share her strategy for long-term innovation.
The following is a mix of excerpts from the Q&A we had on stage and a series of follow-up questions we asked after the event. The answers have been edited for length and clarity.
When you joined IBM, it had sort of lost its foothold as a powerhouse in AI research. Walk us through how you approached that challenge when you were first thinking about it.
What IBM does well to look at what’s next is we do a global technology outlook [GTO] on an annual basis. Researchers help us see what’s on the horizon and say, “Hey, look out for these significant trends that could either blindside the company or really enable the company and our clients to build the next billion-dollar business.” That’s how we think about it.
When I first joined, I was leading that process of the GTO. We decided very quickly that AI is one of these technologies that’s on an exponential curve. AI had been an output of these global technology outlooks several times in the past, like when the Watson health business was created, and Watson for security, etc. But we thought, let’s just refresh and think of it very holistically, taking into account everything that has happened in the last several years.
So how does the GTO process happen?
It’s a year-long process that ends with the day when IBM Research makes recommendations around emerging technologies that have the opportunity to create the next billion-dollar business for IBM. We leverage tools like GitHub, where people can post their ideas, to make it a very transparent process. All individuals in IBM Research can go in and vote and give advice, and the leadership team regularly reviews what comes out of this.
So the first six months is an idea-gathering phase, and then by early summer we start narrowing it down to the high-level umbrella topics that are extremely important. Some years it’s just one topic, like two years ago when we did AI. Throughout the summer the topics get fine-tuned, and early in the fall, we start looking at what the VC community is saying and what the competition is doing. We conduct additional detailed market and competitor research to help strengthen the message. The blockchain business came out of this process; the new Watson Security business, too.
IBM is a company that’s more than a century old. It’s a big ship with hundreds of thousands of employees, so you need to make sure that the ship continues to go in a successful direction. Having these kinds of processes truly pulls the whole company together and has them focus on what’s important.
When you joined, you very quickly decided to propose the MIT-IBM Watson AI Lab. Why?
Both IBM and MIT are exceptional institutions on the East Coast. The West Coast has a lot of companies that are investing in AI and working with universities there. Some companies basically took the whole department, like what happened at Carnegie Mellon and Uber [the latter gutted the former’s top robotics lab]—that’s of course a bad model. I’ve also been on the dean of engineering’s advisory committee at MIT for a decade. Both institutions could really, with very little extra investment, go to the next level toward the “quest for intelligence,” as MIT started calling it after the lab was established.
So we made this proposal with a lot of buy-in from all of my colleagues at IBM to establish the MIT-IBM Watson AI Lab. At IBM, out of some 5,000 researchers in our community (including students and interns), 1,500 work on artificial intelligence—either on the core AI algorithms or on applying it to industry. So [the new lab was] not going to focus on problems that this large community was already focused on. We really wanted to focus on the most difficult problems where you just need the best and most brilliant people in the world.
And what is the impact that you’ve seen from this collaboration? How has it enhanced the existing AI research?
This collaboration has focused IBM research again on solving significant basic-science problems in AI. IBM doesn’t solely make the decisions on what projects are selected in the lab. It is done by a steering committee with three MIT members and three IBM members, co-led by a director from each organization. Once a year we put out a request for proposals across our four research pillars: core AI algorithms, the physics of AI, applied AI to industries, and prosperity enabled by AI. Those projects are then reviewed and selected by the steering committee together. We had 186 proposals the first time around, and we funded 49. This process forces us to look at the difficult scientific research problems that are not just applied.
Our researchers are also part of the product research community, which is very good at executing product road maps. We executed a Moore’s Law road map for many decades, where we wanted to get transistors smaller and smaller, for example. We have a similar road map for quantum. So fundamental research is about being at the forefront of knowing what AI can do today and constantly pushing the limits outwards.
What made you choose this model of collaboration?
The reason it’s joint is that, from the very beginning, IBMers truly know what’s going on in the project. Together, we file patents. A lot of the technology is open-sourced because obviously students have to be able to write papers and complete their PhD theses, etc. But being there from the very beginning will then allow those technologies that make sense to truly be embedded into the product road maps.
A few months ago, we broadened the agreement with MIT to bring other corporations into this consortium. So if your corporation is interested, you can join the MIT-IBM Watson AI Lab as a member. The corporations are not part of the research projects, but they have access to the research projects, and they have access to IP of a subset of the projects. So far, four have signed up.
What other types of collaborations do you want to continue building?
We want to collaborate with a diversity of industries. Everything that comes out of these research programs, if it works, will be valuable to most industries. So we want to get a dozen or so key companies that can truly bring their expertise, their pain points, their dreams, their data to the consortium.
When we set up the original lab with MIT, we also set no limitations on professors and students creating startups. Our hope is that many AI-related startups will come out of this lab into the Cambridge and Boston community. Of course, the lab is new—we have the second anniversary in September—but hopefully soon there will be startups coming out of this to create a whole ecosystem.
What’s your five- and 10-year vision for emerging-technology partnerships?
I think the whole ecosystem—companies, universities, and startups—is becoming more and more important. For example, in our work with quantum computing, we have open-sourced the hardware, so it’s available through the web. More than 120,000 people from every continent, including Antarctica, have gone in and run over 10 million experiments, and more than 160 technical papers have been written, so some are doing great research.
What I’m trying to say is it’s not just partnerships with corporations or startups. It’s partnerships with individuals: with individual researchers and with developers. My hope is that our other open-source platforms, like the AI Fairness 360 toolkit [which offers resources for addressing bias in machine learning], will also attract a lot of researchers around the world to make it better and then use it. Same with the open-source blockchain platform: IBM contributes, academics contribute, and then multiple corporations can use the platform and build on top of it. Partnerships are a new way of doing research.
Corrections: Sophie Vandebroek has served on the MIT dean’s advisory committee for a decade, not decades. The MIT-IBM Watson AI Lab is celebrating its second anniversary this September, not last.