AI is real now: A conversation with Sophie Vandebroek
Why there will never be another “AI winter,” and what IBM and MIT are doing together to ensure that.
More than almost any other field of innovation, artificial intelligence has weathered recurring cycles of overinflated hope followed by disappointment, pessimism, and funding cutbacks. But Sophie Vandebroek, IBM’s vice president of emerging technology partnerships, thinks the AI winters are truly a thing of the past, thanks to the huge amounts of computing power and data now available to train neural networks.
In this episode Vandebroek shares examples of real-world applications enabled by this shift, from image recognition to chatbots. And she describes the mission of the new MIT-IBM Watson AI Lab, a $240 million, 10-year collaboration between IBM researchers and MIT faculty and students to focus on the core advances that will make AI more useful and reliable across industries from healthcare to finance to security.
This episode is brought to you by Darktrace, the world leader in AI technology for cyber defense. Darktrace is headquartered in San Francisco and Cambridge, UK, and has nearly 2,500 customers around the world who use its software to detect and respond to cyber threats to their businesses, users, and devices. Darktrace has built innovative machine learning technology that can spot unusual activity using an approach modeled on the human immune system. In the second half of the show, Darktrace CEO Nicole Eagan explains how Darktrace’s technology works and why companies need to bring new defenses to today’s cyber arms race.
Elizabeth Bramson-Boudreau: From MIT Technology Review, I’m Elizabeth Bramson-Boudreau, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace. This episode is brought to you by Darktrace, the world leader in AI technology for cyber defense. Later in the program I’ll speak with the CEO of Darktrace, Nicole Eagan. She’ll show us how advances in AI and machine learning are giving us a new set of ways to defend against hackers and cyber criminals.
Elizabeth: But our first guest hails from one of the newest centers for AI research, the MIT-IBM Watson AI Lab, just a couple of blocks from our offices here in Cambridge, Massachusetts. It’s the locus of more than 50 new projects involving IBM researchers and MIT faculty, all aimed at advancing the fundamental technologies behind artificial intelligence. And here to talk with us is one of the architects of that effort, Dr. Sophie Vandebroek.
Elizabeth: Sophie’s currently IBM’s vice president of emerging technology partnerships, and she’s known in the computing industry for her distinguished history of pushing innovation forward, not only at IBM but at Xerox, where she spent over a decade as chief technology officer. She was also the director of PARC Inc., the famous laboratory formerly known as Xerox PARC. In 2011 she was inducted into the Women in Technology Hall of Fame. In keeping with the themes of the MIT-IBM Watson AI Lab, we started off talking about how AI is evolving and why it’s transforming businesses in ways that most executives are only starting to understand.
Elizabeth: Sophie, thank you for coming here to talk with us and welcome.
Sophie Vandebroek: Oh, it’s my great pleasure to be here. I’ve been an avid reader of your journal, so I’m very happy to participate in the podcast.
Elizabeth: I’m hoping you can talk to not only me but the people who are listening to this podcast about where AI is going, and the stage that we’re in in AI development. I know that a lot of people talk about how AI has been on the verge of transforming work, only to kind of have those hopes peter out. Could we possibly be in another one of these situations where it peters out, or is this different now?
Sophie: It’s very different now. AI is real. And yes, the term artificial intelligence was coined more than 60 years ago. So what happened? Why was it not real then, and why is it real now? There are two main reasons, and both come down to exponential laws. The first one is Moore’s Law, which we all know and love very well. The basic transistor was invented in the late 1940s. By 1975 there were 1,000 transistors on a square-centimeter chip. Today there are 10 billion transistors on the square-centimeter chips that IBM develops, and that compute power has resulted in the mobile devices we have in our pockets and in the latest high-performance computer, Summit, the IBM machine that Oak Ridge National Laboratory recently acquired. It does 200 petaflops, which is 200 thousand trillion calculations per second. I mean, super fast.
Sophie: So we have a huge amount of compute power, which is critical for AI to be real. In addition there’s the second law, Metcalfe’s Law. Bob Metcalfe, who was also part of the Boston community for a long time, was at Xerox PARC when he invented Ethernet. And as you know, before joining IBM I spent a few decades at Xerox working closely with the PARC team. Metcalfe’s Law says that the value of a network is proportional to n squared, with n the number of devices on the network. And it applies not only to the Ethernet but obviously to the World Wide Web and to social networks, and it has created many very valuable companies that we all know today.
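The two exponentials Vandebroek describes can be made concrete with a few lines of arithmetic. The figures below are round numbers for illustration, not IBM or industry data:

```python
# Illustrative arithmetic for the two exponential laws discussed above.
# Numbers are rounded for demonstration purposes only.

def moores_law(transistors_start, years, doubling_period=2):
    """Project a transistor count assuming one doubling every `doubling_period` years."""
    return transistors_start * 2 ** (years / doubling_period)

def metcalfe_value(n_devices):
    """Metcalfe's Law: a network's value grows roughly as the square
    of the number of connected devices."""
    return n_devices ** 2

# Starting from ~1,000 transistors per chip in 1975, a two-year doubling
# cadence reaches the order of 10 billion transistors ~46 years later.
print(f"{moores_law(1_000, 46):.2e}")  # prints 8.39e+09

# Doubling a network's size quadruples its Metcalfe value.
print(metcalfe_value(2_000) / metcalfe_value(1_000))  # prints 4.0
```

Note how the two curves compound differently: Moore’s Law is exponential in time, while Metcalfe’s Law is polynomial in network size, and together they explain why both compute and data exploded at once.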
Sophie: But in addition it has created this huge volume of data, right? The data on the web, together with the structured digital data that many enterprises have today, since many enterprises have started to digitize all their work processes, together with all the data that comes from sensors with the Internet of Things, in manufacturing, and from ubiquitous cameras, et cetera. There is a huge amount of data, and in fact it has increased exponentially over the last decade or more. And so the "AI winters" happened because indeed there wasn’t the compute power and there wasn’t the data to train these neural networks. Today we have the compute power and we have the data. And a huge amount of progress has been made in neural networks over the last five years, since 2012, when a deep learning neural network running on a graphical processing unit, a GPU, for the first time won a competition for image recognition. And in fact it’s in these narrow areas that AI has superhuman quality and super speed. So for these reasons, these two exponentials, AI is real. In fact I would say artificial intelligence itself is now at the beginning of an exponential curve: we are creating exponentially fast new insights that individuals, no matter what industry you’re in, can use to make fast, real-time decisions. It can also accelerate the discovery process in life sciences and in research and development overall. So it’s being used, and it really has the capability to impact multiple fields.
Elizabeth: So that’s a great explanation of what has really enabled this shift and why AI is such a ubiquitous topic for business leaders today. What does it allow, what does AI allow businesses to do that might have been hard or even impossible to do a decade ago?
Sophie: Yeah, so it allows businesses to increase their effectiveness and efficiency from the bottom-line point of view, from a profitability point of view. But it also allows them to create whole new business models and new revenue opportunities. Let me give an example of the first one. Take virtual agents, which fit in the category of narrow AI; we’ve just passed that phase and are in the phase of broad AI today, and we can talk about that before we get to general artificial intelligence. The virtual agents or chatbots that many of us know and interact with today as part of customer service just didn’t exist a decade ago. Everything was done by call center agents who had to leverage big databases to get you answers to your questions, et cetera. Well, today virtual agents can do most of that in a very effective and efficient way. In fact some of these virtual agents today will very quickly assess whether you’re an extrovert or introvert and adjust their language according to your style.
Sophie: Also automation. For example, if you drive your car through a tollbooth today, it’s totally automatic: license plate recognition reads the plates, and in the backend processes you actually get charged for driving through the tollbooth. Most of these processes were done manually in the past. Pictures would be taken of license plates and sent to India to be processed, people would input the plate numbers into the system, and then you would get billed. These are all the transactional, routine, very narrow, very specific processes that are automated today.
Elizabeth: This all sounds fantastic. And as a business leader I can think, you know, why wouldn’t I want to see greater efficiencies? But are there things that I perhaps need to be thinking about around the risks of machine-learning-based tools?
Sophie: Yes, definitely there are risks. And for many enterprises, and at IBM, it’s top of mind. We are creating tools and capabilities, as part of Watson OpenScale and other toolkits I’ll highlight, to help enterprises deal with risk. It’s also starting to become top of mind for boards and directors of companies, to make sure that the risks related to deploying and embracing artificial intelligence across the organization are addressed. Let me just highlight a few. Number one is making sure that AI algorithms are fair: that as AI assists humans in making decisions, the outcomes are fair and ethical and not biased. So IBM Research just open-sourced the AI Fairness 360 Toolkit, and anybody can help us improve it. You can pull in your algorithm and it’s checked for all kinds of biases. Today we check for gender bias, age bias, race bias, and things like zip code bias. One of the reasons for bias is the data set with which the algorithm is trained, especially in enterprises. Enterprises don’t have huge volumes of data like in the consumer world, where there could be a huge number of cat pictures to train an image algorithm to recognize a cat. An enterprise, let’s say a hospital or a school, has a limited amount of data to train its algorithms, so the data might not have a sufficient amount of diversity and inclusion within it, and the algorithms become biased.
Sophie: One example: human resource departments are starting to use AI to help source new employees. If you source software developers leveraging an AI algorithm that’s trained on your own data, the algorithm will learn that most of your software developers are male, because that’s who you hired in the past. So the risk is that the algorithm, looking at all the resumes, might disproportionately recommend more males than females for software engineering jobs. We all know that gender is irrelevant for a software engineer; it just so happens that the historical data was biased. So the tools will then recommend building a more diverse dataset.
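The kind of check the AI Fairness 360 toolkit automates can be illustrated with one of its standard metrics, disparate impact: the ratio of favorable-outcome rates between an unprivileged and a privileged group. This is a hand-rolled sketch on made-up hiring data, not the AIF360 API itself:

```python
# Minimal sketch of a fairness metric on made-up resume-screening data.
# The real AI Fairness 360 toolkit computes this (and many other metrics)
# for you; this just shows the underlying idea.

def disparate_impact(outcomes, groups, unprivileged, privileged):
    """Ratio of favorable-outcome rates: unprivileged / privileged.
    A common rule of thumb flags values below 0.8 as potentially biased."""
    def rate(group):
        selected = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(selected) / len(selected)
    return rate(unprivileged) / rate(privileged)

# Toy resume screen: 1 = recommended for interview, 0 = rejected.
outcomes = [1, 1, 1, 0, 1, 0, 1, 0, 0, 0]
groups   = ["m", "m", "m", "m", "m", "f", "f", "f", "f", "f"]

di = disparate_impact(outcomes, groups, unprivileged="f", privileged="m")
print(f"disparate impact: {di:.2f}")  # 0.25 here: well below the 0.8 threshold
```

In this toy data, 4 of 5 male applicants but only 1 of 5 female applicants are recommended, so the ratio 0.2 / 0.8 = 0.25 would flag the screen for review.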
Elizabeth: Okay, so what are some of the other risks?
Sophie: Okay. The other risk is that today’s algorithms, especially deep learning and neural networks, are like black boxes, right? The algorithm will give you an answer. Yes, you get a loan, or no, you don’t get a loan, or yes, you have skin cancer, because narrow AI is better than humans at identifying skin cancer. But it can’t explain. It doesn’t explain why or how it got to that answer. And so explainability is very important. That is a risk: that in your business you won’t be able to explain how certain answers were reached. And in fact in the European Union, with GDPR, the General Data Protection Regulation, explainability is a requirement. Companies can’t even use AI if it cannot explain itself; everything needs to be explainable.
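One reason simpler models are easier to defend under rules like GDPR is that each input’s contribution to the decision can be read off directly, unlike with a deep network. A minimal sketch, with invented loan features and weights rather than any real scoring model:

```python
# Sketch of an inherently explainable scorer: a linear model whose
# per-feature contributions sum to the decision score. Feature names,
# weights, and the threshold are invented for illustration.

WEIGHTS = {"income": 0.5, "debt_ratio": -0.8, "years_employed": 0.3}
THRESHOLD = 1.0

def score_with_explanation(applicant):
    """Return (decision, per-feature contributions) for a loan applicant."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    total = sum(contributions.values())
    return total >= THRESHOLD, contributions

approved, why = score_with_explanation(
    {"income": 3.0, "debt_ratio": 0.5, "years_employed": 2.0}
)
print(approved)  # True: total score 1.7 clears the 1.0 threshold
print(why)       # each feature's signed contribution to that score
```

A black-box model would return only the first value; the second, the itemized "why," is exactly what explainability requirements ask for.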
Elizabeth: So tell us a bit about the MIT-IBM Watson Lab and its mission.
Sophie: Yeah. Thank you for the question. This is a very exciting partnership between IBM Research and MIT that we established a little over a year ago. It’s a $240 million commitment by IBM over 10 years, and it is a unique university-industry collaboration lab; no other of its kind exists in the world. In fact the dean of engineering at MIT, Anantha [Chandrakasan], and I started brainstorming in the summer of 2017, I believe it was...
Elizabeth: You told me it all happened rather quickly.
Sophie: It happened in three weeks. Maybe four weeks if I include the lunch Anantha and I had before our senior vice president talked to the president of MIT on a Monday morning; three weeks later, on a Friday, the contract was signed. The vision was to create this joint lab of about 100 researchers, including IBM researchers, MIT professors, and students, and we celebrated the first anniversary in September 2018. We have 49 joint projects that are active today, with about 100 people, or the equivalent of 100 people, on those projects. And they are really research projects, not applied technology. We really wanted to make sure that those projects are addressing the most difficult problems in AI, and they are doing exactly that.
Sophie: So there are four pillars. We defined four pillars. One is around core AI algorithms, and that is exactly where we are addressing difficult issues like AI that can explain itself, or learning from small data: different methodologies to learn from small data, since, for example, hospitals have a small set of patients and thus a small set of data.
Elizabeth: To address the problem you mentioned before, that on the enterprise side there often is not enough data to really train the algorithms.
Sophie: Not the way it was done in the past, in the narrow AI phase. Now we are in this phase of broad AI, where systems will have to learn from small data. So several of the projects in the MIT-IBM Watson AI Lab are associated with that.
Sophie: The second pillar is applying AI to industries. Today we’re looking at three industries. The first is healthcare and life sciences, because IBM’s Watson Health business unit is headquartered right here in Cambridge, Massachusetts. The second is security, and of course security is relevant for all industries. And the third industry that we focus on is financial services, so finance and economics. So that’s the second pillar.
Sophie: The third one we call the physics of AI: what are the hardware challenges to doing efficient and effective training in the cloud as well as at the edge? And then the fourth category, one that I’m very excited about, is a category we call prosperity enabled by AI, or shared prosperity enabled by AI. It’s looking at challenges like how to create AI systems that have true moral values and can make ethical decisions. What is the future of jobs, for example, is a project that we have in that category. So yeah, these are the four pillars: core AI algorithms, AI for industries, the physics of AI, and prosperity enabled by AI. And now that we’ve celebrated our first anniversary, MIT and IBM have just agreed that we will open our doors for other large enterprises that are truly interested in being at the cutting edge of research in artificial intelligence to join our lab. So that’s what we’re working on next.
Elizabeth: Two areas that we at MIT Technology Review are spending a lot of time reporting on are cryptocurrencies, or blockchain, and quantum computing. I would really like to hear what you all are doing in those areas, and maybe we can start with crypto. I guess the question I’d have is: how do we think about blockchain as being more than a curiosity, as something that’s trustworthy and stable and can enhance the business context in which it’s used?
Sophie: Yes, you said the right word there. It’s all about trust. At IBM, when we talk about blockchain: a lot of research was happening in blockchain for several years in the research labs, and IBM created a blockchain business unit about two and a half years ago. I see three kinds of areas where blockchain is being used today, or where there are a lot of prototypes and experiments. One is indeed cryptocurrencies like Bitcoin. That’s how most people know blockchain: they think about Bitcoin, and that’s the whole area of cryptocurrency. At IBM we are not interested in cryptocurrency, because our customers are not interested in cryptocurrencies. We are interested in the underlying blockchain platform. In fact a lot of the underlying platform has been open-sourced as Hyperledger, run by the Linux Foundation, and IBM has contributed significantly to the code and will continue to do so. The next area is having blockchain, this underlying platform, used in value chains to track valuable goods, or valuable digital goods, as they go from where they originate to where they are used. And I can give some examples. The third area where it’s valuable, especially in the financial services industry, is digital identity, and I can give some examples there too. What enterprises are interested in is being able to create trusted transactions among partners that might not inherently know each other, like small businesses, larger businesses, distributors, or farmers.
Sophie: And so it’s about creating that trust in a distributed way. The blockchain networks that we have created with our clients are private networks. They’re not open for everybody to join; they are private, permissioned-only networks. One of the first examples we did, starting many years ago and now in operation, is a blockchain network for food safety that we created with Walmart. Walmart was a pillar member of this blockchain network, and a lot of Walmart’s suppliers are on the network. It tracks food from the farm to the table. And the intention here, especially, is that if there’s an outbreak of E. coli or any other food safety issue...
Elizabeth: Romaine lettuce.
Sophie: Yeah, romaine lettuce. I mean, it happens all the time. We knew that the outbreak was in California somewhere, but everything, including lettuce grown here in Massachusetts, was taken off the shelf, right? That’s what happens today. It takes a long time to trace where an outbreak happened. But if you track all your goods through the blockchain, then within two minutes or faster you can trace where this particular lettuce came from. Then you still need to go in and see at which point in the chain, from farm to store to table, the E. coli actually contaminated the food. But that’s easier than first figuring out even where it came from. Right?
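The tamper-evidence that makes such a traceability network trustworthy comes from the basic chained-hash structure of a blockchain: each record carries the hash of the previous one, so altering any earlier step breaks every later link. A stripped-down sketch with toy records, not the Hyperledger Fabric API:

```python
# Toy hash chain illustrating tamper-evident provenance records.
# Real permissioned blockchains (e.g., Hyperledger Fabric) add consensus,
# signatures, and smart contracts on top of this basic structure.
import hashlib
import json

def add_block(chain, record):
    """Append a record, linking it to the previous block's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"record": record, "prev_hash": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    chain.append(body)

def verify(chain):
    """Recompute every hash; any tampered record breaks the chain."""
    prev_hash = "0" * 64
    for block in chain:
        body = {"record": block["record"], "prev_hash": block["prev_hash"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if block["prev_hash"] != prev_hash or block["hash"] != expected:
            return False
        prev_hash = block["hash"]
    return True

chain = []
for step in ["farm: harvested", "truck: shipped", "store: shelved"]:
    add_block(chain, step)

print(verify(chain))               # True: intact farm-to-store trail
chain[0]["record"] = "farm: ???"   # tamper with the origin record
print(verify(chain))               # False: every later link now disagrees
```

Because each participant can re-verify the whole chain independently, no single party, not even the network operator, can quietly rewrite where a shipment came from.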
Elizabeth: Great. Well, absolutely wonderful to hear from you and to have the chance to talk to you again. And it’s great having the lab just around the corner. It’s a wonderful facility and it’s good to have you in the neighborhood. So thank you.
Sophie: Oh, thank you so much. It was my great pleasure.
Elizabeth: This episode is brought to you by Darktrace, the world leader in cyber AI technology. Darktrace is headquartered in San Francisco and Cambridge, England. It has around 2,500 customers around the world who use its software to detect and respond to cyber threats to their businesses, users, and devices. Darktrace has built innovative machine learning technology that can spot unusual activity. To find out more about how that works, I talked with the company’s CEO, Nicole Eagan.
Elizabeth: Nice to be able to talk.
Nicole Eagan: No problem.
Elizabeth: Appreciate you doing this. At Darktrace you compare your brand of cyber security to the human immune system. And I hope you can explain to us what you mean by that.
Nicole: What was really happening, I think, is that the security industry was obsessed with trying to keep the bad guys out. And what we came to recognize is that many times the very sophisticated attackers, such as the nation-states, are going to get into any network that they want to. So we decided to turn the problem the other way around and assume that the bad guys were inside, or were going to be able to get inside. That led us to this idea of basing our artificial intelligence on the principles of the human immune system. If you think about the human body’s immune system, it has an innate sense of self that allows it to know what’s not self and mount a very precise and rapid response. That’s exactly how our artificial intelligence works. It’s embedded inside each one of our customers’ companies, and it’s just learning a sense of self, what’s normal: what we call the "pattern of life" of every user and device connected to that network. And that allows us to find things that are out of the ordinary and literally stop the attacks, or neutralize them, in their tracks.
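The "pattern of life" idea can be sketched as a simple statistical baseline: learn what is normal for each device, then flag readings that deviate far from it. Darktrace’s actual models are far more sophisticated; this toy z-score detector just illustrates the self/not-self framing, with made-up traffic numbers:

```python
# Toy "pattern of life" anomaly detector: learn a baseline of normal
# activity, then flag large deviations. Real products model many
# dimensions of behavior, not a single traffic statistic.
import statistics

def learn_baseline(observations):
    """Learn a 'sense of self' from normal activity (e.g., MB per hour)."""
    return statistics.mean(observations), statistics.stdev(observations)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag activity more than `threshold` standard deviations from normal."""
    mean, stdev = baseline
    return abs(value - mean) / stdev > threshold

# A device's normal hourly traffic (made-up numbers, in MB).
normal_traffic = [10, 12, 11, 9, 10, 13, 11, 10, 12, 11]
baseline = learn_baseline(normal_traffic)

print(is_anomalous(12, baseline))   # False: within the pattern of life
print(is_anomalous(500, baseline))  # True: e.g., a bulk data exfiltration
```

The key property, as Eagan notes, is that the detector never needs a signature of a known attack: anything far outside the learned self is suspicious, including attacks nobody has seen before.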
Elizabeth: And how do you see, more generally, cyber attacks changing these days, be they coming out of nation-states or from individual bad guys, cyber criminals?
Nicole: I was meeting with the chief security officer of one of our customers recently, and I think he had a great way of describing it. He said, "Just think: there’s a team somewhere else in the world, and that team’s full-time job is thinking about how to either steal your intellectual property or somehow get information from you." And that’s really what companies are up against. The reason is the kind of cyber arms race we’re in: we’re used to governments fighting against governments, and while that’s still taking place, we now have this whole new dimension where nation-states may actually be attacking companies. That means the digital battlefield has really shifted, and that’s something most corporations really haven’t had to defend against in the past. Now combine that with the fact that these nation-states in many cases can also be aligned with a very strong global cyber-crime ring. That kind of cooperation between those entities is also a new dimension. So that’s what companies are up against that’s quite new and quite novel compared to the attacks of maybe five or 10 years ago.
Elizabeth: Okay. So when it comes to what Darktrace does, are you using artificial intelligence to detect attack, to defend against attack, or both?
Nicole: That’s an excellent question. In some cases companies use artificial intelligence simply to automate human processes. So for example, each company usually has a security operations center with a number of threat analysts and incident responders. One approach says, well, why not just have AI learn from the steps they take in what’s called the playbook to respond to breaches, and automate it? That can give you a bit of an efficiency gain, but at the same time it’s not going to be a game changer. The other thing I’ve seen AI used for is analyzing all of the historical attacks that have occurred on other companies and trying to use that as an indicator of future threats. While it sounds very interesting and practical, it’s actually fundamentally flawed, because the attacks change so rapidly. In fact, in many cases there are new strains of attacks where a single line of code is changed, and now what are called the signatures no longer match. In our case, we are using numerous types of unsupervised, supervised, and deep learning to be able not only to find the attacks but to have the artificial intelligence know how to investigate the attack, and also, most importantly, how to actually take action. And that’s very rare. There’s in fact no other company using AI to take the action.
Elizabeth: Right. So you are doing both things, then. You’re both detecting and taking action.
Nicole: We’re really using the AI to detect, investigate, and take the action. And that last part, taking the action, is the really difficult and really interesting bit. It’s great because it can respond to attacks very quickly; in fact, on average it can respond to an attack in less than two seconds. When these attacks move at machine speed, that’s absolutely critical. But the other thing we found, from a practical perspective, is that it takes time for people in the security organization, for whom this may be the first time they’re even working with artificial intelligence and being augmented by it, to actually build that trust. So we’ve created a whole new capability of having the system make recommendations. What if the AI recommends what action it would take and has a human confirm it? And once the humans start seeing, wow, it’s making the right recommendation every time, they build trust and put it into what we call active mode. Having done this now over the past five years across nearly 2,500 companies, we’ve gotten really good at understanding what it takes to build that trust relationship, and our algorithms have gotten really strong and really smart at responding to these attacks in real time.
Elizabeth: So as the defense gets better, isn’t it fair to say that attacks, too, will get better, perhaps using AI to fight back against AI-oriented or AI-organized cyber defense?
Nicole: You’re absolutely right, although it’s kind of early days and we’ve only seen indications that it can go in that direction. We’ve seen things like behavioral attacks, where the AI might actually learn the style and mode of communication that you use in, let’s say, email. It’s been somewhat basic machine learning at this stage. But we do fully expect that there will be a whole new category of attack called offensive AI, which means the attackers are going to start to use various forms of machine learning, AI, and eventually deep learning as part of their attacks. That will change this whole industry overnight. And I think, by and large, that’s something a lot of executives probably haven’t contemplated yet.
Elizabeth: Right. It’s very interesting, because as you were talking about the way Darktrace takes stock of what "normal activity" is on a network, it occurs to me that there might be other use cases for that information, or that insight. And I wonder, beyond cybersecurity, if you’ve thought about looking at normal activity to help with other kinds of things, like, say, regulatory compliance or risk management.
Nicole: Absolutely. What’s been interesting is that we’ve created a really unique dataset on behalf of our customers. Each one of them who uses Darktrace for security today has embedded artificial intelligence that’s learning that sense of self and is continuously learning and updating. And that’s a dataset that can be used for other things. It could be used for regulatory compliance; in fact, we have some Darktrace customers using us today for compliance with HIPAA and HITRUST in health care, or with things like DFS, the New York State regulations for financial services. So we already see early indicators of how these artificial intelligence models and that unique dataset can be leveraged. One really interesting use case is mergers and acquisitions. We have some companies using us in the due diligence phase of M&A to get more visibility into the target asset’s environment. Today they’re using it to see whether there might be a competitor or a nation-state inside that network trying to steal intellectual property, for example, but there are much broader types of M&A due diligence it could be used for. And finally, we have some customers using us for compliance with data privacy rules like GDPR, by seeing what traffic might be going in and out of Europe. So absolutely: although today we are only unlocking the power of that dataset and our AI models for cybersecurity, we could decide in the future to help customers use other keys to unlock it and deliver additional value.
Elizabeth: And do different things with that information. Yeah, it’s fascinating. Nicole, thank you so much for talking to me about this.
Nicole: Thank you very much, Elizabeth.
Elizabeth: That’s it for this episode of Business Lab. I’m your host, Elizabeth Bramson-Boudreau. I’m CEO and publisher of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology. You can find us in print, on the web, at dozens of live events each year, and now, in audio form. For more information about the magazine and the show, please check out our website at TechnologyReview.com. Our show is available wherever you get your podcasts. If you enjoyed this episode, we hope you’ll take a moment to rate and review us on Apple Podcasts. Business Lab is a production of MIT Technology Review. The producer is Wade Roush, with editorial help from Mindy Blodgett. Special thanks to our guests Sophie Vandebroek and Nicole Eagan, and thank you to our sponsor Darktrace, the world leader in AI technology for cyber defense. Thank you for listening. We’ll be back soon with a new episode.