As the internet took hold and hacks, data breaches, and other cyber threats became routine, organizations and governments battled back through the steady development of cybersecurity technologies and strategies. But success in providing true cybersecurity has been elusive. Can we learn from some of the mistakes made along the way?
Chris Wysopal, CTO and cofounder of Veracode, has been involved in cyber defense since the beginning. As a vulnerability researcher at the seminal hacker think tank the L0pht, he has spent decades demanding secure technologies from influential tech companies.
In this episode, Wysopal shares his work in the early years of cybersecurity, relating stories such as his 1998 testimony before the Senate on computer security. At that time, he argued for regulations on large companies like Microsoft in order to enforce accountability and the development of thoughtful, safer code that protects consumer privacy. These initial concerns have only grown, as there is still scant protection against code and firmware that allows for breaches. Where do we go from here? Wysopal shares some wisdom and some warnings.
Business Lab is hosted by Elizabeth Bramson-Boudreau, the CEO and publisher of MIT Technology Review. The show is produced by Katherine Gorman, with editorial help from Emily Townsend. Music by Merlean, from Epidemic Sound.
From MIT Technology Review. I'm Elizabeth Bramson-Boudreau. And this is Business Lab: The show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace. Today we're going to focus on the history and evolution of cybersecurity and how those early approaches reverberate today. In other words, has the response to the very earliest cyber threats shaped how we look at security today?
Elizabeth Bramson-Boudreau: And are there any lessons for the ways we practice our digital safety now? This is the second in our series on cybersecurity, where we are exploring everything from the latest on hacking attacks to what organizations can do to better protect their people and their data. Our guest today is Chris Wysopal, the CTO and co-founder of Veracode. He's been active since the very beginning in the ongoing battle for digital security. I'm here with Chris Wysopal, who is the co-founder and chief technology officer of Veracode, an internet cybersecurity firm. Before Veracode, Chris had quite a long history in cybersecurity, and I think we're going to start there. Can you tell us, Chris, about your cybersecurity backstory?
Chris Wysopal: Yes, definitely. I started in cybersecurity probably like in the late ‘80s on bulletin board systems before the internet and it had this kind of underground appeal like alternative media sort of like the ‘zine world where you were kind of communicating one-to-one with this alternative space. Some of the files on these systems talked about hacking the phone system and it was the kind of information you couldn't get anywhere else. I was a college student at the time. It was kind of intriguing and I think that put me on a journey of trying to explore information around computers and networks that I did all through the ‘90s.
Elizabeth: So how have you seen the landscape evolve? You've had quite a long career in this, so you've really seen the landscape change, I would think, over those 20, 30 years. How would you characterize the changes you've observed?
Chris: Yes, so in the early ‘90s I was a software engineer. I actually worked at Lotus, right here in Cambridge, and we got connected to the internet. We started thinking about, what if we had our software running on the internet, so people over the internet could connect to our software? And that opened my mind to the idea that, “wow, that sounds like there are going to be all these security issues.”
Elizabeth: At that stage, what made you see that access to the internet would invariably lead both ways, in both directions?
Chris: So I started working on software that was connected to the internet, and it started to become a difficult problem to make it secure, so that people couldn't just get access to the data behind your software. It was actually hard to build software that was secure. It was almost like we didn't know how. That really opened my eyes: as more and more things got connected to the internet, it was going to become really problematic.
Elizabeth: And so, what is the biggest issue you see now? Given that you were there at the very beginning, you saw a lot of things emerge, and your own history is that you spent a period of time as a hacker. What do you see today that keeps you up at night? What do you think are the biggest things that not only people listening to this podcast, but perhaps our lawmakers and our leaders, ought to be thinking about?
Chris: You know, the thing that scares me is that people build systems, software, IoT, and they don't really think about security from the very beginning. Either they don't think about it at all, like they don't care, or they think about it at the very end and do the bare minimum, and they end up putting a product out there that is inherently going to get breached. Inherently, the problems are going to be found in it. And I just think that we continuously put out a stream of technology that is fundamentally broken from a security standpoint, and we're constantly cleaning it up.
Elizabeth: So, are you talking about software? You're talking about IoT-enabled devices? You're talking about all of these things? And why aren't people who create these things concerned about their security?
Chris: Yes. This was something we talked about when I testified before the Senate in 1998, which was the first hearing on government computer security. Some of the things we talked about were that the vendors don't know what to do; the vendors don't care, because their customers aren't asking for it; and the vendors aren't liable. They're licensing the software, and if anything goes wrong, they've disclaimed any liability. We had a market where people were able to put things out there with no downside from the problems that stemmed from their inability to secure them. One of the biggest players at the time was Microsoft. Microsoft was shipping all this software, and it had lots of vulnerabilities. Any businesses running the software were getting broken into, and that was just the status quo. The party line of Microsoft was, you know, “No, we build secure software,” and our party line was, “No, you don't. We're going to show you the vulnerabilities in the software.” And there was a period of time where you had to raise awareness and almost shame these companies into building better things that they would sell.
Elizabeth: So do you think that at the time Microsoft really did understand that they were building products that weren't secure or do you think when they were saying our products are good enough they were being disingenuous?
Chris: I don't think they understood what they were building. The corporate PR line is “we build secure software; that's our position.” They were able to build security features like authentication and cryptography. Actually, they did that fairly poorly, but they didn't understand vulnerabilities. They didn't understand that mistakes in the code could be leveraged by attackers to get at the data behind the application, to run their own commands on the software. And that was where hackers came in and showed the vendors what was possible if you were the bad guy. That's how I saw my role, and the role we had at the L0pht, the hacker group I was part of: publicizing that the vendors didn't know what they were doing and that they had to start building software differently.
Elizabeth: This is great. Let's put this into context a little bit for people who are listening. You testified in front of Congress in 1998. Can you paint the picture for me of why you were doing it and what your goals were, what you were trying to accomplish at that point in time? And then I'm going to come back and ask you to link it up to where we are now, where we've seen more recently people in front of Congress, Mark Zuckerberg quite notably, and what connections you see and what experiences you're having when you watch that sort of thing happening all these years later.
Chris: You know, we testified in 1998, but for a few years running up to that, we started doing what we call vulnerability research. We would take a popular product like Internet Explorer, set it up in a lab, and reverse engineer it. We would try to figure out where the problems were, and we would figure out ways to compromise it. Now we think of it as something everyone knows about: you click on a link and your computer is running under someone else's control. I think everyone knows the risk now that if you click on a link and it's malicious, it could lead to someone else controlling your computer. That's the exact kind of vulnerability we were researching in '96, '97 and trying to get people to pay attention to. We started doing this with just a goal of research and exploring, but after a while we realized that consumers don't know this is a problem. The vendors don't know, but consumers don't know either. And so we started to take on a consumer advocacy role, where we would try to publicize that insecure software is going to lead to your computer being compromised.
And we got some press for that, and that got us on the radar of the Senate when they were starting to tackle this issue. In '98, the Government Accountability Office issued the first report that looked at the security of all the different federal agencies. And so the Committee on Governmental Affairs, with Senator Thompson as the chairman at the time, wanted to have a hearing to talk about the results and how poorly the government did, probably because they were running a lot of Microsoft software at the time, but lots of other vendor stuff too. I don't know who put the bug in their ear, but they wanted more than the government viewpoint; they wanted an external viewpoint on this. We ended up getting tapped as sort of expert testimony, as the outsider view. Hackers have that great outsider view into problems, one that isn't motivated by the normal government or commercial interests.
Elizabeth: And so, bridge between that experience and more recent ones, like Mark Zuckerberg's. First of all, does it seem different to you all these years later, what you saw when he was there talking about Facebook? Or does it seem shockingly familiar? What are the comparisons or the differences you want to share with us?
Chris: Facebook has had a lot of different problems, many of them of their own creation, just with the way they're dealing with privacy and their business model. Some of the problems are due to the fact that it's challenging to secure software that big and complex, and they've had breaches, right? It's both security and privacy, when you think about it. The difference today is that the risks are just so much higher, because there's so much more dependence on systems like Facebook. Just think about the amount of personal information in Facebook that could be used against people, information people clearly wouldn't want exposed. And to some degree it is being used against our whole system of democracy, right? Against society, and not just against individuals. So the stakes seem so much higher now than back in '98, when we were just trying to raise awareness, saying, “we see this is a problem that's going to get worse.” And then I look back and say, you know, fundamentally a lot hasn't changed. Fundamentally, there still is no liability for software. If Facebook gets breached, it's like, “oh well, it's hard to build secure software,” and it seems like everyone who builds software gets a pass when they get breached. So that fundamentally hasn't changed. But I think what has changed is the impact these breaches have on society. It just pervades our entire lives.
Elizabeth: Do you think that lawmakers and regulators creating new laws to implement better cybersecurity is the answer? And if so, are there particular measures that you are most interested to see implemented?
Chris: Yes, so I like to look at analogies in other industries that the government has regulated. Food is regulated with ingredients labels and cleanliness standards and things like that. Safety is regulated in a lot of places: cars, airplanes, all kinds of transportation, the workplace, and even consumer products. I like to look at those analogies and ask, how do we get the benefit without stifling those industries? Obviously, safety in cars has been a great success story, right? We're so much safer driving now, and the car industry seems to be doing fine. So how can we have regulation that improves security, which is pretty close to safety, especially when we start talking about IoT and devices that are controlled by computers? How can we do it in such a way that we don't stifle those industries? We have to tread very carefully, but I think we should look at some of the things that have worked. For food, ingredients labels have worked: just transparency about what's in there, letting the consumer decide. And obviously, if something turns out to cause cancer, we remove it from the market.
But other things are basically risk based, based on your health. So we can think about transparency of what went into the software. Was it processed in a facility that made nuts, you know? How is it created? And what are the ingredients? What pieces of open source are in there, what cryptography does it use? I think ingredient labels and transparency are a good way to start. But the other thing that I think needs to happen is, when there are breaches that are big, treat them like a transportation accident and try to get to the root cause of what happened. We always seem to do that with transportation, but a lot of times when it's a breach we just say, “this is how many records were compromised, we're going to buy everyone credit monitoring, and we're going to try to not have it happen again.” But we don't really know what happened. I think maybe 10 percent of the time the news comes out, and usually that's because the attacker group is making claims.
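The "ingredients label" Wysopal describes corresponds to what the industry now calls a software bill of materials (SBOM). As a rough illustration only, with all component names and versions invented for the example, a minimal machine-readable label and the kind of consumer question it answers might look like this:

```python
import json

# A minimal "ingredients label" (SBOM) for a hypothetical application.
# Every name and version below is invented for illustration.
sbom = {
    "application": "example-shop",
    "version": "1.4.2",
    "components": [
        {"name": "openssl", "version": "3.0.13", "type": "crypto library"},
        {"name": "log4j-core", "version": "2.24.1", "type": "open source"},
        {"name": "stripe-sdk", "version": "7.1.0", "type": "third party"},
    ],
}

def uses_component(sbom: dict, name: str) -> bool:
    """Answer the consumer question: is this ingredient in the product?"""
    return any(c["name"] == name for c in sbom["components"])

# When a component is found to be vulnerable, anyone holding the label
# can check exposure without reverse engineering the product.
print(json.dumps(sbom, indent=2))
print(uses_component(sbom, "log4j-core"))
```

Real SBOM formats such as SPDX and CycloneDX carry much more detail (licenses, hashes, supplier data), but the core idea is the same transparency Wysopal argues for.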
Elizabeth: Right. It's so interesting what you're saying, because with this sort of postmortem analysis about what happened, certainly you can look at airplane accidents, where they lay every single piece out in a warehouse and look at absolutely everything and really try to figure out exactly what the black box is telling them, and so on. And then you talk about things like a food safety accident: you have a situation where romaine lettuce goes bad, and forget about it, you'll never find romaine lettuce until that crisis has passed. I think there's some real benefit from that kind of approach.
But the challenge, and I'm interested in your point of view, is that there's something inherently easier for people to understand about what went into making that prepared, pre-washed salad, or what the different pieces on the airplane are that interact with one another. Somehow, when it comes to software vulnerabilities and a great big network security breach, maybe it's the absence of knowledge and real insight into what all the ingredients are. For all but a precious few people, I would think, the ingredients laid out there wouldn't render much meaning.
Chris: But you have experts that are layered on top of this basic stuff. There are nutritionists, there are doctors, and they can give advice based on those basics. And I would argue that any kind of airplane crash involves a very complicated chain of failures. If you look at the most recent ones, with Ethiopian Airlines and Lion Air, it's a pretty complicated chain of software, training, and maybe some bad design that goes into it. It's never one simple thing; it's more like some sensor failed, which caused some software to fail, and then there was a training failure. It's usually two or three things combined. And so I think that actually is pretty similar to what we see in the cyber world.
Elizabeth: I think maybe I'm more thinking about the lawmakers. I can't help but replay in my mind the questions to Mark Zuckerberg about “Well, how do you make money?” as really betraying an ignorance in Congress about the way that business, and probably a whole lot of others besides, run on the modern internet. That leads me to find it a little bit hard to imagine we could get our way to a regulatory authority, whatever that means, that could have a real ability to pull things apart. I also think it's an interesting and compelling thing to drive toward, but I wonder about your point of view, particularly since you engaged Microsoft back in the day, on the power of these massive internet companies and the way they will respond to similar types of noises from lawmakers. I think airlines are in a different situation, and food companies are in a different situation. How do you think Facebook or Instagram or Twitter or Google or whoever would respond, or have responded thus far, to these sorts of ideas?
Chris: A lot of those companies did respond. Just take Microsoft: in, I think it was actually 2002, Bill Gates came out and said, “we're going to stop development and everyone's going to get trained,” and they called it Trustworthy Computing. They set themselves on a path to building secure software, and probably 10 years after that, they actually did a really good job, right? And I think part of it was to protect their brand; they had to do this. I look at a lot of this, and companies do it to protect their brand. If you look at the cloud providers, like Amazon with AWS, they're very strong on security, because they need to have a strong brand around security or no one's going to use their service. So if there's alignment between the company's brand and what they're providing securely, I think it can work. But the problem with Facebook is that I don't think that alignment is necessarily there, right, because of the way people are giving up their information. The alignment isn't always there. And if we look at cheap IoT manufacturers with names we've never really heard of, coming out of China, there's no brand alignment at all, and they don't care at all. That's where I think the problem is bigger: not with the larger players, but with the next thousand players, who just keep always having a vulnerability somewhere, so that everyone's network is always compromisable. If you look at any kind of system, the further you get away from the top players and the brand names, that is the problem. WordPress is actually pretty good, but you could put a plug-in from some unknown player into your WordPress blog site, and that's now the compromise. That's where I think the regulation or transparency has to focus: on lifting the bottom up, not making the top players stronger.
Elizabeth: I want to move now to what's going on today and some of the innovations that have been developed for greater security. We now live in a world with two-factor authentication around our passwords. How long do you think that in particular will be the standard, and what other factors, like biometrics and things beyond that, do you think we'll move toward, if any?
Chris: So I think two-factor is one of the greatest innovations, especially where you can use your mobile device as the second factor, because it's almost as easy to use as a password, and it's almost as easy to set up as a password. I think we're going to see two-factor last a very long time. Biometrics, you know, that's fine for a hardware device, something that is your door lock or your phone. It starts to get more problematic when those biometrics have to be shared widely over lots of systems, and you have to keep that secure. If you think about the biometrics on your phone, that thumbprint never leaves the secure chip on your phone, right? That's one of the things that makes it actually work. When people start to talk about biometrics at the ATM, or at other places that are on a broad network, then I start to worry about it not really helping. And there's actual risk there, because you can change your password, but you can't change your thumb, or at least not easily. I'm not so sure about biometrics.
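The phone-as-second-factor scheme Wysopal praises is commonly implemented as time-based one-time passwords (TOTP, standardized in RFC 6238): the phone and the server share a secret, and each derives a short code from the current time, so a matching code proves possession of the device. A minimal sketch using only the Python standard library, with the usual six-digit, 30-second defaults:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC the time-step counter, then dynamically truncate."""
    if for_time is None:
        for_time = int(time.time())
    counter = for_time // step                      # which 30-second window we're in
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII seed, time 59s, 8-digit code.
print(totp(b"12345678901234567890", 59, digits=8))  # prints 94287082
```

Because both sides derive the code independently, nothing secret crosses the network at login time, which is part of why it is "almost as easy to use as a password" while being much harder to phish in bulk.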
Elizabeth: And how about cybersecurity issues in machine learning applications? We're talking a lot about machine learning and many of the people listening to this podcast are using machine learning solutions to analyze big sets of data. What are some of the concerns that they should be bearing in mind?
Chris: Machine learning can definitely be used to solve some cybersecurity problems. I think it gets into challenging territory when you're trying to separate normal behavior from malicious behavior, because they're just so closely matched. There are extreme cases, where you always log in from Massachusetts and then all of a sudden you're logging in from China for the first time, and we're actually good at that kind of stuff. That's the anti-fraud detection that credit card companies use. So those kinds of extreme anomalies are fine. But it gets more difficult when you're trying to understand whether the behavior of a user on a network is actually malicious behavior versus normal human behavior. Maybe someone's accessing a spreadsheet for the first time because they've just been told to do something. Do you want to block that activity? Do you want someone physically following up on that activity? That could lead to a lot of false positives. So when it gets to human behavior, just protecting your PC or your network, I think it's a challenge, and we're not quite there yet.
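The "first login from a new country" case Wysopal calls easy can be handled with a simple per-user baseline and no machine learning at all; it is the subtler behavioral questions that need statistical models and that generate the false positives he warns about. A toy sketch of the baseline approach (the user names and country codes are invented for the example):

```python
from collections import defaultdict

class LoginAnomalyDetector:
    """Flags a login as anomalous when the country is new for that user.

    A deliberately simple baseline: the first login for a user only
    establishes history, so it is never flagged.
    """

    def __init__(self):
        self.seen = defaultdict(set)  # user -> countries seen so far

    def check(self, user: str, country: str) -> bool:
        # Anomalous only if we have history for this user and the
        # country has never appeared in it.
        anomalous = bool(self.seen[user]) and country not in self.seen[user]
        self.seen[user].add(country)  # update the baseline either way
        return anomalous

detector = LoginAnomalyDetector()
print(detector.check("alice", "US"))  # False: first login just sets the baseline
print(detector.check("alice", "CN"))  # True: new country for this user
```

The hard problem Wysopal describes, e.g. a legitimate employee opening a spreadsheet for the first time, has no crisp rule like this, which is exactly why behavioral models produce so many false positives.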
Elizabeth: And for people who are executives who are running companies who need to think about their security, how worried should they be? And I often wonder you know is this a case of a terrorist attack which statistically is rather unlikely? Or is this more like the threat of a traffic accident on the way to work? Where should I be focusing my worries? And you know the number of companies that would love to tell me it's terribly dangerous in terms of cybersecurity risks that I face here at Technology Review is probably infinite. And how do I filter the signal from the noise?
Chris: Yeah, so it's challenging, because you don't know what risk level you should tolerate as a business. Different businesses have different risk levels: obviously a bank has a different risk level than a media company, right? A media company can survive their website being trashed, but a bank can't necessarily survive losing billions of dollars. There's a different risk tolerance for every kind of business. And that's when it's more of a targeted attack. Something I'm worried about is the random, indiscriminate attack, like we saw with the NotPetya virus, which took down Merck and a couple of global transportation companies; Maersk, I think, was one of them. And to the tune of hundreds of millions of dollars: they were not able to operate their global businesses for weeks, you know? So that's something that every business has to worry about. If I was going to focus my cybersecurity, I would focus on those types of widespread events first, and after that I would focus on whether there are attacks targeted directly at my business, and how I deal with those. I think you can separate it into those two, and it's going to be a little bit different for every business.
Elizabeth: Yeah. And what about people who are entering the cybersecurity field now, who either have been hackers or are just really interested in making the web, and businesses operating on the web, more secure? What do you tell them? Because things have changed a lot. Hackers and hacking have become much more sophisticated; it's certainly a tool of nation states. How do you advise them to think about where they want to build their careers?
Chris: I was actually at this event at VMI in Virginia. It was a capture-the-flag event where 13 different collegiate teams came together to do these cyber capture-the-flags. I spoke to them, and what I like to say is, “start with the basics.” You really need to understand how TCP/IP networking works and how the internet works. You have to understand how operating systems work, like Unix and Windows. You need to know something about coding. You need this broad, basic foundation just to start on, and then you have to decide what you want to specialize in, right? Are you going to be on defense, or are you going to be on the red team? Are you going to do incident response and forensics? What I ended up doing is working with people who develop software to help them develop software better. I think everyone needs that foundation, because there are so many different layers on which you can attack systems. You can attack on the network, you can attack at the OS, and I guess another whole area is the human layer: how do you trick people into doing stuff? I think you need to understand the basics of all those layers and then focus in on where you have an aptitude. Obviously, some people just naturally gravitate to offense or defense. I ended up starting on offense and then moving over to defense, because I liked that better. I would try a few things and then go deep and focus in one area.
Elizabeth: Great. Well this is a very interesting conversation and I really appreciate your taking the time.
Chris: Well thanks for having me here.
Elizabeth: That's it for this episode of Business Lab. I'm your host, Elizabeth Bramson-Boudreau, the CEO and publisher of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology. You can find us in print, on the web, at dozens of live events each year, and now in audio form. For more information about us, please check out our website at TechnologyReview.com. The show is available wherever you get your podcasts. If you enjoyed this episode, we hope you'll take a moment to rate and review us at Apple Podcasts.
Business Lab is a production of MIT Technology Review. This episode was produced by Collective Next with help from Emily Townsend and with editorial help from Mindy Blodgett. Special thanks to our guest Chris Wysopal. Thanks for listening and we'll be back soon with our next episode.