Thomas Malone is a professor of management at MIT’s Sloan School of Management, founder and director of the MIT Center for Collective Intelligence, and author of the 2018 book Superminds: The Surprising Power of People and Computers Thinking Together. The book explores the different ways groups of people make decisions, and how new forms of artificial intelligence, especially machine learning, can help. Malone predicts that AI, robotics, and automation will destroy many jobs—including those of high-skilled knowledge workers—while at the same time creating new ones. By investing in the right kinds of AI, he says, organizations can help keep workers productive and happy—and make sure our “superminds” are actually smarter than our regular minds.
This episode is sponsored by Citrix, the company powering the digital transformation inside organizations of all sizes. In the second half of the show, Citrix's global chief technology officer Christian Reilly explains why machine learning is now a “force multiplier” making all kinds of consumer and enterprise applications more useful.
Business Lab is hosted by Elizabeth Bramson-Boudreau, the CEO and publisher of MIT Technology Review. The show is produced by Wade Roush, with editorial help from Mindy Blodgett. Music by Merlean, from Epidemic Sound.
Elizabeth Bramson-Boudreau: From MIT Technology Review, I'm Elizabeth Bramson-Boudreau, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace. This episode is brought to you by Citrix, the company powering the digital transformation inside organizations of all sizes. Later in the show we'll hear from the global chief technology officer of Citrix, Christian Reilly.
But first we're going to talk with Tom Malone. Tom is one of the smartest people I know studying how organizations think and how computers and people working together can think more intelligently.
Tom is a professor of management at MIT's Sloan School of Management. He's also the founder and director of the MIT Center for Collective Intelligence. Back in 1998 he was one of the first scholars to recognize the emergence of "e-lancing," or what we now call the gig economy. In 2018 Tom published a major book called Superminds that looks at the different ways people make decisions together, and how new forms of artificial intelligence, especially machine learning, can help.
Here at Technology Review we're especially interested in how AI is reaching into the world of knowledge work. We've covered the way robotics and automation are making it harder in some ways for low-wage, low-skill blue-collar workers to hold onto jobs. But these days there are also signs that AI will change the way higher-skilled knowledge workers do their jobs too. That doesn't mean we'll all be outsmarted by computers. But it does mean we'll need to think harder about how organizations can adopt the right kinds of AI to keep workers productive and happy, and what they can do to make sure our superminds are actually smarter than our regular minds. My meeting with Tom Malone was a chance to talk about some specifics. So here's our chat.
So, it's wonderful to see you again Tom.
Tom Malone: Great to be here.
Elizabeth: So we're going to talk about the book you've written and the ideas you're putting across in it. It's called Superminds, and it argues that a group of people can, in some sense, be conscious and intelligent, sometimes in fact more intelligent than any of the individuals in the group. You also argue that computers can make these superminds even smarter.
So first, maybe tell us what got you started thinking along those lines. Was there a moment when you thought, "Hey, there's enough here that maybe I should write a book"? And, you know, basically just tell us what brought you to that place.
Tom: Actually, there was a very specific moment, in 2005. Soon after my previous book, The Future of Work, was published in 2004, I was speaking at a conference at Stanford in Palo Alto, and two of the other speakers were Esther Dyson, the well-known computer industry analyst and investor, and Vernor Vinge, the well-known science fiction writer who, among other things, helped popularize the concept of the singularity.
So the three of us went out to dinner after speaking at the conference that day. And we were talking about Vernor's most recent book, which he was just about to finish at the time, and talking about things he was interested in, and Vernor was talking about what he called superhuman intelligence, something like people and computers, and stuff like that. And we had a really interesting conversation. I had been thinking about what I wanted to do next after my previous book. And by the end of that dinner I had an unusual feeling, which wasn't that I had decided what I was going to do next; the feeling I had was that I had finally admitted to myself what I was going to do next.
And so at the time I called it superhuman computing or superhuman intelligence. Later I thought a better term for it was collective intelligence, and I used that term for quite some time; it's still the name of the research center at MIT that I direct. And then in the course of writing my book, which was kind of a summary of the last 10 or 15 years of thinking about this topic, I realized that an even better term for that thing, instead of collective intelligence or collectively intelligent systems, was superminds.
Elizabeth: What was it that you were seeing in the world at that time that made it kind of clear that this was where you needed to go?
Tom: So in some sense when I wrote The Future of Work I was looking around at the world and saying, what are organizations doing today, and what are the logical extensions, the next steps they could take in the directions they're already headed? More decentralization was one of the things I emphasized in that book. And from my point of view, one of the very nice things about collective intelligence as a way of framing all this is that it's not just saying what's next. It's saying what's the endpoint, and then how are we headed in that direction. So even though people don't usually think of it this way, in some sense the reason to have an organization in the first place is so that some group of people can do things together better. And often that means more intelligently than they could do them if they were just working all alone. So in some sense the endpoint is perfect collective intelligence.
And in fact in my book I talk about that as a useful way of thinking. If you're thinking about how your company could be smarter, a useful thing to do is to think about what you would do if you were perfectly intelligent, if you took into account everything knowable in making every single decision. Of course in most real cases you can't completely do that, but you can begin to. You can think, how far can we go toward being perfectly intelligent? So in that sense it wasn't so much looking around at the world and saying this is the next big thing because of X. It was looking at the world and saying, how can we think about the really long term here, and then use that as a way of projecting the next little while.
Elizabeth: So the book Superminds was clearly written with business leaders, folks listening to this, in mind as your audience. What are the main things you hope they'll take away, either from reading the book, hopefully, or from this discussion about it?
Tom: So in my view the most important contribution of the book is not any single fact or method that you can use. I think the most important contribution, or at least I hope the most important contribution, is a new way of looking at the world. It's a way of looking at the world where you can see superminds all around you: not only other companies, but markets and communities and democracies and all these kinds of things, all around us, all the time. In particular, as a business leader, I think that means you could and should think of your own organization as a kind of supermind. It's a thing. It's an entity. It's an intelligent entity. And then an obvious question is, how can I make my organization smarter? And so the book gives several ways of thinking about that. One is thinking about the different cognitive processes that any supermind, any intelligent entity, needs to do. What kinds of decisions does my organization, my organizational supermind, need to make? What does it have to decide? What does it have to sense about the world in order to make those decisions? What does it have to remember about the past in order to make those decisions well? So each of those questions leads you to a bunch of other possibilities, many of which you may never have thought of before.
Elizabeth: When organizations decide that they want to get smarter by bringing in more computing or more AI, what do you see as the easy problems for them to apply this additional smartness to, and which are the hard ones?
Tom: One kind of easy problem, in a certain sense, I think is the kind of problem you can solve with what I call hyperconnectivity. We've talked a lot about AI in the world recently, and even in this interview so far. I think that an equally if not more important thing that computers can do is to create hyperconnectivity: to connect people to other people, and often to computers as well, at scales and in rich new ways that were never possible before. So we've seen this already. The Internet is perhaps the best example of a technology for creating hyperconnectivity, along with all the things built on it: social networks, Google search, all those kinds of things. And I think that no dramatically difficult new technological things have to happen in order for us to use hyperconnectivity in many, many new ways.
When we move into the area of artificial intelligence, as opposed to hyperconnectivity, the places where AI can help are often the ones where you have enough data, captured in machine-readable form, to teach and use algorithms to do things that either people did before or maybe people could never do before. So, for example, one business function where this is often sort of easy is sales. It's easy to measure the effects of sales. Some people sell more than others, and we've got dollar measures for that. It's harder to measure the inputs of sales, but you can certainly count things like how many customer calls you make and how often you meet with customers, and if you're doing it online, what you say. So there's a lot that today's machine learning systems can learn about sales effectiveness and so forth. In the case of sales, the harder part is generating the actions that may affect the results. So even though a computer can count how many times you call on a sales prospect, a computer can't easily figure out what you're going to say at the beginning of the meeting about your weekend and your kids.
So there's still a need for people there, but computers can do a lot of the analysis to help the whole process be more efficient in many cases. I guess the hardest parts would be where it's hard to even measure the inputs and outputs. So when you're designing a new software product or a new car or something it's not obvious how to even measure the outputs or the inputs.
Elizabeth: How do you think business leaders can shift their thinking about investing in AI or machine learning, from seeing it as cost-cutting, so removing workers or making the workers that you have more efficient, to seeing it as bolstering creativity, making workers feel more self-actualized and happier, so that they are retained and therefore more productive, et cetera?
Tom: I think it's a great question, the question of how we can shift the emphasis. I think in a certain sense the answer is just by doing it. In other words, for various reasons, not all of which I'm sure I understand, we have this great focus on the idea that AI in particular is going to do things that people used to do and then put people out of work. And when you're trying to develop AI applications or apply AI in companies, a lot of people think of it that way. That's kind of the mindset we bring to the problem. But that's certainly not required by economics. In fact, in business there are two ways of making more money. One is to reduce your costs. The other is to create more value and be able to sell it for more. So I think we've been way too focused on AI applications for cost cutting and not nearly focused enough on AI applications for value creation. In fact, even from an economic point of view, I suspect that's where the real opportunity is. You can only make so much money by cutting costs, but there is in some sense no limit to how much money you can make if you're able to do some new thing that people want that couldn't even be done before. That's a lot more exciting in many cases economically.
Elizabeth: I think what's interesting about that... I think you're absolutely right. And I think that when it comes to budgeting, it's always very clear what the cost is, but it's a lot harder to figure out what the potential benefit is going to be, because you don't really know. So I think that's part of why, and I think we're often limited by our own creativity in that respect.
Tom: In my mind that's the key thing. It's our own imagination, our own mindsets, our own worldviews that are the real limit here. To some degree perhaps we have an opportunity, maybe even an obligation, to help the world move toward a mindset that's more productive, more open to these new possibilities. But if you spend your time thinking about how to create AI applications that will create jobs, where to use them you'll need more people to do new things, you'll think of some of those. And I think we should be spending a lot more of our time on that.
Elizabeth: I'm imagining that you were writing this book in a time when the 2016 election was underway, and perhaps even in the early months of the presidency of Donald Trump. And I think that was a time when we were just beginning to understand how certain kinds of superminds, like Facebook, can produce outcomes that perhaps are not uniformly understood to be good ones. So are you as optimistic about superminds as you were when you started writing the book?
Tom: So you're right that I was writing my book during the campaign and right after the election of 2016. Your question is whether I'm more or less optimistic now than when I wrote the book. I don't think I ever thought that superminds always did good things. The world, and the history of the world, is full of superminds, some of which are smart and some of which are stupid, some of which are good and some of which are evil. Nazi Germany, for instance, would be an example many people would pick of a supermind that was, at least while it existed, in many ways very intelligent. It accomplished goals very effectively. But many people would say the goals it was accomplishing were evil, and the way it was pursuing them was evil.
So I never thought, and I don't think, that superminds are always good. When I wrote the book I was intentionally trying to emphasize the positive possibilities, but I didn't think they would always happen. And interestingly, just about the time the book came out in May of 2018, the zeitgeist in the world shifted. Up until just about that time, people were all excited about how good Facebook and Google and all these things are. And just about that time the Facebook Cambridge Analytica scandal happened, and all of a sudden the world was talking about all the negative possibilities. So when I talk about the book now, I make a point of emphasizing near the beginning that computers can make superminds smarter, but they can also make superminds more stupid. Like when fake news influences voters in a democracy. That's often an example in which the computers make the democracy more stupid. And what I really think we need to do is think about how to use these technologies wisely, in ways that have the best chances of creating good outcomes. If you want to do that, I still think it's very useful to talk about what are the good possibilities we should be striving for.
Elizabeth: Tom, you've talked about how in the future artificial intelligence and machine learning may eliminate some old jobs but also may create new ones. What happens in the transition time? There may be quite a few people impacted by that transition. How should we be preparing ourselves for that? And what will it feel like and look like when we're in it?
Tom: A very important question, because even though in the long run I'm very optimistic that enough new jobs will be created to provide work for as many people as want to work, I think there is a transition period that we need to worry about. And that's not necessarily positive for everyone. There will be some individuals whose old jobs go away and who, for various reasons, either can't or don't want to do the new jobs that are available. So it is worth worrying some about how we manage that as a society. And there are several possibilities for how to do that. One is using technology to do a better job of matching people to jobs. If you have to do that by knocking on doors, that's much more expensive than if you just put your resume into LinkedIn or whatever and it automatically gets matched. Another, probably more important, way is to train people to do the new things that need to be done. One of the interesting possibilities here is to use the capabilities of technology to allow that training to happen in much more flexible ways. Instead of having to go sit in a classroom eight hours a day learning something from a professor in the front of the room, it's now possible, as is obvious to everyone essentially, for you to do much of that learning sitting at home, or sitting in your current workplace on a break or whatever, learning online in all kinds of ways.
I think that even has possibilities for new kinds of apprenticeships, where you're not just learning in a class but participating in work in a way that's somewhat redundant with other work that's going on. So in many of the new kinds of decision making that are made possible by this technology, you want more than one person's opinion. Not just one doctor giving a diagnosis, but maybe five people giving a diagnosis. And some of those people don't have to be fully fledged, credentialed doctors. Maybe they can be medical students. Or in other domains, you know, if you're trying to predict whether a competitor is going to launch a new product in a certain category by a certain date, maybe you don't have to have the world's best market researchers making those predictions. Maybe you can have MBA students, or people who would like to be MBA students, making those predictions. And if they do a good job of predicting, then they're establishing their own credentials. And even if they don't, they're still adding more data points to the averages that make up the predictions, and so they've contributed some value and learned how to do it along the way.
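The averaging idea Tom describes, that combining many independent forecasts tends to beat a typical individual forecast even when some forecasters are novices, can be sketched in a few lines of Python. Everything here is simulated and purely illustrative: the true value, the noise level, and the crowd size of five are arbitrary assumptions, not anything from the book.

```python
import random
import statistics

random.seed(42)

TRUE_VALUE = 100.0  # the quantity being forecast (hypothetical)


def one_forecast():
    # Each forecaster is unbiased but noisy.
    return random.gauss(TRUE_VALUE, 15.0)


# Compare the error of a single forecaster against the error of
# the average of five forecasters, over many repeated trials.
trials = 2000
solo_errors = [abs(one_forecast() - TRUE_VALUE) for _ in range(trials)]
crowd_errors = [
    abs(statistics.mean(one_forecast() for _ in range(5)) - TRUE_VALUE)
    for _ in range(trials)
]

print("mean solo error: ", round(statistics.mean(solo_errors), 1))
print("mean crowd error:", round(statistics.mean(crowd_errors), 1))
```

With unbiased, independent forecasters, averaging n of them shrinks the expected error by roughly a factor of the square root of n, which is why even mediocre extra data points can sharpen the combined prediction.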
Elizabeth: Tom, I want to thank you for taking the time with us today. This is fascinating conversation. It's always interesting to speak to you about your latest work. And once again thank you for being here and sharing your ideas with us.
Tom: Thank you. It's my pleasure.
Elizabeth: This is the final episode of a three-part miniseries on the future of knowledge work produced with sponsorship from Citrix. The company uses cloud server technology to make sure knowledge workers have access to the apps and the data they need wherever they may be located on a given day. When you're managing that many applications and that much data it turns out you can use AI in some interesting ways to make it all fit together better, and even to make life more satisfying and productive for your knowledge workers. Recently I had a chance to sit down and talk with Citrix's Global Chief Technology Officer Christian Reilly, and I started off by asking him what Citrix is doing to build more intelligence into the way they deliver applications to workers.
Christian Reilly: So I think at the heart of that is the change in the landscape of applications themselves. I mean, the way we think about applications is very different today than it was 20 or 25 years ago, and I think a large part of that is the way that we've actually thought about what the application is trying to do. And by that I mean, you know, historically we've had lots and lots of large, complex enterprise applications that have been very function-based. You know, they take an entire business function and they serve all of that, from, say, order to cash, as a great example of an application that would typically have done that in a historical sense. As we saw the advent of cloud services, as we saw the advent of mobile devices and mobile applications, we've seen applications change in what I would say is a more process-specific way. So individual applications are getting much smaller and actually providing a sort of subset of a business process and a subset of a business outcome. So of course the big applications still exist, but the smaller ones are becoming much more popular in the way that we interact. So if you kind of put that together and think about traditional applications: they've been complex, they've been hard to use, and there have been lots of different versions of them for lots of different reasons. And we've got this influx of smaller, more lightweight, more agile applications. I think what we've actually figured out is that there's an interesting juxtaposition between productivity and the challenge of these existing applications, and smart people have begun to really think about machine learning in ways that are truly challenging the way that we get work done.
So from a very simple perspective, I would give one example. Perhaps we are an organization that hires 20,000 people, and we have a system that allows people to request time off. So historically, what we have had to do is go into the application, request the time off, and then somebody else would have to approve it. But now with machine learning we can actually understand: hey, you know, you go into this application every Wednesday to check on time off in your team. What if we could understand that, and what if we could just provide you a simple mechanism for saying, hey, instead of you doing this every Wednesday, I'm going to look at the way that you use that business process, I'm going to learn from that, and I'm going to deliver you a different way of doing this which ultimately has a better outcome for you, because it's quicker. It doesn't divert your attention from what you were doing. And I can actually understand what it is that you intend to do. And my machine learning, or in this case an artificial intelligence approach, would actually understand what you do in the system and deliver you a different way of working.
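The behavior Christian describes, a system noticing that a user performs the same action every week and offering to streamline it, can be sketched as a simple frequency check over an event log. The log, the function name, and the 80-percent threshold below are all hypothetical, chosen only for illustration; a production system would learn such patterns from real telemetry rather than a hard-coded rule.

```python
from collections import Counter
from datetime import datetime

# Hypothetical event log: timestamps when a user opened the
# time-off application. Dates and times are illustrative only.
events = [
    "2023-01-04 09:05", "2023-01-11 09:12", "2023-01-18 08:58",
    "2023-01-25 09:03", "2023-02-01 09:10",
]


def likely_weekly_habit(timestamps, threshold=0.8):
    """Return the weekday name if most events fall on one weekday."""
    days = [datetime.strptime(t, "%Y-%m-%d %H:%M").strftime("%A")
            for t in timestamps]
    day, count = Counter(days).most_common(1)[0]
    return day if count / len(days) >= threshold else None


habit = likely_weekly_habit(events)
print(habit)  # all five sample events fall on a Wednesday
```

Once a habit like this is detected, the application could proactively surface the team's time-off status every Wednesday morning, which is the kind of small, learned shortcut the example in the interview is about.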
Elizabeth: So I think what you're describing, Christian, is this idea that AI and machine learning may not only have the intelligence to do great analysis on the things going on within the platform, but also play a role in the way applications and operating systems are designed in the first place.
Christian: Yeah, absolutely. I mean, I think what's really interesting, a huge trend that we see, is, and you know, maybe I should go back a little bit and talk about the early years of artificial intelligence. AI is not a new concept. It's been around since the 1950s. But what is really interesting, and I think is the force multiplier, is the fact that now we're able to leverage machine learning models and capabilities as cloud services themselves. So the barrier to entry for deploying machine learning is getting lower and lower, literally week on week, month on month. So what's interesting from that perspective is that it's not just new applications being developed with inherent artificial intelligence and machine learning capabilities, but traditional applications that we can actually retrofit that same concept to, and ultimately drive bigger business benefits, bigger business outcomes. So I think absolutely, in the way that we're designing applications now, everything will have artificial intelligence built in. Whether that's a smart TV, a home device, a new laptop, or a new phone, they will all have some kind of machine learning or AI capability. But what's really interesting is that sets of cloud services now exist for taking really complex problems and making them relatively simple. So the overall capability set that we have is much bigger, and the applicability of those capabilities is much wider. And I think that gets really interesting for how we can drive true business benefits, not just in new applications but in traditional business.
Elizabeth: There is a narrative out there that most of us are hearing, about how AI and machine learning will go a long way towards cost-cutting. So, shedding workers, directing people to work more efficiently. What's the narrative that illustrates this concept that AI and machine learning is actually a positive and makes the way that we work in the future a more enjoyable and edifying experience?
Christian: Well, I think collectively we've been worrying about the end of work from automation for centuries. There's a famous example in the UK, actually, of Queen Elizabeth I, who refused a patent on an automatic knitting machine because she was concerned about the effects of automation on the women who were knitting for a living at the time. And ultimately, she was quite willing to not entertain the patent, but she couldn't stop the automation: lots of organizations acquired these knitting machines, and over a period of time the number of knitting jobs actually grew exponentially. So it's been quite interesting to follow. There are a number of other examples similar to that. And I think we're at the same sort of juncture now when we talk about the threats of AI. Of course there's a threat, I think, for some jobs. Look at traditional jobs, like call centers or contact centers, that can relatively simply be augmented by some of this artificial intelligence or machine learning.
So I think there will definitely be a point where some jobs are lost to automation, to machine learning. But I think the way to look at it, really, is how we can apply this in a morally correct way that enables us to eliminate some of the really laborious tasks that people do, whether that's booking a doctor's appointment or, you know, even a hairdresser appointment, or approving a timesheet. These are not value-added things for us as humans. So I think the more we can apply machine learning, digital assistants, and virtual assistants to deal with the things that are repetitive and don't add a lot of value, the more we can free up time, free up brainpower, free up resources for people to be more creative. So rather than be concerned about artificial general intelligence and the robots coming to take over the world, let's focus on artificial narrow intelligence: the things we see every day when we use Siri, or Cortana, or Google Assistant, or get a recommendation from Amazon, or see more and more of this technology being built into line-of-business applications that really eat away at those labor-intensive, repetitive tasks. If we focus on that, we free up some intellectual capital for people to be more creative, to get away from the drudge of everyday life. That's where I think we can add the most value. And perhaps we shouldn't be as concerned about the robots coming to take over our world, because, hey, in my opinion that probably will never happen.
Elizabeth: Great. Well that's a relief. Christian, thank you. This has been wonderful. It's been wonderful to hear from you about these issues and to learn more about Citrix.
Christian: Well, thank you.
Elizabeth: That's it for this episode of Business Lab. I'm your host, Elizabeth Bramson-Boudreau. I'm the CEO and publisher of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology. You can find us in print, on the web, at dozens of live events each year, and now in audio form. For more information about us please check out our website at technologyreview.com.
This show is available wherever you get your podcasts. If you enjoyed this episode, we hope you'll take a moment to rate and review us at Apple Podcasts. Business Lab is a production of MIT Technology Review. The producer for this episode is Wade Roush, with editorial help from Mindy Blodgett. Thank you to our sponsor Citrix, the company creating people-centric solutions for a better way to work. Thank you for listening. We'll be back soon with our next episode.