
Untapped Networks

What do Microsoft, Kevin Bacon, and cell-signaling pathways have in common? According to sociologist Duncan Watts, all three are part of the new science of networks.

For Columbia University sociologist Duncan Watts, Microsoft’s continuing battle with hackers and the aftermath of 9/11 share a striking similarity: both reveal the peril and the power of networks. Watts is one of the leaders in an emerging field he terms the “new science of networks,” in which large-scale groups of people and micro-scale networks of biological cells form and re-form according to many of the same principles. Watts, who is exploring these theories at Columbia’s Collective Dynamics Group, recently published a book on the subject titled Six Degrees: The Science of a Connected Age. TR editors caught up with Watts to discuss, among other things, why engineers should start paying close attention to how human networks operate.

TR: Why do you call this the “New Science of Networks”?

Watts: Well, that name is a bit misleading. Theories of networks have been around for a long time, so the science itself really isn’t new. What is new is the synthesis of ideas from a variety of disciplines: math, computer science, sociology, biology. Until recently these fields haven’t been aware that we’re all working on the same kinds of problems. But now, physicists are starting to get ideas from sociologists, and so on. Collaborations are also starting. That’s exactly what our group at Columbia is all about.

TR: What are some basic applications of network science?

Watts: There are lots of applications in, say, the life sciences, social sciences, and engineering. One application that already works is Google. Google takes advantage of the fact that the Web is a network and that its links are created by individuals who all know something. Many links pointing to a particular site amount to a consensus. That’s how Google ranks search results, and those results are far better than ones based on content analysis. It’s about being connected to people who are connected.
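
The link-based ranking Watts describes was published as Google’s PageRank algorithm (the production system is far more elaborate). A minimal sketch, using an invented toy web where each page maps to its outgoing links:

```python
# Minimal PageRank sketch: a page's score is the long-run probability
# that a "random surfer" lands on it, so pages linked to by other
# well-linked pages rank highly.
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank evenly
                for p in pages:
                    new[p] += damping * rank[page] / n
            else:
                for target in outlinks:
                    new[target] += damping * rank[page] / len(outlinks)
        rank = new
    return rank

# Toy web: every page links to "hub", so "hub" ranks highest.
web = {"a": ["hub"], "b": ["hub"], "c": ["hub", "a"], "hub": ["a"]}
ranks = pagerank(web)
```

In this toy web, every page points to “hub”, so “hub” collects the most rank; a page scores highly when well-connected pages link to it, which is exactly the “connected to people who are connected” effect.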

TR: So, this is more than academic theory.

Watts: Absolutely. Corporations could benefit a great deal by thinking about their problems not just as technology or engineering problems but as network problems.

Look at Microsoft. I find it ironic that Microsoft is fighting so hard to not have its software packages segregated. What the company doesn’t understand is that its biggest enemy is not the government or Netscape; it’s the hacker. You see, one of the software industry’s fundamental assumptions is that universal homogeneity is good because you can share all sorts of things. And you can, viruses included. The Code Red worm spread across the Internet within hours. If it had carried a destructive payload, it could have caused massive problems.

Now, if you’re a corporation and you get hit by a couple of those things because you’re on a Microsoft platform, you’ll be switching to Linux tomorrow. Microsoft has a tremendous business problem to deal with, and I don’t think the company realizes that. By simply trying to make its software more robust, it’s approaching the problem in the obvious way. Instead, why not break itself into a few competing companies that don’t produce identical code? That way you decrease the homogeneity, so that problems in one software package don’t leak over into the others. Then businesses can invest in portfolios of different packages. As antithetical as it sounds, this may be the only way to disrupt the very “network” that causes all these problems in the first place.

TR: Could an understanding of human networks influence how engineers do their work?

Watts: Yes. By analogy, think about biomimicry: creating systems modeled on biological ones by reverse engineering how an organism solves a problem. The idea in biomimicry is that many “engineering” problems have already been solved in nature, so why not mimic nature’s solutions? In the same way, you can do “sociomimicry”: creating systems based on how human organizations work.

For example, the Toyota group suffered a catastrophic failure when the single factory that manufactured a particular safety component burnt to the ground. The company had no reserves and wouldn’t be able to rebuild the factory for at least six months. Toyota had been churning out 15,000 cars a day, and within three days production dropped to zero. This is the worst nightmare come true, the kind of failure that could really end a company.

Suddenly they go into this frenzy of completely decentralized activity. Two hundred different companies collaborate to form six entirely independent production systems using none of the specialized equipment designed to build these component parts. They just jury-rig things from all over the place, and within a week production is up and running. It’s a phenomenal kind of recovery. Makes me think of the bad guy in Terminator 2: you blow a hole in him, he meshes around a little bit, then he’s as good as new.

Engineers would love to build systems that can self-heal in this way. But if you look at the loss of the space shuttle Columbia, or at the power grid in the western U.S., you see the opposite: small failures become catastrophes. What we’d like are systems that avoid not only little failures but catastrophic ones as well, and that can, in a decentralized way, rewire themselves to adapt. We think there’s a lot of potential here if we could understand how human systems absorb these kinds of shocks. We may be able to engineer systems with the same sorts of properties.

The same is true of research problems. Harvard psychologist Stanley Milgram’s discovery that anyone in the world can reach anyone else in only about six steps, the famous “six degrees of separation,” is really a search phenomenon. And it’s actually a kind of search that computers have a difficult time performing. If you have a peer-to-peer network and you need to find a particular data file without a centralized directory, how do you do it? Currently, either you replicate the file all over the place, or you do a brute-force broadcast search that ends up swamping the network. If we could learn how humans do this kind of thing, then maybe we could design better algorithms for computers.
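
The decentralized search Watts alludes to was later formalized by Jon Kleinberg’s small-world routing result: greedy forwarding, where each node hands the query to whichever neighbor looks closest to the target, finds short paths with no central directory. A hypothetical sketch on a ring with random shortcuts (the network shape and parameters are illustrative, not from the interview):

```python
import random

# Greedy decentralized search on a ring with random shortcuts, in the
# spirit of small-world routing: each node knows only its own neighbors
# and forwards the query to the neighbor nearest the target, roughly
# how people forwarded letters in Milgram's experiment.
def build_ring(n, shortcuts_per_node=2, seed=0):
    rng = random.Random(seed)
    nbrs = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
    for i in range(n):
        for _ in range(shortcuts_per_node):
            j = rng.randrange(n)
            if j != i:
                nbrs[i].add(j)
                nbrs[j].add(i)
    return nbrs

def ring_dist(a, b, n):
    return min((a - b) % n, (b - a) % n)

def greedy_search(nbrs, start, target, n):
    hops, current = 0, start
    while current != target:
        # Purely local decision: pick the neighbor closest to the target.
        current = min(nbrs[current], key=lambda v: ring_dist(v, target, n))
        hops += 1
    return hops

n = 100
ring = build_ring(n)
hops = greedy_search(ring, 0, n // 2, n)
```

Each node makes a purely local choice, yet because the ring neighbors always allow at least one step of progress and the shortcuts occasionally jump far ahead, the query reaches the target without any node ever seeing the whole network.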

TR: Your book concludes with a chapter on September 11. How do the events of that day illustrate these ideas?

Watts: On September 12, 2001, one hundred thousand people had nowhere to go to work. But somehow, within a week, all those companies were functioning again, and they don’t even know how they did it. I attended a roundtable discussion with some of these people, and they said, well, we kind of did this and sort of did that and got some help from these people and more help from those people, and pretty soon we’re in an office somewhere.

You see, most of us view human organizations as if they’re trees: chop off the trunk and nothing gets to the periphery. But really, they’re more like leaves. A leaf may look like it has the same branching structure a tree does, but if you cut a hole in the middle of a leaf and then pump fluid into it, the fluid oozes around the hole and reaches the rest of the leaf. And that’s what human organizations are like. You can blow a hole right in the middle and still pump information around the damage.
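
The tree-versus-leaf contrast can be made concrete with a toy connectivity check (the node names and edges below are invented for illustration): removing the “trunk” disconnects a tree, while a mesh with redundant cross-links stays connected.

```python
from collections import deque

# Tree vs. leaf: after removing one node, is everything else still
# reachable? A breadth-first search over the survivors answers that.
def connected_without(edges, removed, nodes):
    nbrs = {v: set() for v in nodes if v != removed}
    for a, b in edges:
        if removed not in (a, b):
            nbrs[a].add(b)
            nbrs[b].add(a)
    start = next(iter(nbrs))
    seen, queue = {start}, deque([start])
    while queue:
        v = queue.popleft()
        for w in nbrs[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == len(nbrs)

nodes = ["hq", "a", "b", "c", "d"]
tree = [("hq", "a"), ("hq", "b"), ("a", "c"), ("b", "d")]  # trunk and branches
mesh = tree + [("a", "b"), ("c", "d"), ("b", "c")]         # add cross-links
```

Knock out “hq” and the tree splits into two isolated branches, while the mesh, like the leaf, routes information around the hole.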

People have a local view of the world. I have my friends, and everyone else is “out there” somewhere: I don’t know about them or care about them, and certainly can’t affect them. The science of networks is the antithesis of that worldview. You affect things out there, and they affect you. Sometimes that’s good, because you can draw on resources you didn’t know about yesterday; sometimes it’s bad, because you catch a disease or your computer crashes from a virus, and the only thing you did wrong was buy Microsoft. So the world is both small and big. All these metaphors are true, and the trick is to figure out an analytical framework precise enough to give you some traction on these problems.
