
More Trouble with Programming

The second part of our interview with Bjarne Stroustrup, the inventor of C++.
December 7, 2006

The first part of this interview engendered such debate in the comments section of our site, as well as on aggregator sites like Slashdot, that Technology Review chose to address some of the objections to C++ raised by readers.

Technology Review: Name the coolest and lamest programs ever written in C++, and say what worked and didn’t work.

Bjarne Stroustrup: Google! Can you even remember the world before Google? (It was only five years ago, after all.) What I like about Google is its performance under severe resource constraints. It possesses some really neat parallel and distributed algorithms. Also, the first Web browsers. Can you imagine the world without the Web? (It was only about 10 years ago.) Other programs that I find cool are examples of embedded-systems code: the scene-analysis and autonomous driving systems of the Mars Rovers, a fuel-injection control for a huge marine engine. There is also some really cool code in Photoshop’s image processing and user interfaces. What I like about these programs is how they are structured to be reliable and responsive under pretty harsh resource constraints. Some of Photoshop’s ways of managing internal complexity (for instance, the graphical user interface [GUI] layout and access from image-processing algorithms to the pixel data) are just beautiful.

If you look at code itself, rather than considering the resulting program, we could look at something like Shape [an abstract class that defines an interface by which shapes like circles and rectangles can be manipulated in C++], which I borrowed from Simula. It’s still the language-technical base of most GUIs, such as the one on your computer or your iPod, or whatever. A more modern example would be the find or the sort algorithm in the Standard Template Library [STL] in C++. That’s the language-technical basis for much modern high-performance C++ code, especially of programs that need to do simple operations on lots of data. What is common to these examples of code is that they cleanly separate concerns in a program, allowing separate developments of parts. That simplifies understanding and eases maintenance. These basic language techniques allow separate “things” to be separately represented in code and combined only when needed. However, code is something appreciated by programmers, rather than most people. I have always been a bit envious of graphics people, robotics people, etc. What they do is so concrete and visible; what I do is invariably invisible and incomprehensible to most people. I know many mathematicians feel the same way.
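To make the two examples concrete, here is a minimal sketch (written in modern C++ for brevity; the class names and the particular algorithm calls are illustrative, not code from the interview) of a Shape-style abstract interface and of the kind of STL algorithms Stroustrup mentions:

```cpp
#include <algorithm>
#include <iostream>
#include <memory>
#include <vector>

// Abstract interface in the Simula/C++ Shape style: callers manipulate
// shapes without knowing their concrete types.
class Shape {
public:
    virtual void draw() const = 0;
    virtual double area() const = 0;
    virtual ~Shape() = default;
};

class Circle : public Shape {
    double r;
public:
    explicit Circle(double radius) : r(radius) {}
    void draw() const override { std::cout << "circle\n"; }
    double area() const override { return 3.14159265 * r * r; }
};

class Rectangle : public Shape {
    double w, h;
public:
    Rectangle(double width, double height) : w(width), h(height) {}
    void draw() const override { std::cout << "rectangle\n"; }
    double area() const override { return w * h; }
};

int main() {
    std::vector<std::unique_ptr<Shape>> shapes;
    shapes.push_back(std::make_unique<Circle>(1.0));
    shapes.push_back(std::make_unique<Rectangle>(2.0, 3.0));

    for (const auto& s : shapes) s->draw();   // dynamic dispatch through the interface

    // STL-style generic algorithms: sort and find work on any element type
    // that supports the required operations.
    std::vector<int> v{4, 1, 3, 2};
    std::sort(v.begin(), v.end());
    if (std::find(v.begin(), v.end(), 3) != v.end()) std::cout << "found 3\n";
}
```

The point of the sketch is the separation of concerns he describes: the code that uses a Shape, and the algorithms that sort or search a sequence, are written once and never need to know about the concrete types they operate on.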

Sorry, I’m not going to shame anyone by naming their work “the lamest C++ program ever.” It’s oh so tempting … but no, it wouldn’t be fair.

TR: Following structured programming (of which the best known example is Pascal) and object-oriented programming (your own C++), what will be the next big conceptual shift in the design of programming languages? Some years ago, Technology Review put its money on aspect-oriented programming. Does AO represent a conceptual shift of the same kind that OO did?

BS: I hope you didn’t put too much money on it! I don’t see aspect-oriented programming escaping the “academic ghetto” any day soon, and if it does, it will be less pervasive than OO. When it works, aspect-oriented programming is elegant, but it’s not clear how many applications significantly benefit from its use. Also, with AO, it appears difficult to integrate the necessary tools into an industrial-scale programming environment.


One reason I am cautious is that I saw an earlier language built on similar ideas, called R++, struggle to gain acceptance some 15 years ago. It did well for a major application, but repeatedly, enthusiastic people discovered that introducing major changes into their tool chains and processes was complicated and expensive, and that educating new developers was difficult and time-consuming. Naturally, aspect-oriented programming may avoid some of these problems, but to succeed it will need to dodge all of them, and more. In computer science, a major new idea will succeed only if it is sufficiently capable in every relevant area. To succeed on a large scale, a new language must be good enough in all areas (even some the language designer has never heard of), rather than just superb at one or two things (however important). This is a variant of the simple rule that to function, all essential parts of a machine must work; remove one, and the machine stops. The trick is to know which parts are essential. Please note that I’m not saying, “Aspect-oriented programming doesn’t work,” but to be “the next big thing,” you have to provide major gains in an enormously broad range of application areas.

All that said, I don’t know what the next major conceptual shift will be, but I bet that it will somehow be related to the management of concurrency. As programmers, we have been notoriously bad at thinking about lots of things happening simultaneously, and soon our everyday computers will have 32 cores.

You didn’t mention generic programming. It is definitely worth thinking about in this context. Over the last decade, it has made a major change to the way C++ libraries are designed and has already led to the addition of language features in Java and C#. I don’t think of generic programming as the “next paradigm,” because C++ has directly supported it since about 1990, and the C++ standard library crucially depends on it. What if the next big thing has already arrived and nobody (except programmers) noticed?
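As a small illustration of the generic-programming style he refers to (my sketch, not code from the interview), a single function template is written once against the operations it needs, and the compiler generates concrete, efficient code for each type it is used with:

```cpp
#include <iostream>
#include <string>
#include <vector>

// A generic accumulate-style function: works for any iterator and value
// type that support +, with the concrete code generated at compile time.
template <typename Iter, typename T>
T sum(Iter first, Iter last, T init) {
    for (; first != last; ++first) init = init + *first;
    return init;
}

int main() {
    std::vector<int> xs{1, 2, 3, 4};
    std::vector<std::string> words{"generic ", "programming"};

    std::cout << sum(xs.begin(), xs.end(), 0) << '\n';                     // 10
    std::cout << sum(words.begin(), words.end(), std::string{}) << '\n';   // "generic programming"
}
```

This is the mechanism the C++ standard library is built on: algorithms such as find and sort are themselves templates of exactly this kind.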

It’s worth noting, perhaps, that I don’t actually believe in the popular use of the word “paradigm” (which derives from Thomas Kuhn’s ideas in The Structure of Scientific Revolutions) when it is applied to programming. I do not believe that a paradigm completely replaces previous paradigms in one revolutionary moment (or “shift”). Instead, each programming paradigm adds to what worked previously, and as a paradigm matures, it is increasingly integrated with previous paradigms. Kristen Nygaard was fond of saying that multiplication didn’t completely replace addition, and, by analogy, whatever would come after object-oriented programming would include object-oriented programming as a subset. I tend to agree. The evolution of C++ was guided by this view, and the evolution of Java and C# provides further examples.

TR: Computer languages remain generally difficult to learn. One might argue that for computers to become more than “helper” tools that enable mass computations and widespread communications, they must evolve again–and one key may be in simplifying the process of coding so that more individuals are able to participate in development.

BS: I think that would be misguided. The idea of programming as a semiskilled task, practiced by people with a few months’ training, is dangerous. We wouldn’t tolerate plumbers or accountants that poorly educated. We don’t have as an aim that architecture (of buildings) and engineering (of bridges and trains) should become more accessible to people with progressively less training. Indeed, one serious problem is that currently, too many software developers are undereducated and undertrained.


Obviously, we don’t want our tools–including our programming languages–to be more complex than necessary. But one aim should be to make tools that will serve skilled professionals–not to lower the level of expressiveness to serve people who can hardly understand the problems, let alone express solutions. We can and do build tools that make simple tasks simple for more people, but let’s not let most people loose on the infrastructure of our technical civilization or force the professionals to use only tools designed for amateurs.

In any case, I don’t think it is true that programming languages are so difficult to learn. For example, every first-year university biology textbook contains more details and deeper theory than even an expert-level programming-language book. Most applications involve standards, operating systems, libraries, and tools that far exceed modern programming languages in complexity. What is difficult is the appreciation of the underlying techniques and their application to real-world problems. Obviously, most current languages have many parts that are unnecessarily complex, but the degree of those complexities compared to some ideal minimum is often exaggerated.

We need relatively complex language to deal with absolutely complex problems. I note that English is arguably the largest and most complex language in the world (measured in number of words and idioms), but also one of the most successful.

TR: Please talk about the pros and cons of maintaining backward compatibility with the existing knowledge base (for example, consider your determination to maintain a high degree of compatibility with C when you developed C++). It might seem that a clean break would produce leaps of progress, but is this really a realistic proposition?

BS: Java shows that a (partial) break from the past–supported by massive corporate backing–can produce something new. C++ shows that a deliberately evolutionary approach can produce something new–even without significant corporate support. To give an idea of scale: I don’t know what the marketing budget for Java has been so far, but I have seen individual newspaper advertisements that cost more than the total of AT&T’s C++ marketing budget for all time. “Leaps” can be extremely costly. Is such money well spent? Maybe from the point of view of corporate budgets and corporate survival, but given a choice (which of course I’ll never have), I’d spend that money on research and development of evolutionary changes. Note that almost by definition, research money is used to fund attempted leaps that tend to fail while competing with evolutionary changes.

However, evolution shouldn’t be an excuse for doing things the way we’ve always done them. I would like for evolutionary changes to occur at a faster pace than they do in C++, and I think that’s feasible in theory. However, that would require funding of “advanced development,” “applied research,” and “research into application” on a scale that I don’t see today. It would be essential to support the evolution of languages and libraries with tools to ease upgrades of real systems and tools that allowed older applications to survive in environments designed for newer systems. Users must be encouraged to follow the evolutionary path of their languages and systems. Arrogantly damning older code as “legacy” and recommending “just rewrite it” as a strategy simply isn’t good enough. “Evolutionary languages” tend to be very conservative in their changes because there is no concept of supporting upgrades. For example, I could imagine accepting radical changes in source code if the change was universally supported by a solid tool for converting from old style to new style. In the absence of such tools, language developers must be conservative with the introduction of new features, and application developers must be conservative with the use of language features.


One problem with an evolutionary approach is that there are few academic outlets for incremental change–especially not when that incremental change directly relates to real-world systems. This is a sign that computer science and software engineering haven’t yet emerged as the theoretically and empirically well-founded foundation of real-world software development. Theory and practice seem rarely to meet, and researchers are pushed away from real-world software towards less messy academic topics. Conversely, many developers completely ignore research results.

A major issue in language evolution is control of its direction. For a new language, users have a choice: they can adopt it or not (though this becomes less clear-cut if the new language is heavily pushed by their platform supplier). But once they use a language, it is harder to ignore new features (e.g., one user might not like a feature, but a colleague or a library supplier may think it great and use it). And it is impossible to ignore incompatible changes. The ISO C++ standards process relies on volunteers, who can devote only part of their time to the standards work. This implies that it is slow moving. It also seems to imply that end users are consistently underrepresented compared to suppliers of compilers and tools. Fortunately, the C++ committee has always been able to attract many dozens of active members for its meetings and many more online, so it avoided the parochialism of a small group. Until very recently, academics have been completely absent.

C++ provides a nice, extended case study in the evolutionary approach. C compatibility has been far harder to maintain than I or anyone else expected. Part of the reason is that C has kept evolving, partially guided by people who insist that C++ compatibility is neither necessary nor good for C. Another reason–probably even more important–is that organizations prefer interfaces that are in the C/C++ subset so that they can support both languages with a single effort. This leads to a constant pressure on users not to use the most powerful C++ features and to myths about why they should be used “carefully,” “infrequently,” or “by experts only.” That, combined with backwards-looking teaching of C++, has led to many failures to reap the potential benefits of C++ as a high-level language with powerful abstraction mechanisms.

TR: Would you consider Microsoft’s heavily marketed language environment .NET one of your progeny? It does, after all, offer a high level of abstraction and “pluggable” components. It’s also the language most desired in corporate job candidates. What are its pros and cons?

BS: .Net is “the progeny” of a large organization, though Anders Hejlsberg has a large hand in it through C#, the language he designed, and its libraries. I suspect that C++ played a significant role, but primarily through MFC (which is not one of the more elegant C++ libraries) and as an example of something perceived as needing major improvement. C# as a language is in some ways closer to C++ than Java is, but the main inspiration for .Net and C# is Java (specifically, J2EE). Maybe C++ should be listed as a grandparent for .Net but as both a parent and a grandparent of C#.

.Net is a huge integrated system backed by Microsoft. That’s its major advantage and disadvantage. Personally, I’m a great fan of portability. I want my software to run everywhere it makes sense to run it. I also want to be able to change suppliers of parts of my system if the suppliers are not the best. Obviously, suppliers of huge integrated systems, such as .Net and Java, see things differently. Their claim is that what they provide is worth more to users than independence. Sometimes they are right, and of course some degree of integration is necessary: you cannot write a complete application of any realistic size without introducing some system dependencies. The question is how deeply integrated into the application those system dependencies are. I prefer the application to be designed conceptually in isolation from the underlying system, with an explicitly defined interface to “the outer world,” and then integrated through a thin layer of interface code.
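One way to read that last point (this is my sketch of the idea, not Stroustrup’s code; the interface and class names are hypothetical) is to define the application’s view of “the outer world” as an explicit interface and confine all system dependencies to one thin adapter behind it:

```cpp
#include <iostream>
#include <string>

// Application-defined view of "the outer world": the rest of the program
// depends only on this interface, never on a particular platform API.
class MessageSink {
public:
    virtual void send(const std::string& text) = 0;
    virtual ~MessageSink() = default;
};

// Thin adapter layer; this is the only place that changes if the
// underlying system (console, socket, a .Net or Java library, ...) changes.
class ConsoleSink : public MessageSink {
public:
    void send(const std::string& text) override { std::cout << text << '\n'; }
};

// Portable application logic, written against the interface alone.
void report_status(MessageSink& out) {
    out.send("status: ok");
}

int main() {
    ConsoleSink sink;
    report_status(sink);
}
```

Swapping suppliers then means writing a new adapter, not rewriting the application.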
