MIT Technology Review

One problem with an evolutionary approach is that there are few academic outlets for incremental change, especially when that incremental change directly relates to real-world systems. This is a sign that computer science and software engineering have not yet emerged as the theoretically and empirically well-founded foundation of real-world software development. Theory and practice rarely seem to meet, and researchers are pushed away from real-world software toward less messy academic topics. Conversely, many developers completely ignore research results.

A major issue in language evolution is control of its direction. For a new language, users have a choice: they can adopt it or not (though this becomes less clear-cut if the new language is heavily pushed by their platform supplier). But once they use a language, it is harder to ignore new features (e.g., one user might not like a feature, but a colleague or a library supplier may think it great and use it). And it is impossible to ignore incompatible changes. The ISO C++ standards process relies on volunteers, who can devote only part of their time to the standards work. This implies that it is slow-moving. It also seems to imply that end users are consistently underrepresented compared to suppliers of compilers and tools. Fortunately, the C++ committee has always been able to attract many dozens of active members for its meetings and many more online, so it avoided the parochialism of a small group. Until very recently, academics have been completely absent.

C++ provides a nice, extended case study in the evolutionary approach. C compatibility has been far harder to maintain than I or anyone else expected. Part of the reason is that C has kept evolving, partially guided by people who insist that C++ compatibility is neither necessary nor good for C. Another reason, probably even more important, is that organizations prefer interfaces that are in the C/C++ subset so that they can support both languages with a single effort. This leads to constant pressure on users not to use the most powerful C++ features and to myths about why those features should be used "carefully," "infrequently," or "by experts only." That, combined with backwards-looking teaching of C++, has led to many failures to reap the potential benefits of C++ as a high-level language with powerful abstraction mechanisms.

TR: Would you consider Microsoft’s heavily marketed language environment .NET one of your progeny? It does, after all, offer a high level of abstraction and “pluggable” components. It’s also the skill most desired in corporate job candidates. What are its pros and cons?

BS: .Net is “the progeny” of a large organization, though Anders Hejlsberg has had a large hand in it through C#, the language he designed, and its libraries. I suspect that C++ played a significant role, but primarily through MFC (which is not one of the more elegant C++ libraries) and as an example of something perceived as needing major improvement. C# as a language is in some ways closer to C++ than Java is, but the main inspiration for .Net and C# is Java (specifically, J2EE). Maybe C++ should be listed as a grandparent for .Net but as both a parent and a grandparent of C#.

.Net is a huge integrated system backed by Microsoft. That’s its major advantage and disadvantage. Personally, I’m a great fan of portability. I want my software to run everywhere it makes sense to run it. I also want to be able to change suppliers of parts of my system if the suppliers are not the best. Obviously, suppliers of huge integrated systems, such as .Net and Java, see things differently. Their claim is that what they provide is worth more to users than independence. Sometimes they are right, and of course some degree of integration is necessary: you cannot write a complete application of any realistic size without introducing some system dependencies. The question is how deeply integrated into the application those system dependencies are. I prefer the application to be designed conceptually in isolation from the underlying system, with an explicitly defined interface to “the outer world,” and then integrated through a thin layer of interface code.
