Compared to the languages hacked together late at night under insane deadline pressure, the programming languages to come out of academia are failures. Well, not all of them. History can speak for itself. Via UC Irvine computer scientist Cristina Videira Lopes, who deserves credit for any insight you might get from this post, which is a gloss on her excellent, if long, essay "Research in Programming Languages":
Languages people use and love:
- PHP - Hacked together by Rasmus Lerdorf in 1994. “Originally used for tracking visits to his online resume, he named the suite of scripts ‘Personal Home Page Tools,’ more frequently referenced as ‘PHP Tools.’” According to the informal survey at langpop.com (site may be down), it’s the 4th most popular language on the planet.
- Python - Guido van Rossum, circa 1990. “I was looking for a ‘hobby’ programming project that would keep me occupied during the week around Christmas.” (6th most popular.)
- Ruby - Yukihiro “Matz” Matsumoto, circa 1994. “I wanted a scripting language that was more powerful than Perl, and more object-oriented than Python. That’s why I decided to design my own language.”
Meanwhile, the languages designed by academics and researchers obsessed with internal consistency and correctness include a bunch of mostly dead tongues: Fortran, Cobol, Lisp, and Smalltalk. The only exceptions are C# and Java, which were the products of considerable investment by Microsoft and Sun.
In light of this history, as well as her own experience in academia, Lopes argues that the reason the ivory tower is no longer creating programming languages that people actually use is that it treats programming as a science, when really, it’s more of a design discipline.
I would love to bring design back to my daytime activities. I would love to let my students engage in designing new things such as new programming languages and environments — I have lots of ideas for what I would like to do in that area! I believe there is a path to establishing a set of rigorous criteria regarding the assessment of design that is different from scientific/quantitative validation.
Indeed, Lopes argues, the web itself arose from one of those exceptional cases in which a programmer in academia was given free rein.
One good example of design experimentation being at odds with scientific evidence is the proposal that Tim Berners-Lee made to CERN regarding the implementation of the hypertext system that became the Web. Nowhere in that proposal do we find a plan for verification of claims. That’s just a solid good proposal for an intriguing “linked information system.” I can imagine TB-L’s manager thinking: “hmm, ok, this is intriguing, he’s a smart guy, he’s not asking for that many resources, let’s have him do it and see what comes of it. If nothing comes of it, no big deal.” Had TB-L had to devise a scientific or engineering assessment plan for that system beyond “in the second phase, we’ll install it on many machines,” maybe the world would be very different today, because he might have gotten caught in the black hole of trying to find quantifiable evidence for something that didn’t need that kind of validation.
A lot of this comes down to human factors in programming languages: if they’re not easy to use, they won’t spread. In this way, languages and entire systems (like UNIX) have been likened to computer viruses. Ease of use is difficult, if not impossible, to measure. It’s subjective, the sort of problem that design, not science, can solve. That computer “scientists” will be those designers is merely a matter of semantics. Code is poetry, after all.