
Ancient Text

Lisp is a very old computer language, and it’s still widely used.
January 1, 2007

Lisp, the list-processor language, is “the greatest single programming language ever designed,” according to computer scientist Alan Kay. It was born in 1958, when John McCarthy, then an assistant professor at MIT working on new tools for artificial-intelligence research, wanted a language in which one could write programs that would make logical inferences and deductions. Earlier languages, including Fortran, were numeric, which made for powerful number-crunching. Lisp instead used symbolic expressions, which treated both data (such as numbers) and code as objects that could be manipulated and evaluated. That design let programmers write conditional expressions: Lisp made possible the now-familiar “if-then-else” structure. Today Lisp also serves as a “macro” language, allowing users of software such as Emacs to create their own mini-applications that automate tasks. The text, from page 13 of McCarthy’s 1962 Lisp 1.5 Programmer’s Manual (see the multimedia item below), uses Lisp to define the function evalquote. For its elegance and profundity, Kay compared this piece of code to James Clerk Maxwell’s four equations describing electricity and magnetism.
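The two ideas described above can be sketched in a few lines. The snippet below is a minimal illustration in modern Common Lisp syntax (an assumption of this sketch, not McCarthy’s original 1958 notation or the manual’s code): a conditional expression, and a quoted expression treated as data and then evaluated as code.

    ;; A conditional expression: COND was Lisp's original "if-then-else".
    (defun absolute-value (x)
      (cond ((< x 0) (- x))   ; if x is negative, return its negation
            (t x)))           ; otherwise, return x unchanged

    ;; Code as data: a quoted expression is just a list,
    ;; which EVAL can then treat as a program and run.
    (eval '(+ 1 2))   ; => 3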

Multimedia

  • A hack from page 13 of McCarthy's 1962 Lisp 1.5 Programmer's Manual

“Lisp was a piece of theory that unexpectedly got turned into a programming language,” wrote Paul Graham in his 2004 book Hackers & Painters. McCarthy’s exploration of how to think about problems and how to create methods for solving them resulted in a computer language that has endured for five decades and changed the nature of computer programming. Church’s Thesis, a central tenet of computation theory named for the mathematician Alonzo Church, proposes that any possible calculation can, given enough time and computing power, be performed by a recursive function. Computer scientist Guy Steele contends that “Lisp is the practical application of Church’s Thesis.”
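To make that point concrete, here is how a recursive function definition reads when written directly in Lisp. Again, this is a small sketch in modern Common Lisp syntax, offered as an illustration rather than anything quoted from McCarthy or Steele.

    ;; A recursive definition of factorial, written as an ordinary Lisp function.
    ;; Church's Thesis says any computable function can be expressed recursively;
    ;; in Lisp, the recursive definition is itself the running program.
    (defun factorial (n)
      (if (= n 0)
          1
          (* n (factorial (- n 1)))))

    (factorial 5)   ; => 120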
