
Why LISP Never Took Off

April 11, 2012

LISP is a programming language designed by John McCarthy in 1958 while he was at MIT. It’s one of the oldest high-level programming languages. The name LISP is an acronym for LISt Processing, because linked lists are the core data structure of the language. After its inception, the language quickly gained traction in AI research circles because of its simplicity, its mathematical elegance and its expressiveness. Today, the term LISP usually refers to languages derived from LISP, the two most famous of those being Scheme and Common Lisp.

I’ve been exposed to Scheme and other LISP variants on several occasions during my studies. I will freely admit that I find LISP languages interesting and conceptually elegant. The Scheme philosophy is one of “less is more”. The language, at its core, is very simple, but provides powerful facilities for expanding the language itself, in the form of macros. Scheme is also very consistent and logical. Its design has clearly been given careful thought.

Scheme programmers seem to love the language. They praise its many qualities: its elegant syntax, its conciseness, its functional design, its expressiveness, the productivity gains it provides. It’s a language so powerful, it allows you to redefine the language itself through the use of macros. Scheme programmers love the language so much, they almost make it sound like strong AI would have been achieved years ago, if only everyone programmed in Scheme.

The question, when it comes to Scheme and other LISP variants, is why no LISP language ever really caught on. If everyone who gives the language serious consideration loves it, why do so few people use it? Why are we stuck with so many flawed, inelegant, unnecessarily complex mainstream languages when a much cleaner, more elegant and expressive alternative exists right there in the form of Scheme?

Rudolf Winestock attempted to answer this question in his essay entitled The Lisp Curse. His answer: LISP is just too powerful for its own good. Its expressive power is its ultimate drawback. I don’t believe expressiveness is much of a downside. I agree with Winestock, however, that one of the real issues with LISP is the fragmentation of the community. In this post, I intend to look at some more concrete reasons why LISP languages, in particular Scheme, never caught on, despite their many strengths.


Fragmentation

When it comes to programming in C, there’s a handful of big-name compilers: GCC and Clang on Mac and Linux, Visual C++ on Windows. These are all competent, fully-featured compilers. Choosing a C compiler isn’t very difficult; they’re all rather good and usually straightforward to install and use. When it comes to programming in Scheme, however, the choice isn’t so clear. There are many Scheme implementations out there: Bigloo, Larceny, Chicken Scheme, Guile, Gambit Scheme, MIT/GNU Scheme, Chez Scheme, Scheme48, Hop and more. None of these stands out as the ideal choice. They have varying degrees of support for the language standard, widely varying performance, and installing some of them on your platform might well force you to take a stroll through dependency hell.

Newcomers might easily be scared away from the language before they even get to try it. They might pick a Scheme implementation and find that it’s unmaintained and broken, or a pain to install on their system. They might also discover that the implementation they chose doesn’t fully support the latest Scheme specification, but rather some more-or-less conformant dialect of Scheme with undocumented features. You might think I’m just making up these scenarios, but they were actually part of my own initial contact with Scheme. Although I haven’t tried Common Lisp myself, I believe it suffers from a similar problem.

Less is Not More

Scheme follows a minimalist design philosophy. It provides a small set of constructs as well as macros, and the programmer is expected to extend the language to fit his or her needs. In theory, this sounds very nice. A minimalist, “less is more” type of design should mean less room for incompatibilities among implementations. It should mean the language is easier to implement and perhaps help it remain more semantically consistent.
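
As a concrete sketch of what extending the language through macros looks like, here is a hypothetical "while" loop defined with syntax-rules. Scheme has no built-in while construct; this example is illustrative, not taken from any standard library:

```scheme
;; Illustrative sketch: Scheme has no built-in `while`, but the
;; standard `syntax-rules` macro system lets us define one.
(define-syntax while
  (syntax-rules ()
    ((_ test body ...)
     (let loop ()
       (if test
           (begin body ... (loop)))))))

;; Usage: prints 01234.
(define i 0)
(while (< i 5)
  (display i)
  (set! i (+ i 1)))
```

Every use of while expands into an ordinary named-let loop before compilation, which is exactly the kind of language extension the minimalist design relies on.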

The problem is that Scheme provides perhaps too little. The C++ standard library gives you standard containers, such as hash maps and binary trees, along with algorithms operating over those containers, such as sorting algorithms. It also provides basic support for multithreading and string handling. The Scheme standards offer far less: the latest standard (R6RS) did add libraries for hashtables and list sorting, but it still says nothing about threads, sockets, regular expressions or string tokenization, even though these are very basic features that many if not most programmers will need at some point.

Most Scheme implementations have chosen to extend the language to compensate for what the standard lacks. The issue here is that different implementations may implement these unspecified features in different ways. This causes further problems. Those who want to write Scheme code will often be forced to use non-standard features specific to whichever implementation they are working with. Sharing code then becomes problematic, because other implementations may or may not support the features needed. The limitations of the Scheme standard make it difficult to implement portable Scheme libraries.

Batteries Not Included

One of the biggest frustrations I’ve had when trying to write Scheme code is the poor documentation. It’s not that easy to learn what you need to know about the language. Google searches will often yield irrelevant results, or results that are specific to one Scheme implementation. The official language specification document is probably the best source of documentation, which is rather sad. What if you need to use implementation-specific features? Well, the Gambit Scheme page has a long list of undocumented extensions.

Perhaps the Scheme community suffers from a lack of desire to document. From what I’ve seen, it’s fairly typical for Scheme code to have very few comments (if any) and be full of one-letter variable names. There seems to be a prevalent assumption that the code is self-evident and doesn’t need any explaining. This assumption is rather strange, because Scheme is probably the language that most needs documentation out there. In a language where you can use macros to (re)define language constructs, you should never assume that anything is self-evident.

The (Lack of) Speed

Scheme is dynamically typed. It’s very much a dynamic language. Just like Python and JavaScript, Scheme lets you rebind a variable to values of different types at run-time. This makes it rather tricky to compile Scheme code efficiently. There has been much research over the past few decades on implementing efficient JIT compilers for dynamic languages: the work on SELF, Mozilla’s work on tracing JITs, the work done on PyPy, and everything that’s been done in the Java world.
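
To illustrate the problem, the same Scheme variable can be rebound to values of entirely different types over its lifetime, so an ahead-of-time compiler cannot, in general, know what type an expression will have:

```scheme
;; The variable x holds an integer, then a string, then a procedure.
;; Only at run-time is its actual type known.
(define x 42)
(set! x "hello")
(set! x (lambda (n) (* n n)))
(display (x 7))   ; prints 49
```

A JIT compiler can observe at run-time that x is currently a procedure taking a number, and specialize the generated code accordingly; a static compiler has to emit generic type checks.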

I’ve got bad news for you. There are no JIT compilers for Scheme. Okay, there may be some experimental Scheme JITs out there, but as far as I know, all the major Scheme compilers compile code ahead of time. This is sad, because JIT compilers are an ideal fit for dynamic languages: they can take advantage of type information that only becomes available at run-time. The Scheme community seems to think it knows better, and has decided to stick with static compilation, disregarding the last 20 years of research into dynamic languages.

By default, most Scheme and Common Lisp implementations use a numerical tower to represent numbers with arbitrary precision. This makes Scheme code difficult to optimize, because overflow checks must be inserted into the generated code. Non-integer numbers are represented as arbitrary-precision rationals where possible, which makes for better precision but terrible performance. To get around this, special operators must be used to operate on machine-precision integers and floating-point numbers. Scheme has taken the view that everyone should pay the performance cost of arbitrary-precision arithmetic, rather than only those who need it, because this is more mathematically elegant.
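
For example, exact arithmetic is the default in most implementations, and R6RS provides separate fixnum and flonum operators (fx+, fl+ and friends) for code that explicitly wants machine arithmetic. A small sketch (the exact printed form of inexact numbers varies by implementation):

```scheme
;; With the numerical tower, exact rationals are the default:
(display (/ 1 3))      ; prints 1/3, an exact rational, not 0.333...
(newline)
(display (+ 1/3 1/6))  ; prints 1/2
(newline)
;; Machine floating point must be requested explicitly:
(display (exact->inexact (/ 1 3)))
;; R6RS also provides type-specific operators, e.g. (fx+ 1 2) for
;; fixnums and (fl+ 1.0 2.0) for flonums, to bypass generic dispatch.
```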


Conclusion

Some Scheme programmers seem to believe that Scheme is God’s gift to the world of programming and that it’s the right tool for every use. Don’t get me wrong, I like many things about Scheme (and other LISPs), but I’m a pragmatist, and I just don’t find it all that convenient to use in practice. I also fail to understand why anyone would think Scheme is a good first language to teach a new programmer. It may be mathematically elegant, but it’s certainly not the most intuitive language out there, or the best choice for new programmers, when there are alternatives like Python.

My first exposure to LISP was in a first-year undergraduate class on programming language semantics. In this class, we had to do the classic exercise of implementing an interpreter for a simplistic LISP dialect in Scheme. Most of my classmates, who had little programming experience, were rather confused. I was mostly confused because I didn’t really see the point: to understand the interpreter we were writing, you needed to already understand the language semantics the class was teaching. In the end, the interpreter in question was trivially simplistic and predictably inefficient. It essentially showed us how not to implement an interpreter if you care about performance to any degree. Talk about unrewarding programmer masturbation.

  1. “They praise its many qualities: its elegant syntax, its conciseness, its functional design, its expressiveness, the productivity gains it provides.” This sounds like the words of many programmers, regardless of their language of choice. :)

    A great article.

    LISP was always a language without a clear purpose. PHP is used for basic web development; C for performance; Java for making money; Python for its clarity and extensive, well-written toolset; JavaScript for browser development. Where does that leave a language like LISP?

    • Nicely done writing. :-)

      “This sounds like the words of many programmers, regardless of their language of choice. :)”

      IMHO, C++ syntax could be much more elegant, even after the advent of the new standard, and Java isn’t a perfect piece of work either. I think C# is way better than the other two in this regard (unfortunately, I have no experience with Python). The so-called OOP support in JavaScript is rather ugly (not counting Brainfuck and the like, of course). I think C++ is still relatively redundant, but also the most expressive language among these. Java is trying to keep up with C# in functionality. So I don’t think this statement holds for every language.

      I have tried to learn and use LISP, but I have never got used to it. It may be elegant in some way, but on larger projects the code became very hard to read, understand and maintain, mostly because of the ocean of parentheses and the very deep nesting of expressions.

      I had been thinking about using LISP as a scripting language (like in GIMP) in one of my own projects, but I finally chose Lua instead, for similar reasons, and because performance and convenience were important.

      By the way, I have a fond dream about the perfect productive language. It is strongly typed, compiled, supports but does not require GC, supports reflection, has functionality and syntax similar to C#, and has a class library similar to the .NET BCL. C# is very productive, but the code produced by any of the current JIT compilers almost never reaches the performance provided by C++. GC and JIT compilation also make it hard to guarantee response times (real-time applications, games). Moreover, interpreted languages always have performance limitations, which become a concern in large projects again.

      • Shien permalink

        Have you tried Rust? It’s strongly typed (with inference for convenience), compiled, has optional GC, and has a lot of functionality similar to C#. However, its stdlib is still immature, and is deprecating things really fast, and I’m pretty sure it can’t do reflection, although if you really want to introspect program state you can directly manipulate the memory registers.

    • First the pedantic “It’s Lisp not LISP” (Sorry pet peeve)

      More to the point: Lisp predates these languages. It did have a purpose: To be a language closer to expressing math and algorithms at a high level. Keep in mind that Lisp is old, so old only Fortran is older nowadays. Lisp was (and is?) the prototypical high level, programmer-friendly language in an era where C was still considered high level and slow.

  2. “I have a fond dream about the perfect productive language”


    • Yes, it seems promising indeed (but it still has avoidable GC, and of course I don’t know how well it works in this case). I’m curious how developer support and practical applications will turn out. Currently, I don’t know if it’s worth the energy to start dealing with it or any other new language; it always takes considerable time and energy to become familiar with a new one, especially if the “good old” ones serve me quite well…. :-) I think I will see which way the cat jumps…

    • German Diago permalink

      Rust is a better choice, but still not ready.

  3. Recent lisp user/programmer permalink

    > In this class, we had to do the classic experiment of implementing an interpreter for a simplistic LISP dialect in Scheme.

    I’ve done this on my own not too long ago and found it enlightening, although I did it in C.

    What you found “trivially simplistic” was a pleasant and rewarding experience for me.

    Judging from your blog you probably know a lot more about this than me, but optimization is a different problem, and I’m glad you can demystify the concept of an interpreter in a few lines of Scheme, albeit slowly. I don’t know if you have read SICP (the free online MIT book mentioned at some point in any LISP discussion), but this is one of its numerous exercises, along with building a virtual machine (still in Scheme) to optimize it in the following chapter. Scheme is just a tool with a nice starting point for the project :)

    I’ll agree that the community is divided, but you haven’t looked hard enough. If you are still interested in a modern Lisp, have a look at Clojure. It’s byte-compiled (targets the JVM) and JIT’d. It’s very expressive (special syntax for sets, vectors, maps, etc.), has batteries included, OOP support and a nice Java FFI. Should be pragmatic enough for you :)

    There’s also Racket (ex Dr. Scheme) which is byte-compiled + JIT and has some nice features. The REPL is especially great.

    — written from Emacs (ELisp) ;)

  4. Mikko Tiihonen permalink

    Might it be that procedural thinking is more natural to us humans than functional thinking? My own experience from programming is that it takes a certain fondness for mathematical logic, and perseverance, to uncover the full potential of functional programming – and after that there is no return to the old way of thinking. The functional way of thinking is very powerful regardless of the language you use…

  5. I was under the impression that the better Scheme compilers already generate code which is much faster than any JIT for Python, Ruby, or similar.

    • The stock Python and Ruby implementations are incredibly slow. I’d be interested to see Scheme compilers pitted against Google V8, LuaJIT or IonMonkey on highly dynamic benchmarks though. I don’t think Scheme would fare too well, not without resorting to lots of special annotations.

  6. Erik permalink

    I think Stalin would fare quite well. Chez Scheme also, but to a lesser extent

  7. Racket is JIT-ted. There is a version of Chez that is JIT-ted. Kawa targets the JVM.

    • Targeting the JVM doesn’t count (i.e., javac is a static compiler). As for Racket and Chez, I’d have to look into what optimizations their JITs do. If Chez compiles statically by default, though, my guess is the JIT is mostly there to make a REPL/eval possible, and is basically a static compiler that runs on demand, not something conceived to take advantage of the opportunities JIT compilation offers.

      • The advantage of targeting the JVM is precisely that you get to exploit its JIT. Kawa does exactly what javac does: it generates bytecode that the JVM then executes with or without JIT. It also has a REPL in which the expressions you type are byte-compiled and then executed on the fly. With a modern JVM like HotSpot, this works very nicely.

        • The Java JIT is too low-level to exploit Scheme-specific knowledge when optimizing. It’s made to run Java. It doesn’t know much about Scheme’s typing, for one, only what can be approximated by Java’s typing. Like it or not, a Scheme-to-Java-bytecode compiler can hardly be as efficient as a JIT tailored specifically to Scheme semantics.

Trackbacks & Pingbacks

  1. There’s Too Many Programming Languages! | Pointers Gone Wild
