
The Robotic Revolution

The last three years have seen a slew of impressive deep learning feats. Machines are finally able to effectively extract meaning from the fuzzy thing that is the real world. This is already enabling all kinds of things, from smarter websites to self-driving cars. I’m very enthusiastic about this rapid progress. I think it’s becoming very clear that in the next 20 years, the world will see an increasing robotic presence. I don’t mean to say that androids will instantly become ubiquitous. We’re obviously not there, both in terms of AI capabilities, and in terms of being able to produce lightweight, affordable and nimble robots. What I mean to say is that many of us have already accepted self-driving cars as an inevitability, since there are already working prototypes. The question is then, why stop there?

How much of a stretch is it to go from a self-driving car to a street-sweeping robot? What about an automated garbage-disposal truck with a robot arm that picks up garbage bags without human workers? What about an automated lawnmower you can control with your smartphone? It’s obviously going to take some time for these things to be developed, but I think we can all agree that the automated garbage-disposal truck is not that far-fetched. How much more of a stretch is it to go from such a robot to one that restocks store shelves? Those who have warned us of robots taking human jobs are most likely right.

In the domestic realm, there is already a Berkeley prototype of a robot that can fold laundry. I think one of the biggest hurdles there is going to be the cost. The economic reality is that going from a university prototype to a useful product requires several years and a large monetary investment in R&D. Furthermore, no investors are going to fund the development of such products if the number of people who can afford to buy them is too small to make a profit. This means that even though we’re very close to being technologically capable of building domestic robots, it’s going to take some time before they reach the market.

Still, I think at this point, it’s only a matter of time. Robots are going to become increasingly present in the world around us, and this will likely generate a feedback cycle. The range of robotic capabilities will expand, the cost of components will go down, and as robots enter the global mindshare, people will become increasingly likely to want to apply robotics to various tasks. We’re about to see a robotic revolution. At this point, it seems inevitable.

There’s Too Many Programming Languages!

It’s an opinion that often comes up in development circles. We’re in the middle of a sort of language boom right now, and every time someone comes along and announces a new language, other people suggest that we should all stop creating new programming languages, because there are already too many out there. Common complaints include the amount of effort needed to constantly learn new languages, the fragmentation caused by their constant introduction, and the lack of innovation they bring.

If you feel that new languages require too much effort to learn, the first thing I have to say is: tough luck. Programming is very much about constantly learning new things and being able to learn on your own. Once you’ve mastered a few programming languages, you should begin to see repeating patterns and find that the skills you’ve already acquired are very transferable. For instance, I’ve never written a single line of Go, but I’ve used C, C++ and D. I’m sure I could get started writing Go code within a few hours, and become reasonably proficient within a week. Go doesn’t scare me. Another important thing to realize is that not knowing a programming language isn’t necessarily a flaw you have to correct. Case in point: there are still many job postings out there for COBOL programmers. It’s perfectly acceptable to specialize in one or a few languages of your choosing.

As for the issue of fragmentation, I think there’s truth to it, but it’s not as bad as people imagine. There are only a few languages out there which are truly mainstream. These have existed for years, and if you know just 3 out of the top 10, you’ll be well equipped to complete personal projects or land yourself a development job somewhere. The new programming languages that come out, for the most part, remain relatively fringe and are used mostly by hobbyists. Most of them will likely die out. I’ve written my PhD project in D, and I’ve found that surprisingly few programmers have ever written code in that language. In fact, none of the programmers I’ve met in person outside of DConf had ever used D.

The biggest kind of fragmentation problem, in my opinion, is self-inflicted. I’ve spoken to people at various companies who told me that their architecture was made of some mishmash of parts written in five or six different languages. That seems like an issue to me, if only because more languages means more dependencies, more breakage, more interoperability problems and more code maintenance issues. That’s not the fault of all these languages for existing though, it’s simply bad planning. The administration there let some capricious programmers get their wish and include new code written in their pet language in the system, with little regard for the added complexity this language soup would introduce.

There’s some argument to be made that many of the new languages coming out lack originality. At the moment, most of them are statically typed and compiled ahead of time and most of them have similar syntax. It’s true that there isn’t a lot of innovation overall, but I don’t think that’s a reason to stop creating new programming languages. The design space is huge, infinite in fact, and we’re only beginning to explore it, in my opinion. Consider that even to this day, all the commonly used languages are based on the editing of text files. Also remember that even the languages which don’t succeed on a large scale, such as LISP, can have a tremendous impact on other languages down the road. Imagine a world where there were only two programming languages in existence: COBOL and Fortran. Is that a world you would like to live in? I would argue that there is a need for new languages to solve new problems.

Basic Block Versioning – My Best Result Yet

For those not familiar with this blog, my PhD research is focused on the optimization of programs written in dynamically typed programming languages, JavaScript (JS) in particular. In JS, every arithmetic operator and every object property access needs to do dynamic dispatch based on the type of its operands. For instance, the addition operator can work on integers, strings and floating point values. It can actually accept values of any type as operands. The types of both inputs are implicitly tested at run time so that the correct behavior can be chosen. Many of these type tests are redundant, because even in a dynamically typed programming language like JavaScript, most variables don’t change type over the execution of a program.
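To make the implicit dispatch concrete, here is a simplified sketch (not actual engine code; real VMs dispatch on internal type tags) of the type tests hiding behind a single `+` in JavaScript:

```javascript
// Simplified sketch of what a JS VM must do for every `+` at run time.
function jsAdd(a, b) {
  // Two implicit type tests precede the actual operation.
  if (typeof a === "number" && typeof b === "number") {
    return a + b;                   // numeric addition
  }
  if (typeof a === "string" || typeof b === "string") {
    return String(a) + String(b);   // string concatenation
  }
  // Other cases go through ToPrimitive coercion (omitted here).
  return Number(a) + Number(b);
}

// In a loop like this, `sum` and `v` stay numbers on every iteration,
// so the tests inside jsAdd become redundant after the first execution.
let sum = 0;
for (const v of [1, 2, 3]) sum = jsAdd(sum, v);
```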

Earlier this year, my first paper about Basic Block Versioning (BBV) was accepted at ECOOP 2015. BBV is a JIT code generation technique I’ve been working on which is very effective for eliminating redundant dynamic type tests. In this first paper, we (my advisor and I) were able to show that our technique eliminates 71% of dynamic type tests across our set of benchmarks, resulting in significant performance improvements. Last week, I submitted a paper about an interprocedural extension to BBV. This extends the original work by generalizing the technique to propagate type information through function calls, covering both function parameter and return types.

The improvements are quite striking. We’re now able to eliminate 94.3% of dynamic type tests on average, and we eliminate more than 80% of type tests on every benchmark. To put things in perspective, I decided to compare this result with what’s achievable using a static type analysis. I devised a scheme to give me an upper bound on the number of type tests a static analysis could possibly eliminate. First, execute all the benchmarks and record the result of each type test. Then, re-execute the benchmarks with the type tests that always evaluate to the same result removed. This is equivalent to using a static type analysis with access to “perfect” information about which type tests are going to be redundant. The results of this experiment are shown in the graph below:


I was very pleased when I first saw these results. The “perfect” static analysis eliminates, on average, 91.7% of type tests, which is less than what we achieve with interprocedural BBV. You might be wondering how this is possible, how BBV can possibly eliminate more type tests than what should be an upper bound on the number of type tests that can be eliminated. The main point is that the analysis is just an oracle that tells us whether any given type test is going to be redundant and safe to eliminate or not. In contrast, BBV has the power to selectively duplicate sections of code, which makes it possible to eliminate even more type tests.
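The oracle experiment described above can be sketched roughly like this (the `testId` bookkeeping is hypothetical instrumentation for illustration, not code from Higgs):

```javascript
// Record each dynamic type test's outcome during a profiling run,
// then report which tests always produced the same result. Those
// are the tests a "perfect" static analysis could remove.
const outcomes = new Map();  // testId -> Set of observed results

function recordTypeTest(testId, result) {
  if (!outcomes.has(testId)) outcomes.set(testId, new Set());
  outcomes.get(testId).add(result);
  return result;
}

function removableTests() {
  // Tests whose outcome never varied are provably redundant.
  return [...outcomes].filter(([, s]) => s.size === 1).map(([id]) => id);
}
```

On a second run, the tests returned by `removableTests` are simply skipped, giving the "perfect analysis" numbers in the graph.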

The main reason code duplication (or versioning) is useful is that it allows BBV to separate out contextual information which wasn’t present in the original untransformed program. If you want a simple example, think of a loop where some variable x is an integer in the first iteration, and then becomes a string in every subsequent iteration. A traditional type analysis will see that this variable could be either an integer or a string, and conclude that we don’t know what type this variable will have at run time. In contrast, BBV might be able to unroll the first iteration of the loop, and know that x will be an integer in this first iteration, and a string in every other iteration, thus eliminating all type tests on the variable x.
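Here is that loop in source form, along with a hand-written equivalent of what BBV’s duplication effectively achieves. This is only a sketch: the real transformation happens on compiled basic blocks, not on source code.

```javascript
// The loop from the example: x is a number on the first iteration,
// then a string on every subsequent one.
function run(n) {
  let x = 0, out = "";
  for (let i = 0; i < n; i++) {
    out += x;        // the type of x is tested here on every iteration
    x = "s";         // x becomes a string after iteration 0
  }
  return out;
}

// What BBV effectively does is peel the first iteration, producing two
// specialized versions of the loop body, each free of type tests:
function runVersioned(n) {
  let out = "";
  if (n > 0) out += 0;                     // version where x: number
  for (let i = 1; i < n; i++) out += "s";  // version where x: string
  return out;
}
```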

There are interesting implications to the results obtained with BBV, in my opinion. I was recently talking to someone at Facebook about HHVM, their virtual machine for PHP and Hack. They were telling me that Hack has gradual typing, and that this information isn’t yet used to optimize code in HHVM. Based on the results I got, I would say that there is probably no real need to type-annotate code in order to improve performance. Gradual typing can be useful for documentation, tools and safety, but it doesn’t really offer performance advantages. BBV can already eliminate over 94% of type tests on average, and this is only the beginning, there are still many easy ways to further improve upon these results. Alternatively, flipping what I just said on its head, if a JIT compiler can know the types of most variables at code generation time, the type assertions introduced by gradual typing can be implemented at little to no cost.

All Hope is Not Lost for Flying Cars

It’s a huge science fiction cliché, but it’s something people seemed to actually believe back in the 1950s: in the future, there were going to be flying cars. Wouldn’t that be amazing? No traffic jams, no intersections, and you could go much faster than in a street vehicle. Unfortunately, it’s 2015, and we’re still driving around in ground cars. The population is increasing, the amount of traffic in big metropolitan areas is generally rising, and we’re not getting to work any faster.

Why is it that flying cars aren’t a thing? You can easily point to several reasons. For one, the cost of fossil fuels is rising and current Vertical Take-Off and Landing (VTOL) aircraft are not at all energy efficient. A small two-seater helicopter can easily burn 10 gallons of fuel in just an hour. It’s also difficult to imagine that the average Joe could really afford a flying car when even a small used helicopter from 1963 can cost over 130K. Then there are licensing and safety issues. Drunk driving is a problem now, just imagine if drunk people could fly over the city at high speeds. Maintenance-wise, it’s one thing when your engine stalls on the highway, but imagine what would happen when poorly maintained vehicles break down in the sky.

It recently dawned on me, however, that technology has in fact advanced quite a bit when it comes to flying things. Now, you can buy yourself an electrically-powered, gyro-stabilized quadcopter for just over $50. This wasn’t possible when I was a kid. You’d have been looking at a few hundred dollars for the most basic gas-powered airplane (maybe $500 in 2015 dollars), it would have been a huge amount of effort to maintain, and you couldn’t realistically have flown that in the city.

What does that have to do with flying cars? Well, I had this thought: what if you could scale up flying electric drones? Amazon and Google want to build bigger drones that can deliver packages, but what if you could use drones to deliver people? Thanks to the great push for electric cars, lithium batteries are getting cheaper, lighter and ever more efficient. High-performance electric motors are getting cheaper too. Very soon, it might very well be possible to build electric vehicles that are relatively inexpensive and powerful enough to carry people.

A few days ago, I found out that someone had already built such a thing. The Swarm vehicle weighs just 148kg, sports 54 rotors and can deliver a whopping 22 kilowatts of power. It’s powerful enough to lift one person and cost just 6000 British pounds to build. Okay, this thing only has an autonomy of 10 minutes, poor controls, and it probably isn’t really safe. It’s really just a prototype, but I still think it’s a great proof of concept. Battery technology is progressing constantly, so within a few years, such electric flying vehicles could likely be made smaller and have more autonomy.

I believe that if such large-size electric flying drones could have just 15-20 minutes of autonomy, interesting applications would already become possible. Doing a little back-of-the-envelope math, the island of Montreal (where I live) is only about 16km wide. I myself am only about 8 kilometers away from downtown. If a flying vehicle could fly at 100 kilometers per hour, it would only need 5 minutes at cruise speed to fly me downtown, assuming it’s flying in a straight line. Make it a 7-minute flight to account for acceleration and deceleration. That probably leaves sufficient autonomy for the vehicle to return to some nearby supply station to charge.
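The back-of-the-envelope math above amounts to the following (the 8 km distance and 100 km/h cruise speed are the assumptions from the text):

```javascript
// Straight-line flight time at a constant cruise speed.
function flightMinutes(distanceKm, cruiseKmh) {
  return (distanceKm / cruiseKmh) * 60;
}

const minutes = flightMinutes(8, 100);  // 8 km at 100 km/h
// Just under 5 minutes at cruise speed; call it 7 once you
// budget for acceleration and deceleration.
```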

What, I think, makes such an idea much more realistic is that such drones could be computer-controlled. Completely removing the need for a human pilot makes flights faster and accidents much less likely. The possibility of having multiple independent computer-controlled rotors and multiple batteries also allows for a level of redundancy that might make such a vehicle much safer than a helicopter whose one engine failing means low survivability. If you could add a giant computer-controlled airbag to protect riders, pedestrians and infrastructure from emergency crashes, and cover the drone in flexible foam, you could possibly have a pretty safe autonomous human-delivery mechanism.

The scenario I have in mind goes something like this: you open an app on your phone, within 4 minutes a flying drone pod shows up, flashes safety lights to signal its presence, and gently lands in front of your doorstep (or perhaps at a designated landing pad). A door opens up, you hop into the single seat, the door closes shut and locks itself, and 2 minutes later you’re already flying away. After a 5 to 10 minute flight, you’re deposited up to 16km away from home. Would you ride inside a drone pod to work, if it could mean completely avoiding traffic and shortening your commute to mere minutes?

EDIT 2016-01-07: the Chinese firm EHang has just unveiled the EHang 184, a prototype of a computer-controlled drone meant to carry a single passenger short distances. This is pretty much exactly what I had in mind when I wrote this post four months ago.

Why You Should be a Little Scared of Machine Learning

I recently blogged about my thoughts on the medium-term future of the internet, and the imminent coming of the Smart Web. There’s been a huge amount of progress in machine learning in the last five years, largely due to breakthroughs in deep learning. You might not be directly aware of it, but we’re at the beginning of a machine learning boom right now, a neural network renaissance. Google and Facebook are pouring huge amounts of money into deep learning. In the next few years, we’re going to see the fruits of these investments. Self-driving cars, automatic closed captions and more accurate machine translation come to mind, but I would argue that the ramifications are going to quickly expand well beyond this. If you think computers and the internet have changed the world in the last 20 years, you should really brace yourself for what’s coming, because really, that was just a warm up.

A few days ago, I interviewed at a web advertisement company in New York. Let’s call them Cloud7. They explained to me that they do Real-Time Bidding (RTB). According to them, every major internet ad provider does this now. When you click on a link and start loading a webpage, the ad provider gets blobs of data providing them with a rough idea of who you are (age, sex, income bracket), the websites you’ve been to, what you’ve been shopping for, etc. Many advertisers, wanting to sell you their products, then get to bid some amount (cents, fractions of cents) to buy ad spaces on the page you’re loading. Multiple ad auctions are over in tens of milliseconds before the page is done loading. If you’re wealthy and you’ve been visiting many car websites recently, then car vendors might be willing to outbid everyone to show you car ads, because they stand to make much more money selling you a car than a shoe company would selling you shoes.
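As a rough sketch, the core of such an ad auction might look like the following. This assumes the common second-price rule, where the winner pays the runner-up’s bid; real exchanges follow the OpenRTB protocol and add price floors, fees and timeouts on top.

```javascript
// Minimal second-price auction: the highest bidder wins the ad slot
// and pays the second-highest bid. Amounts are in (fractions of) cents.
function runAuction(bids) {
  const sorted = [...bids].sort((a, b) => b.amountCents - a.amountCents);
  if (sorted.length === 0) return null;  // no demand for this impression
  const winner = sorted[0];
  const price = sorted.length > 1 ? sorted[1].amountCents
                                  : winner.amountCents;
  return { bidder: winner.bidder, priceCents: price };
}

const result = runAuction([
  { bidder: "carVendor", amountCents: 12 },  // wealthy visitor, car sites
  { bidder: "shoeShop", amountCents: 3 },
]);
// carVendor wins the slot and pays 3 cents, the second-highest bid.
```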

You’ll be interested to know that the web advertisement world is already set up so that the information ad providers like Cloud7 receive about you is in part supplied by outfits referred to as third party data providers. There is already, as of now, a market in place for APIs that can produce information about visitors to a webpage. Information about you is already automatically gathered by multiple entities, traded for a monetary value and used to better pick the ads you see. The technology is somewhat primitive right now, but it’s improving constantly. Improvements in ad targeting can translate into huge revenue increases, so there’s a clear incentive to make these systems smarter. There’s a huge incentive to gather a richer set of information about you, and the market to buy and sell that information is already in place.

What deep learning will allow us to do is to bridge the semantic gap between the fuzzy thing that is the real world, and the symbolic world computer programs operate in. Simply put, machines will soon have much more understanding of the world than they currently do. A few years from now, you’ll take a picture of your friend Sarah eating an ice cream cone, and some machine in the cloud will recognize Sarah in that picture. It will know that she’s eating ice cream, probably chocolate flavored by the color of it. Facial expression recognition will make it possible to see that she looks excited with a hint of insecurity. Combining information from multiple third party data providers, it won’t be too difficult to infer that you and Sarah are on your third date together. Looking at browsing history and social network profiles, it might be possible to have a pretty good idea how you two feel about each other, and whether this relationship is going to flourish or perish. What you yourself don’t know is that Sarah wanted to impress you so much, she went out and bought a new dress to wear on this date during her lunch break. Odds are you two will see each other again.

Why would Google or Facebook care about your date with Sarah, and your feelings for each other? Because that information can be useful and valuable in the right hands, which makes that information worth money. You might be more interested in having meals at fancy restaurants near her work in the next few weeks, or in buying that PlayStation 5 game she’s been talking about. Personal lubricant, scented candle and fluffy handcuff manufacturers think you might be more interested in their products than before. I don’t think this is so far-fetched. Google, Facebook, Amazon and every consumer-facing company out there want your money. The better they understand you, your life, and the world, the better chance they have at successfully getting you to hand them your cash. They might actually make your internet experience way more fun in the process. At the very least, the ads you see are going to be increasingly smart and relevant, which isn’t necessarily a bad thing.

Unfortunately, not everyone has “Don’t Be Evil” as their company motto. There’s another group of businesspeople, besides advertisers, which stands to profit hugely from machine learning. The people I’m talking about are scammers. Deep learning can be used to recognize people and objects, extract semantic information out of pictures, videos and tweets, but that’s not all it’s useful for. As illustrated in this amazing blog post, neural networks can also be used to generate content. Soon enough, scammers might be able to automatically produce content that begins to look eerily real. I don’t think it’s that far-fetched to think that your writing style could be imitated, complete with accurate details of your life thrown in. What if there was a program that could generate fake naked pictures of you and e-mail them to people you know? Worse, what if it were possible for a piece of software to call people you know and impersonate your voice on the phone? Sure, the machine doing the calling isn’t self-aware, but if it can have some rudimentary understanding of what people say to it and follow some kind of script, that might still be enough to cause a lot of trouble.

What Killed Smalltalk?

I’ve been thinking about designing my own programming language for a long time. I’ve actually been keeping a lot of notes, and even throwing together some code when I can find time. My plan is to build something that takes inspiration from LISP, JavaScript and Smalltalk. I think there’s a niche to be filled. There are many new programming languages coming out lately, but most of them are statically typed and compiled ahead of time. The dynamic languages that do come out often don’t perform very well (see CPython, Ruby) or have poorly thought out semantics.

I’ve written a few blog posts pointing and laughing at perceived failures of JavaScript, but the truth is that programming language design is hard. There’s no limit to the complexity of the things you can build with programming code. No matter the design choices you make, no matter the language you design, there are bound to be some inconsistencies and weaknesses somewhere. I think that Smalltalk is a very inspiring programming language, revolutionary in many ways, but it’s also one that has gone extinct. It’s interesting, in my opinion, to ask ourselves why Python thrives but Smalltalk died.

Like LISP, Smalltalk implemented some interesting features which have influenced other languages (such as Java and JavaScript). Some of Smalltalk’s niftiest features still aren’t implemented in most other languages. For instance, Smalltalk had the ability to suspend running programs into a saved image, and resume execution later at the saved point. As you can imagine, this is extremely powerful and useful. Forget saving documents or saving games, just save the state of an entire program, no implementation effort required.

I found an interesting talk on YouTube titled “What Killed Smalltalk could Kill Ruby Too”:

Robert Martin makes the case that one of the big weaknesses of Smalltalk is that it was just “too easy to make a mess”. Smalltalk was highly dynamic, and encouraged people to “monkey patch” things and do quick fixes/hacks. He also makes the point that Smalltalk just “didn’t play well with others”. When you think about it, Smalltalk had its own source control, IDE and GUI built into live images, living alongside your program. Smalltalk isn’t just a language, it’s an operating system and a way of life. It’s conflating things that would maybe be best left separate.

It seems to me that in some key areas, the Smalltalk creators placed their own radical ideas above everything else. They chose idealism over pragmatism. Smalltalk was a language created with a grandiose vision. It had some deeply rooted principles which didn’t necessarily work so well in practice, such as the idea that everything had to be an object, that the object metaphor should be applied everywhere, one size fits all. At the end of the day, programmers want to get things done and be productive. If the language design or implementation gets in the way of getting things done, people will leave. Pragmatism is key for a programming language to succeed.

Smalltalk was also designed with the idea that it should be easy to learn and intuitive. This has led its creators to have a heavy focus on graphical user interfaces. I watched an introduction to Self on YouTube (Self is a direct descendant of Smalltalk) and saw the heavy emphasis on interacting with objects through UIs. The user interfaces showcased in this video are, in my opinion, horribly complex and unintuitive. Pretty much all of the interactions done through the UI would have been simpler and easier to understand if they had been done by writing one or two lines of code instead!

When you sit down and think about it for one second, you have to realize that programming doesn’t fundamentally have anything to do with graphical user interfaces. Yes, you can use programming code to create GUIs, but there is no reason that programming should have to involve GUIs and be tied to them. The metaphor of writing code has been extremely successful since the very beginning, and it probably makes more sense to the mathematical mind of a skilled programmer. Not everything has to have a visual metaphor. This is again a case of pushing some idealistic principle too far, in my opinion.

I believe that a lack of pragmatism is something that has killed many languages. Not just Smalltalk, but Scheme too. My first experience with Scheme involved trying and failing to install multiple Scheme distributions because I couldn’t get all the dependencies to work. Then, finally getting a Scheme compiler installed, and struggling to implement simple routines to parse text files, because Scheme doesn’t include the most basic string routines. The Scheme compiler I’d selected bragged that the code it produced was highly optimized, but once I finally managed to write my own string routines, I compiled my program, ran it, and it was dog slow. Parsing a one-megabyte CSV spreadsheet took over a minute. I ended up rewriting the code in Python. Why don’t more people code in Scheme? Because they try to realize their ideas in Scheme, and it just doesn’t quite work out.

JavaScript is the C++ of the Web

When I started my PhD, back in 2009, I told my advisor I wanted to work on optimizing dynamic programming languages. A big part of my thesis was going to involve the implementation of a JIT compiler for some dynamic language, and so our discussion rapidly became focused on which language I should be working with. In the end, we ended up choosing JavaScript. It was a good compromise: a widely-used “real-world” programming language, warts and all, that was still small enough for one person to realistically implement a compiler for. The ECMAScript 5 specification was around 250 pages long, and I read the whole thing from cover to cover before I began working on Higgs.

Since then, I feel I’ve been watching JavaScript go the way of C++: it’s becoming a “kitchen sink” language. So many new features have been added that the new ES6 specification document is literally twice the length of the ES5 specification. Worse yet, a year before the ES6 specification was even completed, there was already a laundry list of features scheduled for integration into ES7. They weren’t nearly finished with ES6, and they were already planning ES7. There are a number of semantic inconsistencies in JavaScript that need fixing, but the ES6 and ES7 additions do nothing to fix those, they merely add new features (read: complexity) to the language.

Personally, I’m a big fan of simplicity and minimalism in programming language design. I think that smaller languages have the potential to be easier to implement, optimize, teach, debug and understand. The bigger your language, the more semantic warts will pop out and the more behavioral inconsistencies are going to occur between different VM implementations. If JavaScript is really “the assembly language of the web”, then why does it need all these high-level features? The logical thing to do would have been to freeze as much of the JS semantics as possible, and focus on improving support for JS as a compiler target. I believe that the answer as to why JS keeps growing is largely design by committee.

Of course I’m biased. I implemented my own JavaScript JIT compiler and the fact is, I’m too busy to keep up with all these new additions. Still, it seems to me that in the web world, nobody takes the time to pause, breathe and think things out for even a moment. Case in point: Mozilla made a lot of noise with asm.js, a standard for compiling native code to JS that was allegedly better than Google’s Native Client. I think asm.js is still new enough that developers haven’t really had any time to adopt it; it’s only been used in tech demos. But Mozilla and Google are already working on WebAssembly, which in all likelihood will make asm.js irrelevant. Think about that for a second: asm.js, which is still very new (2013, it’s only two years old), is already mostly irrelevant, before anyone even had time to adopt it.

WebAssembly is essentially what Brendan Eich told us we didn’t really want or need: a bytecode format for the web. A somewhat more neutral platform for all compilers to target. As a compiler implementer, it still seems to me like it’s a bit of an unfortunate compromise: a way to retrofit a web-bytecode into JavaScript VMs. It’s going to take programs encoded as Abstract Syntax Trees (ASTs) as input, whereas GCC, clang, and other real-world compilers usually generate Control Flow Graphs (CFGs) at the output stage, not ASTs. Forcing compilers to convert CFGs back into ASTs seems like a decision made to simplify the job of WebAssembly VM implementers, at the expense of everyone else.

All Possible Thoughts

I’ve recently been thinking about the topic of originality. You’ll often hear people say that “it’s all been done before” and “what’s old is new again”. The world population has recently passed the 7 billion mark. According to some estimates, there have been up to 120 billion human beings alive since the dawn of humanity. In a world so big, it’s hard to believe you’re unique. It’s easy to feel irrelevant and worthless. Some philosophers have even tried to make the argument that all possible thoughts have been thought of before, leaving you no chance of ever coming up with anything original. After all, human beings have existed for hundreds of thousands of years, and if there’s been 120 billion of us so far, there’s been a lot of thinking going on.

I think the best way to answer this question is with a thought experiment. We don’t know enough about neuroscience to exactly define what constitutes a “thought”. I’ll make some simplifying assumptions to give us a chance to get a grasp on the problem.

Let’s imagine that:

  • Thoughts are patterns of neural firings in a small cluster of 512 neurons in your brain.
  • Every human being has this same neural cluster.
  • The wiring of the thought cluster is entirely fixed and identical in every individual, unaffected by environment or genetics.
  • Neurons in the thought cluster fire in a synchronized manner, 1000 times per second.

In this imagined view, each thought is representable as a boolean vector of 512 bits, and any brain can have up to 1000 thoughts per second. In our imagined, simplified world, there are 2^512 ≈ 1.34×10^154 possible thoughts in total.

Using some back of the envelope math, assuming there have been 120 billion human beings alive so far, each living for 100 years, each having up to 1000 possible thoughts per second, this gives us:

1000 × (365 × 24 × 60 × 60) × 100 ≈ 3.2×10^12 thoughts per human being over a 100-year lifespan.

Hence (120×10^9) × (3.2×10^12) ≈ 3.84×10^23 thoughts have happened so far, out of 1.34×10^154 possible thoughts.
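For anyone who wants to check the arithmetic, here is a quick sketch in Python. The constants are just the simplifying assumptions from the thought experiment above, nothing more:

```python
# Back-of-the-envelope check of the thought-experiment numbers.
# These are the simplifying assumptions from the text, not real
# neuroscience: a 512-neuron cluster firing synchronously 1000x/sec.

NEURONS = 512                      # size of the hypothetical thought cluster
THOUGHTS_PER_SEC = 1000            # synchronized firing rate
LIFESPAN_YEARS = 100
HUMANS_EVER = 120e9                # rough estimate of humans who ever lived

possible_thoughts = 2 ** NEURONS   # each thought is a 512-bit vector
seconds_per_life = 365 * 24 * 60 * 60 * LIFESPAN_YEARS
thoughts_per_life = THOUGHTS_PER_SEC * seconds_per_life
thoughts_so_far = HUMANS_EVER * thoughts_per_life

print(f"possible thoughts: {possible_thoughts:.3e}")   # 1.341e+154
print(f"thoughts per life: {thoughts_per_life:.3e}")   # 3.154e+12
print(f"thoughts so far:   {thoughts_so_far:.3e}")     # 3.784e+23
```

The exact product comes out near 3.78×10^23; the text’s 3.84×10^23 simply carries the rounded 3.2×10^12 through the multiplication. Either way, the gap between 10^23 and 10^154 is the whole point.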

You might be wondering what the point of this was. My example is obviously ridiculous. Human thoughts likely are not patterns of firings in a cluster of 512 neurons. We have tens of billions of neurons in our brains, each with thousands of synapses, and our neurons do not fire according to a synchronous clock like a modern silicon chip. Furthermore, each brain’s connectivity is uniquely affected by a combination of both environment and genetics, and hence, no two people have exactly the same neurons and synapses in the same place.

The point is that the estimate of 1.34×10^154 possible thoughts probably undershoots the real number by at least a hundred orders of magnitude, while the estimate of 3.2×10^10 thoughts per year per human being may actually be generous. Hence, I surmise that not every possible thought has been thought. Far from it. The universe will likely dissipate before that has any chance of happening.

Feels like Censorship

I just got informed that my second paper on basic block versioning, an extension of my previous work, has been rejected. Most academics don’t really talk about these things. You probably shouldn’t publicly say that your paper has been rejected, because you want to project some kind of image of never-ending flawless success. The calculated, business-like, aseptic thing to do is to keep quiet, rework your paper, submit it somewhere else, rinse and repeat.

I’m talking about it. I need to let off some steam and express my frustration a bit. If that’s a bad career move, well, so be it. I don’t want to spend my life hiding behind a façade, pretending I’m perfect and always cheerful. Living life without ever expressing yourself is a fast path to depression, if you ask me. At the moment, I’m both frustrated and sad. I spent months working on this paper. It was a good paper. Somehow though, it wasn’t good enough. It didn’t make the cut. Better luck next time. Call me cynical, but it is a little depressing considering this conference has an acceptance rate of about 45%. Damn.

I’ve worked with a conference’s program committee before. I’ve had to evaluate a paper about a programming language that consisted of a hand-written AST encoded in XML, with no tool support. I think the paper was 8 pages long. They were pitching this as a revolutionary new idea. This was back in the day of the XML-all-the-things craze. Are you telling me that my latest submission is in the same category as the XML one? I guess when it comes to computer science conferences, you’re either a zero or a one. There is no middle ground. Your idea is either deserving of publication, or piped into /dev/null.

The perverse thing is that this constant stream of rejection discourages exploration. As an academic, you really want your papers to get accepted. Your funding and ultimately your academic career depend on it. I’ve already started to adapt the way that I work. When I started my PhD, I had no idea how the paper game was played. Now, when I have a new idea for my research, I have to ask myself: is this publishable? It’s really interesting, it has a lot of potential, but is it publishable?

To publish your idea, you should craft the smallest possible publishable unit. It needs to be sexy and trendy. It needs to be about JavaScript. It needs to reference as many recent papers as possible, and ideally, point in the same direction as those papers. Contradicting established wisdom is not smart. Suggesting alternatives to the established wisdom is not very smart either. You’re contradicting iron-clad, proven, mathematical facts, which means you are wrong.

The reason conferences have limited acceptance rates dates back to the days when conference papers were printed in books called “proceedings”, which had to be purchased and shipped by mail. You couldn’t accept every paper; it wasn’t physically or financially possible. Nowadays, Google’s server farms alone are estimated to hold multiple exabytes of storage. We could conceivably make all submissions to all conferences available on conference websites.

Why do so many computer science papers come without any source code? Because the current practices in our field discourage replication and encourage “massaging” of results. In the spirit of transparency, we could make all submissions available, along with all of the reviewer comments. Maybe we don’t want all papers to be on the same footing. Maybe your paper would get ranked into class A, B, C or D, maybe you’d get some score on a 5 or 10 point scale. Certainly, not everyone could realistically be invited to come and give a talk. Still, is there really a need to silently discard 50 to 90% of all submissions to a conference?

It feels like censorship. When a paper is rejected, it strongly discourages further exploration of that research avenue. You’re telling me that my idea doesn’t deserve to be seen. Worse, you’re giving my academic competitors a chance to beat me to the punch. Science is about proving and disproving things, but it’s also about playing with ideas. In the world of computer science conferences, there’s very little room for disproving anything, and even less room for playing with ideas. We don’t have time for that. The next conference deadline is coming up real soon, and we have funding applications to write. Peer reviews can become peer pressure, a civilized form of hazing.

Fortunately, my paper is already online on arXiv. It’s timestamped. It’s out there. I don’t know if I’ll have time to publish this paper at an academic conference before the end of my PhD; I’m being pressed to finish as soon as possible and to submit more papers. If it gets rejected one, two, or three more times, it might never get into any conference. I can at least take some comfort in the idea that some of my research was published, and my latest work is out there. It might inspire someone to explore a similar research direction.

My personal opinion is that academic research in compilers is dying. It’s going to go the way of operating systems research. Why? Because there’s too much infrastructure to build. It takes too long. It’s just not practical to publish about. These days, the game-changing, innovative work in compilers is largely happening in the industry, and it’s being done by people who left academia.

Presented at ECOOP

This week I am in Prague, at the European Conference on Object-Oriented Programming (ECOOP), to present my research on basic block versioning. Getting to ECOOP was fairly stressful. I flew overnight, and I can never manage to sleep on airplanes. Sleep-deprived, I had to run like mad to make a nearly impossible connection in Paris: the Charles de Gaulle airport is organized such that I had to wait for two shuttle buses and go through security twice. Fortunately, the Paris–Prague flight was slightly delayed, and I barely made the connection, but my checked luggage did not.

I presented my paper Wednesday afternoon. The talk went very smoothly and the audience questions were rather friendly. The paper is now available online from the publisher if you’re interested in reading it. I was very happy to see that my talk and all others were filmed. The video is not yet available, but I have uploaded the slides. In addition to giving a talk, I also presented a poster explaining the main aspects of my paper. I was pleasantly surprised when they informed me that I had won the distinguished poster award.

There are many interesting people here, including VM engineers from Mozilla and Google, as well as Brendan Eich and Bjarne Stroustrup. I had the privilege of visiting tourist sites, sharing a meal and discussing VM design with Carl Friedrich Bolz (of PyPy fame) and Sam Tobin-Hochstadt. My main regret is that I’ve had a very difficult time adapting to the local time zone: I’m sleeping poorly at night and crashing every afternoon, which has made me miss many interesting talks. I’m looking forward to the recorded videos being uploaded; the VM and language design talks from Curry On are of particular interest to me.