The AI Breakthrough that Won't Happen

It's now been over 50 years since the birth of the AI field, and we still have no androids or sentient computers. Some believe it's because Artificial General Intelligence (AGI) is simply unachievable, others say it's because we just don't have the computing capacity required yet, and some think it's because we're lacking a fundamental breakthrough: we just haven't grokked the "general-purpose intelligence algorithm". The latter view is reflected in Jeff Hawkins's book On Intelligence.

I'm currently reading Nick Bostrom's book Superintelligence (which I highly recommend), and it's got me thinking about intelligent machines, a topic that has fascinated me since childhood. My own opinion is that strong AI is almost inevitable. Assuming that humanity doesn't destroy itself or fall back into a new dark age, sentient computers will happen; it's only a matter of time. I'm of course hesitant to put a time scale on this prediction, since such predictions have so often been wrong, but I'm inclined to believe that it will happen within this century.

I personally don't believe that computing capacity has ever been the problem. We've had impressively powerful supercomputers for decades; we just haven't had very effective machine learning algorithms. The fact is that until recently, nobody knew how to train a neural network five layers deep, short of brute-forcing the problem with genetic algorithms, which is obviously not what human brains do. Now, with deep learning, we're finally starting to see some interesting progress in machine learning, with systems outperforming human beings on certain image recognition tasks.
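
To make the notion of "depth" concrete, here is a minimal sketch of what a five-layer network looks like in a modern framework. This is purely illustrative and assumes PyTorch is available; it is not tied to any particular system mentioned in this post.

    # Minimal sketch: a five-layer fully connected network trained by
    # backpropagation (illustrative only; assumes PyTorch is installed).
    import torch
    import torch.nn as nn

    # Five stacked linear layers with nonlinearities in between.
    model = nn.Sequential(
        nn.Linear(784, 256), nn.ReLU(),
        nn.Linear(256, 128), nn.ReLU(),
        nn.Linear(128, 64), nn.ReLU(),
        nn.Linear(64, 32), nn.ReLU(),
        nn.Linear(32, 10),
    )

    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()

    # One gradient step on a random batch, standing in for real data.
    x = torch.randn(64, 784)
    y = torch.randint(0, 10, (64,))
    loss = loss_fn(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()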

Still, I don't believe that deep learning is the one breakthrough that will lead to strong AI. Deep learning is impressive and cool, but it isn't strong AI; it's simply much better machine learning than what we've had until now. I don't think strong AI will come from any single breakthrough: we're not going to suddenly grok the one algorithm for AGI. The people who believe this, in my opinion, fail to appreciate the complexity of the human brain. Our brains are made of a multitude of components specialized to perform different tasks effectively. Hence, it would be surprising if we could obtain human-level intelligence from a single algorithm.

The obsession with cracking the AGI algorithm, in my opinion, stems from the obsession some computer scientists have with mathematical elegance and with algorithms that fit within a single page of text: algorithms short and simple enough that their emergent mathematical properties can easily be proved or disproved. The human brain, however, is the result of an evolutionary process that occurred over hundreds of millions of years, and it is too complex to be described by an algorithm that fits within a single page. The reason we haven't created AGI, in my opinion, is that several component parts need to be put together in non-trivial ways. At this stage, machine learning, as a field, has largely been figuring out how these individual components can be built and optimized.

I think there will come a point where we begin to put different machine learning components, such as image classifiers, planners, databases, speech recognizers and theorem provers, together into a coherent whole, and begin to see something resembling AGI. I don't think, however, that some computer will suddenly reach sentience at the flick of a switch. Engineering is similar to natural selection in that it's often a process of iterative refinement. It's now possible to fly anywhere on the globe in a jet-propelled airplane, but the airplanes of today are very different from the Wright brothers' prototype. There's no reason to think AI is any different.
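
As a toy illustration of what "putting components together" might mean, here is a sketch of an agent loop that routes observations through separate specialized components. Every class and function here is a hypothetical placeholder of my own, not an existing system or library.

    # Toy sketch of composing specialized components into one agent loop.
    # Every class below is a hypothetical stand-in, not a real system.

    class ImageClassifier:
        def classify(self, image):
            # Placeholder: a real system would run a trained vision model.
            return "cup" if image.get("round") else "unknown"

    class KnowledgeBase:
        def __init__(self):
            self.facts = set()

        def add(self, fact):
            self.facts.add(fact)

    class Planner:
        def next_action(self, goal, facts):
            # Placeholder: a real planner would search over action sequences.
            return "grasp" if ("cup", "visible") in facts else "explore"

    def agent_step(observation, goal, classifier, kb, planner):
        # Perception feeds the knowledge base; the planner acts on it.
        label = classifier.classify(observation)
        kb.add((label, "visible"))
        return planner.next_action(goal, kb.facts)

    action = agent_step({"round": True}, "fetch cup",
                        ImageClassifier(), KnowledgeBase(), Planner())
    print(action)  # -> "grasp"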

The first AGIs probably won't be "superintelligent". It's likely that they will initially lag behind humans in several domains. They may show great intelligence in some areas while failing to grasp key concepts about the world which seem obvious to us. Consider dogs: they're sufficiently intelligent to be useful to us, but they can't grasp everything humans do. Provided that we do eventually create AGI, these AIs will likely reach and surpass human intelligence, but this process could take years or even decades of iterative refinement.

An interesting question, in my opinion, is whether the first AGIs will exist as agents on the internet, or whether they will come in the form of embodied robots. In the near future, I predict that we will see an increasing presence of intelligent agents online: the rise of the smart web. These agents will be used to automatically extract semantic information from web content, by doing things such as tagging videos or text with metadata, and perhaps predicting the behavior of human individuals. This, however, is not AGI. There's an argument to be made that robots may bring forth AGI simply because the need is there: we expect robots to deal effectively with the complexities of the real world as it is, which seems to require human-level intelligence.