
The AI Breakthrough that Won’t Happen

May 10, 2015

It’s now been over 50 years since the birth of the AI field, and we still have no androids or sentient computers. Some believe it’s because Artificial General Intelligence (AGI) is simply unachievable, others say it’s because we don’t yet have the computing capacity required, and some think it’s because we’re lacking a fundamental breakthrough: we simply haven’t grokked the “general-purpose intelligence algorithm”. The latter view is reflected in Jeff Hawkins’s book On Intelligence.

I’m currently reading Nick Bostrom’s Superintelligence (which I highly recommend), and it’s got me thinking about intelligent machines, a topic that’s fascinated me since childhood. My own opinion is that strong AI is almost inevitable. Assuming that humanity doesn’t destroy itself or fall into a new middle age, sentient computers will happen; it’s only a matter of time. I’m of course hesitant to put a time scale on this prediction, since such predictions have so often been wrong, but I’m inclined to believe it will happen within this century.

I personally don’t believe that computing capacity has ever been the problem. We’ve had impressively powerful supercomputers for decades; we just haven’t had very effective machine learning algorithms. The fact is that until recently, nobody knew how to train a neural network five layers deep, short of brute-forcing the problem with genetic algorithms, which is obviously not what human brains do. Now, with deep learning, we’re finally starting to see some interesting progress in machine learning, with systems outperforming human beings in image recognition tasks.
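For readers curious what “training a deep network” amounts to in practice, here is a minimal sketch of my own (illustrative only, not code from any system discussed here): a small network with two hidden layers trained by backpropagation on XOR, in plain NumPy. Modern deep learning frameworks automate this same gradient computation at vastly larger scale.

```python
# Minimal illustrative sketch: a two-hidden-layer network trained by
# backpropagation on XOR, using plain NumPy (no framework).
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Layer sizes 2 -> 8 -> 8 -> 1: three weight matrices, three bias vectors.
sizes = [2, 8, 8, 1]
W = [rng.normal(0.0, 0.5, (m, n)) for m, n in zip(sizes, sizes[1:])]
b = [np.zeros(n) for n in sizes[1:]]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(10000):
    # Forward pass, keeping every layer's activations for backprop.
    acts = [X]
    for Wi, bi in zip(W, b):
        acts.append(sigmoid(acts[-1] @ Wi + bi))

    # Backward pass: push the output error back layer by layer.
    delta = (acts[-1] - y) * acts[-1] * (1 - acts[-1])
    for i in reversed(range(len(W))):
        W_grad = acts[i].T @ delta
        b_grad = delta.sum(axis=0)
        if i > 0:
            delta = (delta @ W[i].T) * acts[i] * (1 - acts[i])
        W[i] -= lr * W_grad
        b[i] -= lr * b_grad

print(acts[-1].round(2).ravel())  # converges toward [0, 1, 1, 0]
```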

Still, I don’t believe that deep learning is the one breakthrough that will lead to strong AI. Deep learning is impressive and cool, but it isn’t strong AI; it’s simply much better machine learning than what we’ve had until now. I don’t think strong AI will come from one single breakthrough. We’re not going to suddenly grok the one algorithm for AGI. The people who believe this, in my opinion, fail to appreciate the complexity of the human brain. Our brains are made of a multitude of components specialized to perform different tasks effectively. Hence, it would be surprising if we could obtain human-level intelligence from any one single algorithm.

The obsession with cracking the AGI algorithm, in my opinion, stems from the obsession some computer scientists have with mathematical elegance and the study of algorithms that fit within a single page of text: algorithms short and simple enough that their emergent mathematical properties can easily be proved or disproved. The human brain, however, is the result of an evolutionary process that occurred over hundreds of millions of years, and is too complex to be described by an algorithm that fits within a single page. The reason we haven’t created AGI, in my opinion, is that several component parts need to be put together in non-trivial ways. At this stage, machine learning, as a field, has largely been figuring out how these individual components can be built and optimized.

I think there will come a point where we begin to put different machine learning components, such as image classifiers, planners, databases, speech recognizers and theorem provers, together into a coherent whole, and begin to see something resembling AGI. I don’t think, however, that some computer will suddenly reach sentience at the flick of a switch. Engineering is similar to natural selection in that it’s often a process of iterative refinement. It’s now possible to fly anywhere on the globe in a jet-propelled airplane, but the airplanes of today are very different from the Wright brothers’ prototype. There’s no reason to think AI is any different.
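To make the idea concrete, here is a hedged sketch of what such a composition might look like. Every interface below is a hypothetical placeholder of my own invention, standing in for the kinds of specialized modules mentioned above; the hard, unsolved part is building components that are good enough and wiring them together in non-trivial ways.

```python
# Hypothetical sketch: an agent assembled from specialized components
# behind narrow interfaces. None of these classes are real systems.
from dataclasses import dataclass

@dataclass
class Percept:
    labels: list        # e.g. what an image classifier saw
    transcript: str     # e.g. what a speech recognizer heard

class Agent:
    def __init__(self, classifier, recognizer, memory, planner):
        self.classifier = classifier  # image -> labels
        self.recognizer = recognizer  # audio -> text
        self.memory = memory          # database of past percepts
        self.planner = planner        # (goal, memory) -> next action

    def step(self, image, audio, goal):
        # Perceive, remember, then decide: each stage is a separately
        # built and optimized component, not one monolithic algorithm.
        percept = Percept(self.classifier(image), self.recognizer(audio))
        self.memory.store(percept)
        return self.planner.plan(goal, self.memory)
```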

The first AGIs probably won’t be “superintelligent”. It’s likely that they will initially lag behind humans in several domains. They may show great intelligence in some areas while failing to grasp key concepts about the world which seem obvious to us. One can simply think of dogs: they’re sufficiently intelligent to be useful to us, but they can’t grasp everything humans do. Provided that we do eventually create AGI, these AIs will likely reach and surpass human intelligence, but this process could take years or even decades of iterative refinement.

An interesting question, in my opinion, is whether the first AGIs will exist as agents on the internet, or whether they will come in the form of embodied robots. In the near future, I predict that we will see an increasing presence of intelligent agents online, the rise of the smart web. These agents will be used to automatically extract semantic information from web content, by doing things such as tagging videos or text with metainformation, and perhaps predicting the behavior of human individuals. This, however, is not AGI. There’s an argument to be made that robots may bring forth AGI simply because the need is there. We expect robots to effectively deal with the complexities of the real world as it is, which seems to require human-level intelligence.
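As a toy illustration of what “tagging text with meta-information” could mean, here is a deliberately crude sketch; the keyword table is my own invention, standing in for the learned models a real agent would use.

```python
# Toy sketch (illustrative, not a real system): a crude "smart web" agent
# that attaches meta-information to a piece of text content.
TOPIC_KEYWORDS = {
    "machine-learning": {"neural", "training", "classifier", "learning"},
    "robotics": {"robot", "sensor", "actuator"},
}

def tag(text: str) -> dict:
    """Return simple meta-information inferred from the raw text."""
    words = set(text.lower().split())
    topics = [t for t, kw in TOPIC_KEYWORDS.items() if words & kw]
    return {"length": len(text), "topics": topics}

print(tag("Training a neural classifier for robot sensor data"))
# {'length': 50, 'topics': ['machine-learning', 'robotics']}
```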

7 Comments
  1. stan

    I saw Nick Bostrom’s TED talk, and meandered my way through Ray Kurzweil’s “Singularity”. I use my own metaphor to grok what’s happening when AI happens.

    At some point, trees and humans were both sufficiently established on the planet. Then humans began clearcutting trees, not for nefarious reasons.

    The AI is eventually going to be operating so differently that our conversations about “roots”, “sunlight” & “soil” will have as much relevance to an AI as they do to a mammal strolling through the forest.

    A tree is made up of the same carbon and molecules we are. Trees can’t and won’t possibly plan for mammals!

    We built our cities and civilization, but even your local tree hugger depends on resources. I agree with Bostrom: we had better make damn sure that the foundation of a future AI plays nicely with the slow mammals!

    How would we even know if an AI emerged?
    You are correct that it would be impossibly complex. I fall into the camp of Stephen Hawking’s warning, and Elon Musk’s of Tesla.

    • I do think that being careful is warranted. The dangers are real. What might play in our favor is that AI might evolve fairly slowly in the beginning, which would give us some chance to put safeguards in place and “beta test” things before they get out of hand. Having multiple AI entities instead of a monolithic AI also means we may have AIs protecting humanity should things go sour (a scenario present in the Hyperion book series). Of course, safeguards or not, there is danger, especially in the long term, but it’s not clear that machine intelligences would have the same motives that we humans do. They might not care about greed or self-preservation. They may be infinitely patient with us, even though we are dumb animals.

  2. “Assuming that humanity doesn’t destroy itself or fall into a new middle age”

    [Insert standard boilerplate cautioning against the view that the Middle Ages were a period of technical regress]

    I too think that AGI is an inevitability. To hold an opposing view seems to be making an appeal to something that we can neither model nor simulate, which is a touch metaphysical.

    With respect to your point: “The human brain, however, is the result of an evolutionary process that occurred over hundreds of millions of years, and is too complex to be described by an algorithm that fits within a single page.”

    I would suggest that we shouldn’t necessarily discount simple algorithmic approaches to AGI because the brain is complex. It may be the case that the brain is unnecessarily complex for learning, having inherited all of that evolutionary ancestry.

    Yes, the human brain is excellent at learning. But maybe that’s not the only way to learn.

    Nice post, thanks.

  4. Once AGI has been developed, how does one teach it to high schoolers and undergraduates?

    “It’s now been over 50 years since the birth of the AI field, and we still have no androids or sentient computers.”

    The problem is the humans, not the computers; in particular an attitude, incentives, and education problem. People will have to stop writing extra-cryptic papers (Gerald Tesauro, I’m looking at you… http://www.bkgm.com/articles/tesauro/tdl.html) that sit around for literally decades before the community has the time and background to begin to digest the ideas and turn them into something that resembles “chapter six of SICP”.

    “I personally don’t believe that computing capacity has ever been the problem.”

    Quite right, the problem has ALWAYS been the habits of the academic community…which have served to keep the specialized knowledge of AI in the hands of the few.

    “we just haven’t had very effective machine learning algorithms”

    OK, you have the domain right, there IS a conspicuous lack of ML techniques in the average undergraduate CS education, but “Learning to Predict by the Methods of Temporal Differences” came out in ’88 (although it appears that RSS didn’t make it freely available on his website until July of last year http://webdocs.cs.ualberta.ca/~sutton/papers/sutton-88.pdf). I think this is a “last mile” problem where work needs to be done to break down difficult concepts into bite-sized chunks so that undergraduates and high schoolers can master this material. I think turning TD-Gammon and its ilk into chapter six of SICP would be a big step forward.
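    In that bite-sized spirit, here is a minimal sketch of TD(0) value prediction on the five-state random walk (the running example in Sutton’s paper); this is my own illustrative code, not code from the paper:

    ```python
    # TD(0) value prediction on the 5-state random walk from Sutton (1988).
    # Illustrative sketch only, not code from the paper.
    import random

    STATES = 5                              # non-terminal states A..E
    V = [0.5] * STATES                      # value estimates, init to 0.5
    alpha = 0.1                             # learning rate

    for episode in range(2000):
        s = STATES // 2                     # every walk starts in the middle
        while True:
            s_next = s + random.choice([-1, 1])
            if s_next < 0:                  # left terminal: reward 0
                V[s] += alpha * (0.0 - V[s])
                break
            if s_next >= STATES:            # right terminal: reward 1
                V[s] += alpha * (1.0 - V[s])
                break
            # TD(0): nudge V[s] toward the bootstrapped target V[s_next]
            V[s] += alpha * (V[s_next] - V[s])
            s = s_next

    print([round(v, 2) for v in V])         # true values: 1/6, 2/6, ..., 5/6
    ```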

    “Engineering is similar to natural selection in that it’s often a process of iterative refinement.”

    I agree…the social dynamics of the academic community dominated, and kept the pace of iteration at a trickle. There was a feeling for many years that only certain special people (CS grad students, maybe a few mathematicians and statisticians as well) should have the right to iterate, since it was felt that the quality of progress in the field was more important than the quantity. But now we have github, which has turned everything inside out.

    “We’re not going to suddenly grok the one algorithm for AGI.”

    Hold on; we CAN grok the one social algorithm that lets the community go as far down the path to AGI as desired. Already GIT has led to a revolution in programming practice. Getting the people organized and working with the right tools is the trick. But as for algorithms, I hope that it will become easier to offload some of the “grunt work” of programming onto computers; a program that can refactor extensions of itself and express those extensions in mathematical terms (these are the basic tasks I’d expect of an intelligent human programmer) would be, in my opinion, the equivalent of “power steering” for development. Right now, development on a moderately large codebase is like running through mud: a huge amount of effort is required to get anything done. Most developer time is spent reading through code, trying to figure out what is going on. Today, refactorings are almost always done by hand (you have to understand the program model in order to refactor it; how many programming languages directly benefit from the parallel development of mathematical models, to the point where the model can be used to “pivot” the code? None that I know of).

    “whether the first AGIs will exist as agents on the internet”

    I’m 99% certain it will be bots…today we have websites like Hacker News that aggregate information and filter it by voting. Tomorrow we will have bots that aggregate their own code according to votes. This has already happened: https://github.com/botwillacceptanything/botwillacceptanything/

    I’m fairly confident that github is 80% AGI already. Maybe there will be a “Frankenstein’s monster” moment once a bot is observed to create a pull request for itself that refactors its own code. This will be one step toward “programming with math” as opposed to current practice, “programming with code”.

  5. Nicholas

    True AGI has more to do with the method of interface: in today’s computers, the processing is divorced from the memory. Until we have processors that have memory, anything that is consciously controlling itself as AGI will be impossible. Sufficiently advanced memristors, though, will be able to create AGI. It might be time to start investing in IBM.

