
The AI Risk Isn’t What You Think

October 20, 2017

Recently, a number of prominent figures, including Elon Musk, have been warning us about the potential dangers that could arise if we can’t keep artificial intelligence under control. The fear surrounding AI is nothing new: novels and stories about robot takeovers date back as far as the 1920s, before the advent of computers. A surge of progress in the field of machine learning, and the subsequent investment of hundreds of billions of dollars into AI research by giants such as Google, Amazon, Facebook and Microsoft, has brought this fear back to the forefront. People are waking up to the fact that the age of AI and robots is coming soon, and that self-aware AI could very likely become a reality within their lifetime.

The 2014 book Superintelligence: Paths, Dangers, Strategies by Nick Bostrom embodies this fear. In his book, Bostrom details multiple scenarios in which AI could spiral out of control. I believe that the author has achieved his goal: he has successfully scared many researchers into paying attention to the existential threat surrounding AI, to the point where AI safety is now a serious field of research in machine learning. This is a good thing. However, I think that Bostrom’s book is in many ways alarmist, and draws attention away from some of the bigger, more immediate threats surrounding AI.

Many of the doomsday scenarios in Superintelligence are centered on the idea that AI entities will be able to rapidly improve themselves, and reach “escape velocity” so to speak: that they will go from human-level intelligence to something far beyond it in a ridiculously short amount of time. In many ways, I believe this portrays a poor understanding of the field of machine learning, and of the way technology usually progresses. I see at least three factors that make this scenario unlikely:

  1. While the idea of an AI entity rewriting its own machine code may be seductive to sci-fi authors, the way deep neural networks operate now, they would be hard pressed to do such a thing, particularly if they weren’t designed with that purpose in mind.
  2. Currently, machine learning researchers are struggling to put together enough computational power to train neural networks to do relatively simple things. If an AI became self-aware tomorrow, it probably couldn’t double its computational power overnight, because doing so would require access to physical computing resources that simply aren’t there.
  3. Sudden explosive progress is not the way any past technology has progressed. As rapidly as computers have evolved, it took decades to get from the ENIAC to the computers we have now. There is no reason to think that AI will be radically different. So far, the field of machine learning has seen a fairly gradual increase in the capabilities of its algorithms, and it took decades to get to where we are now.

Silicon Valley likes to tell us that technological progress goes at an exponential rate, but fails to deliver any real evidence backing this dogmatic belief. In the case of self-aware AI, I think a more likely scenario is that we will be building machines with increasing levels of awareness of the world. We’ll build robots to clean up around our homes, and the first ones will be fairly stupid, limited to a small set of tasks. With newer generations, they’ll become capable of doing more and more, and of understanding more and more complex instructions. Until, eventually, you’ll be talking to a robot, and it will understand you as well as another human being would.

In my opinion, the advent of self-aware AI will require several more breakthroughs in machine learning. It may also require several generations of hardware that is designed with the sole purpose of accelerating neural networks. The good thing is that if self-aware AI takes a long time to emerge, the first general-purpose AIs will have a fairly limited understanding of the world, and limited computational capabilities. This means those first AIs will simply not be capable of taking over the world. It also means we may have several years to test a number of fail-safe mechanisms between the time where AIs start to have a useful understanding of the world, and the point where they are genuinely dangerous.

I think that, in some ways, the focus on the existential threat surrounding AI distracts us from a bigger, more immediate danger. AI is an immensely powerful tool. In the hands of giant corporations like Google and Facebook, it can be used to sift through every text message and every picture you post online. It can be used to analyze your behavior and control the information you see. The biggest risk posed by AI, in my opinion, is that it’s a tool that can be used to manipulate your life in ways that are useful to those who control the AI. It’s an incredibly powerful tool controlled by a very small few.


7 Comments
  1. Antony Riakiotakis

    Well written and level-headed article. I’ve seen some research from Google on automating the selection of learning components for various tasks, so in the future I guess one can imagine an AI doing that, though doing it for real-world tasks requires understanding and experience of the real-world problem, and that is the very unsolved problem we currently have anyway. In any case, the computational resource scarcity argument is convincing enough – unless AIs can figure out an optimized way to function? Still too far off for scenarios like that, though.

  2. “Silicon Valley likes to tell us that technological progress goes at an exponential rate, but fails to deliver any real evidence backing this dogmatic belief.”

    I’m inclined to agree. If computers were designed to jump, the height they could jump would increase exponentially. But any “progress” in terms of application is far more gradual. People conflate increases in raw processing power with equal increases (doubling) of other, mostly unrelated abilities. The raw power of computing progresses exponentially; the actual design and application often doubles back and has ups and downs, both in software and hardware, as kinks in the designs are introduced and worked out.

  3. I am glad that you mention that hardware is a limitation. Too many people, unfortunately even hardware engineers (https://spectrum.ieee.org/video/semiconductors/nanotechnology/how-will-we-go-beyond-moores-law-experts-weigh-in), think that software is the limiting factor (e.g. no parallelism).
    Hardware always sets the limit (cost, parallelism, memory, speed) for major systems (e.g. AI, games, compilers, JIT, databases, search engines, alternative internets,…).
    While I think that (military) AI has the theoretical potential to harm and even destroy human civilization this century, I see no need for governments and companies to regulate and limit progress (like e.g. with cryptography today) as proposed by some.
    I see no problem with big companies that try to manipulate people.
    I see a problem with people and big companies that try to work and live like in the past and that hamper progress and improvement intentionally.
    IMO, after the next decade:
    – There will be no poverty anywhere in the world.
    – An exponential increase in genetically and chemically optimized transhumans with implants will drive exponential progress in science and AI.

  4. n a

    Good stuff from everyone here so far. While I don’t think AI will ever become “self-aware”, I think one of the limiting factors of self-improvement for AI is something humans take for granted: motivation. If you limit the motivation of an AI, it’s never going to dream of taking over the world. It’s what people DO with that data that matters, as you already pointed out in your other article. I think what needs to be done on our part – as geeks – is to bring this stuff down to earth and raise the awareness of more simple-minded people by putting it in their terms. This is admittedly hard to do, but if it isn’t done, we’re going to have a ton of people in this country who fear the robot before they fear the real threat – the person controlling it.

    • IMO self-awareness is very easy to achieve because self-awareness is awareness of your own body and actions. Self-driving cars have self-awareness.
      Motivation of self-improvement can be very simple and very complicated:
      – In the easy case, the AI program is just computed.
      – In the complicated case, the AI (program) must be changed so that its execution does what is wanted.
      The more complicated problem is the concept of “improvement”: improvement based on what judgement? What scenario?
      IMO an AI should be judged based on what tasks the AI can complete.
      http://machineperson.org/AI.html

  5. All useful considerations, but why can’t it be as simple as “turn the power off” if things get out of control? Are we really going to cede control so completely?

    • Same reason no Wall Street bankers have been jailed after 2008. Same reason nobody has done anything about government surveillance. Systems of power are difficult to shift.
