Thanks to Elon Musk’s fame and his outspoken concerns about AI, it seems like everyone’s talking about its risks.
One difficulty I’ve noticed is a lack of agreement on exactly what risk we’re talking about. I’ve had several discussions in just the last few days, both at the Defrag conference in Colorado and online.
The risk naysayers tend to say, “I don’t believe there is risk due to AI.” But probe further, and what they often mean is, “I don’t believe there is existential risk from a Skynet scenario caused by a super-intelligence created from existing technology.” The second statement is far narrower, so let’s dig into its components.
Existential risk is defined by Nick Bostrom as a risk “where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.” Essentially, we’re talking about either the extinction of humankind, or something close to it. However, most of us would agree that there are very bad outcomes that are nowhere near an existential risk. For example, about 4% of the global population died in WWII. That’s not an existential risk, but it’s still horrific by anybody’s standards.
Runaway AI, accelerating super-intelligence, and hard takeoff are all terms for the idea that once an artificial intelligence is created, it will recursively improve its own intelligence, becoming vastly smarter and more powerful in a matter of hours, days, or months. We have no idea whether this will happen (I don’t think it’s likely), but the absence of a hard takeoff doesn’t mean an AI would be stagnant or lack power compared to people. There are many ways that even a modest AI, with creativity, motivation, and drive merely equivalent to a human’s, could have far greater effect than a human could:
- Humans can type 50 words a minute. AI could communicate with tens of thousands of computers simultaneously.
- Humans can drive one car at a time. AI could fly all the world’s airplanes simultaneously.
- Humans can trip one circuit breaker. AI could trip all the world’s circuit breakers.
- Humans can reproduce a handful of times over the course of a lifetime. AI could reproduce millions of times over the course of a day.
- Humans evolve over the course of tens of thousands of years or more. Computers become 50% more powerful each year (see the quick compounding sketch below).
So for many reasons, even without a hard takeoff, AI actions and improvements could still occur far faster, and with far wider effect, than we humans are adapted to handle.
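To put that last comparison in rough numbers, here’s a quick back-of-the-envelope sketch. It assumes the 50%-per-year improvement figure from the list above and an arbitrary 1,000x milestone that I picked purely for illustration; nothing here is rigorous.

```python
# Rough sketch: how long does 50%-per-year improvement take to compound
# to a 1,000x gain? Both numbers are illustrative assumptions.
growth_rate = 1.5      # 50% more powerful each year
target = 1_000         # arbitrary 1,000x milestone

years = 0
capability = 1.0
while capability < target:
    capability *= growth_rate
    years += 1

print(f"~{years} years to reach {capability:,.0f}x current capability")
# Prints roughly 18 years (1.5**17 ≈ 985, 1.5**18 ≈ 1478).
```

Under those assumptions, a thousandfold jump takes less than two decades, a blink compared to the evolutionary timescales on the human side of the comparison.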
Skynet scenario, Terminator scenario, and killer robots are all terms for the idea that an AI could choose to wage open warfare on humans using robots. That’s just one of many possible types of risk. Other deliberate ways an AI could harm us include manipulating us by controlling the information we see, killing off particular people who pose threats, or extorting us into delivering services it wants. The idea of manipulation is important, because while death is terrible, the loss of free will is pretty bad too.
Frankly, most of those seem silly or unlikely compared to unintentional harm that AI could cause: the electrical grid could go down, transportation could stop working, our home climate control could stop functioning, or a virus could crash all computers. If these don’t seem very threatening, consider…
- What if one winter, for whatever reason, homes wouldn’t heat? How many people would freeze to death?
- Consider that Google’s self-driving car doesn’t have any manual controls. It’s the AI or it’s no-go. More vehicles will move in this direction, especially all forms of bulk delivery. If all transportation stopped, how would people in cities get food when their 3-day supply runs out?
- How long can those city dwellers last without fresh water if pumping stations are under computer control and they stop?
Existing technology: Some will argue that because we don’t have strong AI (i.e., human-level intelligence or better) now, there’s no point in even talking about risk. But this sounds like saying “Let’s not build any asteroid defenses until we clearly see an asteroid headed for Earth.” By then it’s far too late. Similarly, once a strong AI is here, it’s too late to start talking about precautions.
In conclusion, if you have a conversation about AI risks, be clear about what you’re talking about. Frankly, all of humanity being killed by robots under the control of a super-intelligent AI doesn’t even seem worth discussing compared to the more likely risks. A better conversation might start with a question like this:
Are we all talking about the same thing when we talk about the risks of AI?