In one of my writers' groups, we've been talking extensively about AI emergence. I wanted to share one thought about AI intelligence:
Many of the threats of AI originate from a lack of intelligence, not a surplus of it.
An example from my Buddhist mathematician friend Chris Robson: If you're walking down a street late at night and see a thuggish-looking person walking toward you, you would never think to yourself "Oh, I hope he's not intelligent." On the contrary, the more intelligent he is, the less likely he is to be a threat.
Similarly, we have stock-trading AIs right now. They aren't very intelligent, yet they could easily cause a global economic meltdown, and they'd never understand the ramifications.
We'll soon have autonomous military drones. They'll obey orders and kill people without ever making a judgement call.
So the earliest AI problems are more likely to stem from a lack of relevant intelligence than from a surplus of it.
On the flip side, Computer One by Warwick Collins is a good AI emergence novel that makes the reverse case: that preemptive aggression is a winning strategy, and any AI smart enough to see that it could be turned off will see people as a threat and preemptively eliminate us.
I believe the case that intelligence implies benevolence is discounted too quickly by AI risk advocates (though note that it does not guarantee benevolence). More here.
An article similar to the one I linked in the previous comment was posted yesterday. It can be found here. To quote:
If a computer is designed in such a way that:
(a) it has the motivation “maximize human pleasure”, but
(b) it thinks that this phrase could conceivably mean something as simplistic as “put all humans on an intravenous dopamine drip”, then
(c) what you have is NOT a computer that could ever be “all-powerful”.