Ramez Naam, author of Nexus and Crux (two books I enjoyed and recommend), has recently put together a few guest posts for Charlie Stross (another author I love). The posts are The Singularity Is Further Than It Appears and Why AIs Won’t Ascend in the Blink of an Eye.
They’re both excellent posts, and I’d recommend reading them in full before continuing here.
I’d like to offer a slight rebuttal and explain why I think the singularity is still closer than it appears.
But first, I want to say that I very much respect Ramez, his ideas, and his writing. I don’t think it’s a matter of him being wrong and me being right. I think the question of the singularity is a bit like the Drake Equation for intelligent extraterrestrial life: a series of probabilities, the values of which are not known precisely enough to determine the “correct” output value with strong confidence. I simply want to provide a different set of values for consideration than the ones Ramez has chosen.
First, let’s talk about definitions. As Ramez describes in his first article, there are two versions of singularity often talked about.
The hard takeoff is one in which an AI rapidly creates newer, more intelligent versions of itself. Within minutes, days, or weeks, the AI has progressed from a level 1 AI to a level 20 grand-wizard AI, far beyond human intellect and anything we can comprehend. Ramez doesn’t think this will happen, for a variety of reasons, one of which is the exponential difficulty involved in creating successively more complex algorithms (the argument he lays out in his second post).
I agree. I don’t see a hard takeoff. In addition to the reasons Ramez stated, I also believe it takes so long to test and qualify candidates for improvement that successive iteration will be slow.
Let’s imagine the first AI is created and runs on an infrastructure of 10,000 computers. Let’s further assume the AI is composed of neural networks and other similar algorithms that require training on large pools of data. The AI will want to test many ideas for improvements, each requiring training. The training will be followed by multiple rounds of successively more comprehensive testing: first the AI needs to see if the algorithm appears to improve a select area of intelligence, but then it will want to run regression tests to ensure no other aspect of its intelligence or capabilities is adversely impacted. If the AI wants to test 1,000 ideas for improvements, and each idea requires 10 hours of training, 1 hour of assessment, and an average of 1 hour of regression testing, that’s 12,000 hours of serial work, or about 1.4 years to complete a single round of improvements. Parallelism is the alternative, but remember that this first AI is likely to be a behemoth, requiring 10,000 computers just to run. It’s not possible to get that much parallelism.
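Here’s a quick back-of-the-envelope sketch of that arithmetic. The idea count and per-step hours are just the illustrative assumptions above, not measurements of anything real:

```python
# Rough estimate of one serial round of self-improvement, using the
# illustrative numbers above (assumptions, not measured data).
ideas = 1000              # candidate improvements to evaluate
hours_training = 10       # training time per candidate
hours_assessment = 1      # initial assessment per candidate
hours_regression = 1      # average regression testing per candidate

hours_per_idea = hours_training + hours_assessment + hours_regression
total_hours = ideas * hours_per_idea   # 12,000 hours

print(total_hours / 24)          # 500.0 days
print(total_hours / (24 * 365))  # ~1.37 years
```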
The soft takeoff is one in which an artificial general intelligence (AGI) is created and gradually improved. As Ramez points out, that first AI might be on the order of human intellect, but it’s not smarter than the accumulated intelligence of all the humans that created it: many tens of thousands of scientists will collaborate to build the first AGI.
This is where we start to diverge. Consider a simple domain like computer chess. Since 2005, chess software running on commercially available hardware has been able to outplay even the strongest human chess players. I don’t have data, but I suspect the number of very strong human chess players is somewhere in the hundreds or low thousands. However, the number of computers capable of running the very best chess-playing software is in the millions or hundreds of millions. The aggregate chess-playing capacity of computers is far greater than that of humans, because the best chess-playing program can be propagated everywhere.
So too, AGI will be propagated everywhere. But I just argued that the first AGI will require ten thousand computers, right? Yes, except that thanks to Moore’s Law (the observation that computing power tends to double every 18 months), the same AI that required 10,000 computers will need a mere 100 computers ten years later, and just a single computer another ten years after that. Alternatively, an individual AGI kept on the same 10,000 computers could run up to 10,000 times faster. That speed-up alone means something different when it comes to intelligence: a single being with 10,000 times the experience, learning, and practice that a human has.
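A minimal sketch of that projection, assuming a clean 18-month doubling and the 10,000-computer starting point from above (both are illustrative assumptions, not forecasts):

```python
# Rough Moore's Law projection: how many computers the same AGI workload
# needs after N years, assuming computing power doubles every 18 months.
# The 10,000-computer starting point is the illustrative assumption above.
computers_today = 10000
doubling_period_years = 1.5

def computers_needed(years):
    speedup = 2 ** (years / doubling_period_years)
    return computers_today / speedup

print(round(computers_needed(10)))  # ~98, i.e. roughly 100 computers
print(round(computers_needed(20)))  # ~1, i.e. a single computer
```

Run the other way, the same arithmetic says that after twenty years an AGI still spread across 10,000 machines could run roughly 10,000 times faster than the original.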
Even Ramez agrees that it will be feasible to have destructive human brain uploads approximating human intelligence around 2040: “Do the math, and it appears that a super-computer capable of simulating an entire human brain and do so as fast as a human brain should be on the market by roughly 2035 – 2040. And of course, from that point on, speedups in computing should speed up the simulation of the brain, allowing it to run faster than a biological human’s.”
This is the soft takeoff: from a single AGI at some point in time to an entire civilization of that AGI twenty years later, all running at faster-than-human speeds. A race consisting of an essentially alien intelligence, cohabiting the planet with us. Even if they don’t experience an intelligence explosion as Vernor Vinge described, the combination of fast speeds, aggregate intelligence, and inherently different motivations will create an unknowable future that is likely out of our control. And that’s very much a singularity.
But Ramez questions whether we can even achieve an AGI comparable to a human in the first place. There’s this pesky question of sentience and consciousness. Please go read Ramez’s first article in full; I don’t want you to think I’m summarizing everything he said here. He basically cites three points:
1) No one’s really sure how to do it. AI theories have been around for decades, but none of them has led to anything that resembles sentience.
This is a difficulty. One analogy that comes to mind is the history of aviation. For nearly a hundred years prior to the Wright Brothers, heavier-than-air flight was studied, and many different gliders were built and flown. It was the innovation of powered engines that made heavier-than-air flight practical and led to rapid innovation. Perhaps we just don’t have the equivalent in AI yet. We’ve got people learning how to make airfoils, control surfaces, and airplane structures, and we’re just waiting for the engine to show up.
We also know that nature evolved sentience without any theory of how to do it. Having a proof point is powerful motivation.
2) There’s a huge lack of incentive. Would you like a self-driving car that has its own opinions? That might someday decide it doesn’t feel like driving you where you want to go?
There’s no lack of incentive. As James Barrat detailed in Our Final Invention, billions of dollars are being poured into building AGI, both in high-profile efforts like the US BRAIN Initiative and Europe’s Human Brain Project and in countless smaller AI companies and research projects.
There’s plenty of human incentive, too. How many people were inspired by Star Trek’s Data? At a recent conference, I asked attendees who would want Data as a friend, and more than half the audience’s hands went up. Among the elderly, loneliness is a very real issue that could be helped with AGI companionship, and many people might choose an artificial psychologist for reasons of confidentiality, cost, and convenience. All of these require at least the semblance of opinions.
More than that, we know we want initiative. If we have a self-driving car, we expect that it will use that initiative to find faster routes to destinations, possibly go around dangerous neighborhoods, and take necessary measures to avoid an accident. Indeed, even Google Maps has an “opinion” of the right way to get somewhere that often differs from my own. It’s usually right.
If we have an autonomous customer service agent, we’ll want it to flexibly meet business goals including pleasing the customer while controlling cost. All of these require something like opinions and sentience: goals, motivation to meet those goals, and mechanisms to flexibly meet those goals.
3) There are ethical issues. If we design an AI that truly is sentient, even at slightly less than human intelligence we’ll suddenly be faced with very real ethical issues. Can we turn it off?
I absolutely agree that we’ve got ethical issues with AGI, but that hasn’t stopped us from creating other technologies (nuclear bombs, bio-weapons, the internal combustion engine, the transportation system) that also raise ethical issues.
In sum, Ramez brings up great points, and he may very well be correct: the singularity might be a hundred years off instead of twenty or thirty.
However, the discussion around the singularity is also one about risk. Having artificial general intelligence running around, potentially in control of our computing infrastructure, may be risky. What happens if the AI has different motivations than us? What if it decides we’d be happier and less destructive if we’re all drugged? What if it just crashes and accidentally shuts down the entire electrical grid? (Read James Barrat’s Our Final Invention for more about the risks of AI.)
Ramez wrote Infinite Resource: The Power of Ideas on a Finite Planet, a wonderful and optimistic book about how science and technology are solving many resource problems around the world. I think it’s a powerful book because it gives us hope and proof points that we can solve the problems facing us.
Unfortunately, I think the argument that the singularity is far off is different and problematic, because it denies that there is a problem to face at all. Instead of encouraging us to use technology to address the issues that could arise with the singularity, the argument concludes that the singularity is either unlikely or simply a long time away. With that mindset, we’re less likely as a society to examine AI progress and to take steps to reduce the risks of AGI.
On the other hand, if we can agree that the singularity is a possibility, even just a modest possibility, then we may spur more discussion and investment into the safety and ethics of AGI.