Mark Zuckerberg wrote about how he plans to personally work on artificial intelligence in the next year. It’s a nice article that lays out the landscape of AI developments. But he ends with a statement that misrepresents the relevance of Moore’s Law to future AI development. He wrote (with my added bold for emphasis):
Since no one understands how general unsupervised learning actually works, we’re quite a ways off from building the general AIs you see in movies. Some people claim this is just a matter of getting more computing power — and that as Moore’s law continues and computing becomes cheaper we’ll naturally have AIs that surpass human intelligence. This is incorrect. We fundamentally do not understand how general learning works. This is an unsolved problem — maybe the most important problem of this century or even millennium. Until we solve this problem, throwing all the machine power in the world at it cannot create an AI that can do everything a person can.
I don’t believe anyone knowledgeable about AI argues that Moore’s Law is going to spontaneously create AI. I’ll give Mark the benefit of the doubt and assume he was trying to be succinct. But it’s important to understand exactly why Moore’s Law matters to AI.
We don’t understand how general unsupervised learning works, nor do we understand much about how human intelligence works. But we do have working examples in the form of human brains. What we do not have today is the computing hardware necessary to simulate a human brain. The best brain simulations, running on the largest supercomputing clusters, have managed to approximate about 1% of the brain at roughly 1/10,000th of normal cognitive speed. In other words, current computers are about 1,000,000 times too slow to simulate a human brain in real time.
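To make that factor explicit, here is a quick back-of-the-envelope check. It is only a sketch using the rounded figures above, and it assumes compute needs scale linearly with both the fraction of the brain simulated and the simulation speed:

```python
# Rough arithmetic behind the "1,000,000 times too slow" figure.
# Assumes the rounded numbers quoted above: ~1% of the brain simulated
# at ~1/10,000th of normal cognitive speed, and that compute needs scale
# linearly with both brain fraction and speed.

fraction_of_brain_simulated = 0.01   # ~1% of the brain
relative_speed = 1 / 10_000          # ~1/10,000th of real time

# Factor by which compute falls short of a full-brain, real-time simulation:
shortfall = (1 / fraction_of_brain_simulated) * (1 / relative_speed)
print(f"Processing shortfall: ~{shortfall:,.0f}x")  # ~1,000,000x
```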
The Wright Brothers succeeded in making the first controlled, powered, and sustained heavier-than-air human flight not because of some massive breakthrough in the principles of aerodynamics (which were well understood at the time), but because engines were growing more powerful, and powered flight first became feasible right around the time they were working on it. They made real breakthroughs in aircraft controls, but even if the Wright Brothers had never flown, someone else would have within a few years. It was progress in engine technology, specifically the power-to-weight ratio, that enabled powered flight around the turn of the century.
AI proponents who talk about Moore’s Law are not saying AI will spontaneously erupt from nowhere, but that increasing processing power will make AI possible, in the same way that more powerful engines made flight possible.
Those same AI proponents who believe in the significance of Moore’s Law can be divided into two camps. One group argues we’ll never fully understand intelligence, so our best hope of creating it is a brute-force biological simulation: recreate the structure of the human brain, then tweak it to make it better or faster. The second group argues we may invent our own techniques for implementing intelligence (just as we implemented an approach to flight that differs from birds’), but that the underlying computational needs will be roughly the same: we certainly won’t be able to do it while we’re a million times short on processing power.
Moore’s Law gives us an important cadence for progress in AI development. When naysayers argue AI can’t be created, they’re extrapolating from historical progress in AI, which is a bit like judging powered flight by its progress prior to 1850: pretty laughable. The rate of AI progress will increase as computer processing power approaches that of the human brain. When other groups argue we should already have AI, they’re being hopelessly optimistic about our ability to implement intelligence a million times more efficiently than the brain nature evolved.
The increasing speed of computer processors predicted by Moore’s Law, and the crossover point where processing power matches the computational demands of the human brain, tell us a great deal about when we’ll see advanced AI on par with human intelligence.
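As a rough illustration of what that cadence implies, here is a sketch only, assuming the ~1,000,000x shortfall estimated above and the classic Moore’s Law doubling period of roughly two years; both numbers are debatable, and raw compute parity is a prerequisite, not a guarantee, of human-level AI:

```python
import math

# Back-of-the-envelope timing sketch (illustrative assumptions only):
# - processing shortfall of ~1,000,000x, from the brain-simulation figures above
# - compute doubling roughly every 2 years, per the classic Moore's Law cadence

shortfall = 1_000_000
years_per_doubling = 2

doublings_needed = math.log2(shortfall)               # ~20 doublings
years_to_parity = doublings_needed * years_per_doubling

print(f"Doublings needed: ~{doublings_needed:.0f}")
print(f"Years until raw compute parity: ~{years_to_parity:.0f}")  # ~40 years
```

The point of the crossover isn’t that AI arrives on a particular date, but that it bounds when the attempt stops being a million-fold underpowered and starts being plausible at all.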