I was honored to be interviewed by the inimitable Nikola Danaylov (aka Socrates) for the Singularity 1 on 1 podcast.

In our 45-minute discussion, we covered the technological singularity, the role of open source and the hacker community in artificial intelligence, the risks of AI, mind-uploading and mind-connectivity, my influences and inspirations, and more. You can watch the video version below, or hop over to the Singularity 1 on 1 blog for audio and download options.

Great news: The Last Firewall audiobook is available now from Audible and iTunes. Go grab a copy!

Narrated by the talented Jennifer O’Donnell, and produced by Brick Shop Shop, this unabridged production is nearly ten hours long. I’m really happy with the result.

Sorry it’s a few months late. I promised it would be available in December, but we had delays due to snowstorms, illness, and a late decision to change a few voices. I’m glad we took the time to get it right, even if that meant it’s out later than expected.

On the topic of DRM, since I know I’ll get emails about it: I prefer DRM-free content, and anywhere I’m given the opportunity as an author to opt out, I do. Audible is great in that they allow the author and narrator to split royalties, giving indie authors a way to produce audiobooks without the huge up-front cost of narration and production. That’s why I work with them and probably will continue to do so. Unfortunately, they apply DRM, and since my agreement gives them exclusive distribution rights, there’s no way around it for me. I don’t think anybody likes DRM, but I’m glad Audible is indie-friendly. If you feel strongly about DRM, I encourage you to let Audible know via twitter (@audible_com) and email (customersupport@audible.com). Maybe with enough pressure, they’ll come around to what their customers want.

I hope you enjoy listening to The Last Firewall. This marks the first time the entire series is available on audio, so if you haven’t tried it yet, go get the whole series. (Plus, if you sign up for an Audible account and get one of my novels first, I get a small bonus. If you want to support your indie author, Audible is the way to do it!)

Daniel Suarez, author of the amazing Daemon, has a new book coming out today: Influx.

What if our civilization is more advanced than we know?

The New York Times bestselling author of Daemon (“the cyberthriller against which all others will be measured” –Publishers Weekly) imagines a world in which decades of technological advances have been suppressed in an effort to prevent disruptive change.

Are smart phones really humanity’s most significant innovation since the moon landings? Or can something else explain why the bold visions of the 20th century–fusion power, genetic enhancements, artificial intelligence, cures for common diseases, extended human life, and a host of other world-changing advances–have remained beyond our grasp? Why has the high-tech future that seemed imminent in the 1960s failed to arrive?

Perhaps it did arrive…but only for a select few.


Particle physicist Jon Grady is ecstatic when his team achieves what they’ve been working toward for years: a device that can reflect gravity. Their research will revolutionize the field of physics–the crowning achievement of a career. Grady expects widespread acclaim for his entire team. The Nobel. Instead, his lab is locked down by a shadowy organization whose mission is to prevent at all costs the social upheaval sudden technological advances bring. This Bureau of Technology Control uses the advanced technologies they have harvested over the decades to fulfill their mission.

I got my copy. Did you get yours? 🙂

This article from 2004, ten years ago, about Charles Stross’s then-upcoming Accelerando, featuring bits from Stross and Cory Doctorow along with Vernor Vinge and the lobster researchers, was so much fun to read: Is Science Fiction About to Go Blind?

A small excerpt:

Stross and Doctorow are sitting outside the Chequers Hotel bar in Newbury, a small city west of London. The Chequers has been overrun this May weekend by a distinct species of science-fiction fan, members of a group called Plokta (Press Lots of Keys to Abort). The men are mostly stout and bearded, the women pedestrian in appearance but certainly not in their interests. During one session Stross mentions an early model of the Amstrad personal computer, and the crowd practically cheers. Stross is the guest of honor, and he and Doctorow have just emerged from a panel discussion on his work.

The two have met just four times, but they have the comfortable rapport of long-distance friends that is possible only in the e-mail age. (They have collaborated on several critically acclaimed short stories and novellas, one of them before they ever met in person.) Stross, 39, a native of Yorkshire who lives in Edinburgh, looks like a cross between a Shaolin monk and a video-store clerk—bearded, head shaved except for a ponytail, and dressed in black, including a T-shirt printed with lines of green Matrix code. Doctorow, a 33-year-old Canadian, looks more the hip young writer, with a buzz cut, a worn leather jacket and stylish spectacles, yet he’s also still very much the geek, G4 laptop always at the ready. 

They have loosely parallel backgrounds: Stross worked throughout the 1990s as a software developer for two U.K. dot-coms, then switched to journalism and began writing a Linux column for Computer Shopper. Doctorow, who recently moved to London, dropped out of college at 21 to take his first programming job, then went on to run a dot-com and eventually co-found the technology blog boingboing.net. 

Although both have been out of programming for a few years, it continues to influence—even infect—their thinking. In the Chequers, Doctorow mentions the original title for one of the novels he’s working on, a story about a spam filter that becomes artificially intelligent and tries to eat the universe. “I was thinking of calling it /usr/bin/god.” 

“That’s great!” Stross remarks.

Ramez Naam, author of Nexus and Crux (two books I enjoyed and recommend), has recently put together a few guest posts for Charlie Stross (another author I love). The posts are The Singularity Is Further Than It Appears and Why AIs Won’t Ascend in the Blink of an Eye.

They’re both excellent posts, and I’d recommend reading them in full before continuing here.

I’d like to offer a slight rebuttal and explain why I think the singularity is still closer than it appears.

But first, I want to say that I very much respect Ramez, his ideas and writing. I don’t think he’s wrong and I’m right. I think the question of the singularity is a bit more like Drake’s Equation about intelligent extraterrestrial life: a series of probabilities, the values of which are not known precisely enough to determine the “correct” output value with strong confidence. I simply want to provide a different set of values for consideration than the ones that Ramez has chosen.

First, let’s talk about definitions. As Ramez describes in his first article, there are two versions of singularity often talked about.

The hard takeoff is one in which an AI rapidly creates newer, more intelligent versions of itself. Within minutes, days, or weeks, the AI has progressed from a level 1 AI to a level 20 grand-wizard AI, far beyond human intellect and anything we can comprehend. Ramez doesn’t think this will happen, for a variety of reasons, one of which is the exponential difficulty involved in creating successively more complex algorithms (the argument he lays out in his second post).

I agree. I don’t see a hard takeoff. In addition to the reasons Ramez stated, I also believe it takes so long to test and qualify candidates for improvement that successive iteration will be slow.

Let’s imagine the first AI is created and runs on an infrastructure of 10,000 computers. Let’s further assume the AI is composed of neural networks and other similar algorithms that require training on large pools of data. The AI will want to test many ideas for improvements, each requiring training. The training will be followed by multiple rounds of successively more comprehensive testing: first the AI needs to see if the algorithm appears to improve a select area of intelligence, but then it will want to run regression tests to ensure no other aspect of its intelligence or capabilities is adversely impacted. If the AI wants to test 1,000 ideas for improvements, and each idea requires 10 hours of training, 1 hour of assessment, and an average of 1 hour of regression testing, it would take about 1.4 years to complete a single round of improvements. Parallelism is the alternative, but remember that this first AI is likely to be a behemoth, requiring 10,000 computers to run. It’s not possible to get that much parallelism.
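Here’s a rough back-of-the-envelope version of that serial bottleneck. The candidate count and per-step hours are just the illustrative numbers above, not measurements:

```python
# Back-of-the-envelope: how long one serial round of self-improvement takes,
# using the illustrative numbers above (1,000 candidate improvements,
# 10 hours training + 1 hour assessment + 1 hour regression testing each).
CANDIDATES = 1_000
HOURS_PER_CANDIDATE = 10 + 1 + 1

total_hours = CANDIDATES * HOURS_PER_CANDIDATE      # 12,000 hours
years = total_hours / (24 * 365)
print(f"{years:.1f} years per improvement round")   # ~1.4 years, run serially
```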

The soft takeoff is one in which an artificial general intelligence (AGI) is created and gradually improved. As Ramez points out, that first AI might be on the order of human intellect, but it’s not smarter than the accumulated intelligence of all the humans that created it: many tens of thousands of scientists will collaborate to build the first AGI.

This is where we start to diverge. Consider a simple domain like chess-playing computers. Since 2005, chess software running on commercially available hardware has been able to outplay even the strongest human chess players. I don’t have data, but I suspect the number of very strong human chess players is somewhere in the hundreds or low thousands. However, the number of computers capable of running the very best chess-playing software is in the millions or hundreds of millions. The aggregate chess-playing capacity of computers is far greater than that of humans, because the best chess-playing program can be propagated everywhere.

So too, AGI will be propagated everywhere. But I just argued that the first AI will require tens of thousands of computers, right? Yes, except that thanks to Moore’s Law (the observation that computing power tends to double every 18 months), the same AI that required 10,000 computers will need a mere 100 computers ten years later, and just a single computer another ten years after that. Or an individual AGI could run up to 10,000 times faster. That speed-up alone means something different when it comes to intelligence: a single being with 10,000 times the experience, learning, and practice that a human has.
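A quick sketch of that projection, assuming the rough doubling every 18 months holds:

```python
# Projecting hardware needs under Moore's Law: if computing power doubles
# every ~18 months, a fixed workload needs proportionally fewer machines.
INITIAL_COMPUTERS = 10_000
DOUBLING_MONTHS = 18

def computers_needed(years: float) -> float:
    doublings = years * 12 / DOUBLING_MONTHS
    return INITIAL_COMPUTERS / 2 ** doublings

print(round(computers_needed(10)))  # ~100 computers after ten years
print(round(computers_needed(20)))  # ~1 computer after twenty years
```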

Even Ramez agrees that it will be feasible to have destructive human brain uploads approximating human intelligence around 2040: “Do the math, and it appears that a super-computer capable of simulating an entire human brain and do so as fast as a human brain should be on the market by roughly 2035 – 2040. And of course, from that point on, speedups in computing should speed up the simulation of the brain, allowing it to run faster than a biological human’s.”

This is the soft takeoff: from a single AGI at some point in time to an entire civilization of that AGI twenty years later, all running at faster-than-human speeds. A race consisting of an essentially alien intelligence, cohabiting the planet with us. Even if they don’t experience an intelligence explosion as Vernor Vinge described, the combination of fast speeds, aggregate intelligence, and inherently different motivations will create an unknowable future that is likely out of our control. And that’s very much a singularity.

But Ramez questions whether we can even achieve an AGI comparable to a human in the first place. There’s this pesky question of sentience and consciousness. Please go read Ramez’s first article in full; I don’t want you to think I’m summarizing everything he said here. But he basically cites three points:

1) No one’s really sure how to do it. AI theories have been around for decades, but none of them has led to anything that resembles sentience.

This is a difficulty. One analogy that comes to mind is the history of aviation. For nearly a hundred years prior to the Wright Brothers, heavier-than-air flight was being studied, with many different gliders created and flown. It was the innovation of powered engines that made heavier-than-air flight practically possible and led to rapid innovation. Perhaps we just don’t have the equivalent yet in AI. We’ve got people learning how to make airfoils, control surfaces, and airplane structures, and we’re just waiting for the engine to show up.

We also know that nature evolved sentience without any theory of how to do it. Having a proof point is powerful motivation.

2) There’s a huge lack of incentive. Would you like a self-driving car that has its own opinions? That might someday decide it doesn’t feel like driving you where you want to go?

There’s no lack of incentive. As James Barrat detailed in Our Final Invention, there are billions of dollars being poured into building AGI, both in high-profile projects like the US BRAIN Initiative and Europe’s Human Brain Project, and in countless smaller AI companies and research projects.

There’s plenty of human incentive, too. How many people were inspired by Star Trek’s Data? At a recent conference, I asked attendees who would want Data as a friend, and more than half the audience’s hands went up. Among the elderly, loneliness is a very real issue that could be helped with AGI companionship, and many people might choose an artificial psychologist for reasons of confidence, cost, and convenience. All of these require at least the semblance of opinions.

More than that, we know we want initiative. If we have a self-driving car, we expect that it will use that initiative to find faster routes to destinations, possibly go around dangerous neighborhoods, and take necessary measures to avoid an accident. Indeed, even Google Maps has an “opinion” of the right way to get somewhere that often differs from my own. It’s usually right.

If we have an autonomous customer service agent, we’ll want it to flexibly meet business goals including pleasing the customer while controlling cost. All of these require something like opinions and sentience: goals, motivation to meet those goals, and mechanisms to flexibly meet those goals.

3) There are ethical issues. If we design an AI that truly is sentient, even at slightly less than human intelligence we’ll suddenly be faced with very real ethical issues. Can we turn it off? 

I absolutely agree that we’ve got ethical issues with AGI, but that hasn’t stopped us from creating other technologies (nuclear bombs, bio-weapons, the internal combustion engine, the transportation system) that also have ethical issues.

In sum, Ramez brings up great points, and he may very well be correct: the singularity might be a hundred years off instead of twenty or thirty.

However, the discussion around the singularity is also one about risk. Having artificial general intelligence running around, potentially in control of our computing infrastructure, may be risky. What happens if the AI has different motivations than us? What if it decides we’d be happier and less destructive if we’re all drugged? What if it just crashes and accidentally shuts down the entire electrical grid? (Read James Barrat’s Our Final Invention for more about the risks of AI.)

Ramez wrote The Infinite Resource: The Power of Ideas on a Finite Planet, a wonderful and optimistic book about how science and technology are solving many resource problems around the world. I think it’s a powerful book because it gives us hope and proof points that we can solve the problems facing us.

Unfortunately, I think the argument that the singularity is far off is different and problematic, because it denies the possibility of a problem facing us. Instead of encouraging us to use technology to address the issues that could arise with the singularity, the argument concludes that the singularity is either unlikely or simply a long time away. With that mindset, we’re less likely as a society to examine AI progress and take steps to reduce the risks of AGI.

On the other hand, if we can agree that the singularity is a possibility, even just a modest possibility, then we may spur more discussion and investment into the safety and ethics of AGI.

Here’s a scary paragraph from a longer article about Google’s acquisition of AI company DeepMind:

One of DeepMind’s cofounders, Demis Hassabis, possesses an impressive resume packed with prestigious titles, including software developer, neuroscientist, and teenage chess prodigy among the bullet points. But as the Economist suggested, one of Hassabis’s better-known contributions to society might be a video game; a niche but adored 2006 simulator called Evil Genius, in which you play as a malevolent mastermind hell-bent on world domination.

That sounds just like the plot of Daniel Suarez’s Daemon:

When a designer of computer games dies, he leaves behind a program that unravels the Internet’s interconnected world. It corrupts, kills, and runs independent of human control. It’s up to Detective Peter Sebeck to wrest the world from the malevolent virtual enemy before its ultimate purpose is realized: to dismantle society and bring about a new world order.

I’m a big fan of Charles Stross’s science fiction. He’s absolutely brilliant (listen to some of his talks on YouTube if you get the chance, or go read his blog posts), and it always comes across in his fiction.

On one level, Neptune’s Brood is a classic space opera novel involving interstellar space travel, colonization, and space battles.

On another level, Neptune’s Brood is a careful study of what you get when you rigorously think about how economic principles, human uploading, transhumanism, the limitations of light speed, and the cost of moving matter apply to developing an interstellar civilization.

In other words, it’s the type of very smart fiction you expect from Charles Stross.

The occasional pitfall of uber-smart fiction is that it can sometimes be a challenge to read. If the ideas come too fast or require too much effort to grok, the reader ends up working so hard to understand things that the reading loses its fun. Stross manages to avoid that pitfall here. It’s an enjoyable, straightforward read underlaid with a foundation of brilliance.

You can get Neptune’s Brood on Amazon, and I’m sure everywhere else as well.

I spent the last few days in bed with the flu. In addition to missing the company of visiting family, I also missed writing time.

During those couple of days, my friend Tac Anderson asked on Facebook about people’s goals for 2014, as opposed to resolutions.

That got me thinking. What I’d like to achieve in 2014 includes completing, editing, and publishing my next adult novel, editing and publishing my children’s novel, and rewriting Avogadro Corp. (Avogadro Corp is a great story, but it was my first written work, and it’s got some rough areas that could benefit from time and attention.)

One way or another, I will get those books done, but I’d prefer to do it with less stress than it’s taken to get some of my past books out. I balance a day job, a family, and writing, and although each book is a joy to write and publish, it’s also exhausting to do on top of an already full life.

So my goal for 2014 is to get my day job commitment down from 80% time to 60%. (Hi boss!) To do that, I’ll either need to bring in more book income, find alternate sources of income, reduce expenses, or some combination of all of the above.

I’ve been investigating foreign rights and traditional publishers, and I’ll think more about Kickstarter campaigns. I’m open to ideas if you’ve got any.

What are your goals for 2014, and how do you hope to achieve them?

I’m reading Our Final Invention by James Barrat right now, about the dangers of artificial intelligence. I just got to a chapter in which he discusses how any reasonably complex artificial general intelligence (AGI) is going to want to control its own resources: e.g., if it has a goal, even a simple goal like playing chess, it will be able to achieve its goal better with more computing resources, and won’t be able to achieve its goal at all if it’s shut off. (Similar themes exist in all of my novels.)

This made me snap back to a conversation I had last week at my day job. I’m a web developer, and my current project, without giving too much away, is a RESTful web service that runs workflows composed of other RESTful web services.

We’re currently automating some of our operational tasks. For example, when our code passes unit tests, it’s automatically deployed. We’d like to expand on that so that after deployment, it will run integration tests, and if those pass, deploy up to the next stack, and then run performance tests, and so on.

Although we’re running on a cloud provider, it’s not AWS, and they don’t support autoscaling, so another automation task we need is to roll our own scaling solution.

Then we realized that running tests, deployments, and scaling all require calling RESTful JSON APIs, and that’s exactly what our service is designed to do. So the logical solution is that our software will test itself, deploy itself, and autoscale itself.
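To make that concrete, here’s a minimal, hypothetical sketch of what a self-managing workflow step might look like. The endpoints, payloads, and field names are invented for illustration; they aren’t our actual API:

```python
# Hypothetical sketch: a workflow that calls RESTful steps to test, deploy,
# and scale the very service that runs it. Endpoints and payloads are made up.
import requests

BASE = "https://workflow.example.com/api"  # placeholder URL

def run_step(path: str, payload: dict) -> dict:
    """Invoke one RESTful workflow step and return its JSON result."""
    resp = requests.post(f"{BASE}/{path}", json=payload, timeout=300)
    resp.raise_for_status()
    return resp.json()

# The same engine that runs customer workflows runs its own lifecycle:
if run_step("integration-tests", {"stack": "staging"}).get("passed"):
    run_step("deploy", {"stack": "production"})
    if run_step("performance-tests", {"stack": "production"}).get("passed"):
        run_step("scale", {"stack": "production", "instances": 4})
```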

That’s an awful lot like the kind of resource control that James Barrat was writing about.