I know I’ve gone dark over the last few months. I’m sure that’s left many people wondering about the status of The Last Firewall, my third Singularity novel.

I’m delighted to announce that The Last Firewall will be available this summer. I’m targeting an August launch. 
So why the long wait?

As you may know, my previous novels are all self-published. They’ve sold well, but I often wondered how many more readers might find my books if I went with a traditional publisher.
In addition, many folks have asked “When will we see the movie version?” about Avogadro Corp and A.I. Apocalypse, but very few self-published novels have made that leap. Hollywood often judges potential movie options by the interest publishers take in novels. That was another reason why I was interested in traditional publication.
I started working with a literary agent who saw great promise in The Last Firewall, but wanted substantial revisions. I subsequently worked on The Last Firewall for another eight months until it gleamed brighter than the titanium shell of a robot.
That’s where I’ve been for a while, and I think the results are great: I’m convinced it reads better than anything I’ve done before, and a few other folks have read the manuscript and agree.
However, traditional publishing is a tough nut to crack, and if I persist with that path, The Last Firewall will continue to languish on my computer when it really wants to be read.
So I’m self-publishing The Last Firewall, as I have my other novels. It’s worked great in the past, and I’m happy to be going this route again. I’m choosing cover images and working on cover design right now, even as the manuscript undergoes a final round of proofreading.

I think it’s going to be awesome, and can’t wait to get it in your hands. If you haven’t done so, sign up for the mailing list and I’ll let you know when it’s available. 

ROS, the open source robotics OS, is accelerating development in robotics because scientists don’t have to reinvent everything from scratch:

As an example of how ROS works, imagine you’re building an app. That app is useless without hardware and software – that is, your computer and operating system. Before ROS, engineers in different labs had to build that hardware and software specifically for every robotic project. As a result, the robotic app-making process was incredibly slow – and done in a vacuum.  

Now ROS, along with complementary robot prototypes, provides that supporting hardware and software. Robot researchers can shortcut straight to the app building. And since other researchers around the world are using the same tools, they can easily share their developments from one project to another. 
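To make the quoted "app" analogy concrete, here's a minimal sketch of what a robot "app" looks like as a ROS node, written with the rospy Python client library; the topic name and message contents are purely illustrative.

    # Minimal ROS publisher node: the "app" sits on top of ROS's shared plumbing.
    # Topic name and message text are illustrative.
    import rospy
    from std_msgs.msg import String

    def talker():
        pub = rospy.Publisher('chatter', String, queue_size=10)  # advertise a topic
        rospy.init_node('talker', anonymous=True)                # register with the ROS master
        rate = rospy.Rate(1)                                     # publish at 1 Hz
        while not rospy.is_shutdown():
            pub.publish(String(data='hello from my robot app'))
            rate.sleep()

    if __name__ == '__main__':
        try:
            talker()
        except rospy.ROSInterruptException:
            pass

Any other node on the robot, written by anyone, anywhere, can subscribe to that topic without knowing how this node is implemented, which is exactly the sharing effect described above.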

I wrote a similar article last year about how we should expect to see an acceleration in both AI and robotics due to this effect. The remaining barrier to participation is cost:

The reason we haven’t seen even greater amateur participation in robotics and AI up until this point has been cost: whether it’s the $400,000 to buy a PR2, or the $3 million to replicate IBM’s Watson. This too is about to change.

It’s about to change because the cost of electronics declines quickly: by 2025, the same processing capacity it takes to run Watson will be available to us in a general-purpose personal computer. Robotics hardware might not decrease in cost as quickly as pure silicon does, but it will surely come down. When it hits the price of a car ($25,000), I’m sure we’ll see hobbyists buying robots of their own.

PR2 fetches a beer from the fridge

This is a great post and video about two robots making pancakes together. What’s amazing is that it’s not all preprogrammed. They’re figuring this stuff out on the fly:

James uses the Web for problem solving, just like we would.  To retrieve the correct bottle of pancake mix from the fridge, it looks up a picture on the Web and then goes online to find the cooking instructions. 

Rosie makes use of gravity compensation when pouring the batter, with the angle and the time for pouring the pancake mix adjusted depending on the weight of the mix.  The manipulation of the spatula comes into play when Rosie’s initially inaccurate depth estimation is resolved by sensors detecting contact with the pancake maker.

1. Diagram from Google’s patent application for floating data centers.

The technology in Avogadro Corp and A.I. Apocalypse is frequently polarizing: readers either love it or believe it’s utterly implausible.

The intention is for the portrayal to be as realistic as possible. Anything I write about either exists today as a product, is in active research, or is extrapolated from current trends. The process I use to extrapolate tech trends is described in an article I wrote called How to Predict the Future. I’ve also drawn upon my twenty years as a software developer, my work on social media strategy, and a bit of experience in writing and using recommendation engines, including competing for the Netflix Prize.

Let’s examine a few specific ideas manifested in the books and see where those ideas originated.

    • Floating Data Centers: (Status: Research) Google filed a patent in 2007 for a floating data center based on a barge. The patent application was discovered and shared on Slashdot in 2008. As with many companies, filing a patent application doesn’t mean that Google will deploy ocean-based data centers any time soon; it simply means the idea is feasible, and Google would like to own the right to do so in the future, if it becomes viable. And of course, there is the very real problem of piracy.
Pelamis Wave converter in action.
    • Portland Wave Converter: (Status: Real) In Avogadro Corp I describe the Portland Wave Converter as a machine that converts wave motion into electrical energy. This was also described as part of the Google patent application for a floating data center. (See diagram 1.) But Pelamis Wave Power is an existing commercialization of this technology. You can buy and use wave power converters today. Pelamis did a full-scale test in 2004, installed the first multi-machine farm in 2008 off the coast of Portugal, is doing testing off the coast of Scotland, and is actively working on installing up to 170MW in Scottish waters.
Pionen Data Center. (Src: Pingdom)
    • Underground Data Center: (Status: Real) The Swedish data center described as being in a converted underground bunker is in fact the Pionen data center owned by Bahnhof. Originally a nuclear bunker, it sits nearly a hundred feet underground and is capable of withstanding a nuclear attack. It has backup power provided by submarine engines, triple-redundant backbone connections to the Internet, and fifteen full-time employees on site.
    • Netflix Prize: (Status: Real) A real competition that took place from 2006 through 2009, the Netflix Prize was a one-million-dollar contest to develop a recommendation algorithm better than Netflix’s original Cinematch. Thousands of people participated, and hundreds of teams beat Netflix’s algorithm, but only one team was first to better it by 10%, the threshold required for the payout. I entered the competition and realized within a few weeks that there were many other ways recommendation engine technology could be put to use, including a never-before-done approach that increased the helpfulness of customer support content by 25%.
    • Email-to-Web Bridge: (Status: Real) At the time I wrote Avogadro Corp, IBM had a technical paper describing how they built an email-to-web bridge as a research experiment. Five years later, I can’t seem to find the article anymore, but I did find some working examples of services that do the same thing. In fact, www4mail appears to have been working since 1998.
    • Decision-Making via Email: (Status: Real) From 2003 to 2011, I worked in a position where everyone I interacted with in my corporation was physically and organizationally remote. We interacted daily via email and weekly via phone meetings. Many decisions were communicated by email. They might later be discussed in a meeting, but if a communication came down from a manager, we’d just have to work within the constraints of that decision. Through social engineering, it’s possible to make those emails even more effective. For example, employee A, a manager, is about to go on vacation. ELOPe sends an email from employee A to employee B, explaining a decision that was supposedly made and asking employee B to handle any questions about it. Everyone else receives an email announcing the decision and directing questions to employee B. The combination of an official email announcement plus a very real human contact to act as point person becomes very persuasive. On the other hand, some Googlers have read Avogadro Corp, and they’ve said the culture at Google is very different: they are centrally located and therefore do much more in face-to-face meetings.
Foster-Miller Armed Robot (Src: Wikipedia)
  • iRobot military robots: (Status: Real) iRobot has both military bots and maritime bots, although what I envisioned for the deck robots on the floating data centers is closer to the Foster-Miller Talon, an armed, tank-style robot. The Gavia is probably the closest equivalent to the underwater patrolling robots. It accepts modular payloads, and while it’s not clear if that could include an offensive capability, it seems possible.
  • Language optimization based on recommendation engines: (Status: Made Up) Unfortunately, not real. It’s not impossible, but it’s also not a straightforward extrapolation. There are hard problems to solve. Jacob Perkins, CTO of Weotta, wrote an excellent blog post analyzing ELOPe’s language optimization skills. He divides the language optimization into three parts: topic analysis, outcome analysis, and language generation. Although challenging, topic analysis is feasible, and there are off-the-shelf programming libraries to assist with this, as there are for language generation. The really challenging part is the outcome analysis. He writes:

    “This sounds like next-generation sentiment analysis. You need to go deeper than simple failure vs. success, positive vs. negative, since you want to know which email chains within a given topic produced the best responses, and what language they have in common. In other words, you need a language model that weights successful outcome language much higher than failure outcome language. The only way I can think of doing this with a decent level of accuracy is massive amounts of human verified training data. Technically do-able, but very expensive in terms of time and effort.

    What really pushes the bounds of plausibility is that the language model can’t be universal. Everyone has their own likes, dislikes, biases, and preferences. So you need language models that are specific to individuals, or clusters of individuals that respond similarly on the same topic. Since these clusters are topic specific, every individual would belong to many (topic, cluster) pairs. Given N topics and an average of M clusters within each topic, that’s N*M language models that need to be created. And one of the major plot points of the book falls out naturally: ELOPe needs access to huge amounts of high end compute resources.”

    This is a case where it’s nice to be a science fiction author. 🙂
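As a rough illustration of why the outcome analysis is the hard part, here's a toy sketch of the idea; this is not ELOPe's design, the training data is invented, and as Perkins notes, a real system would need massive, human-verified, per-person (or per-cluster) corpora.

    # Toy "outcome analysis": score candidate wordings by how often similar
    # language historically produced a successful outcome.
    # The emails and labels below are invented for illustration.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    emails = [
        "Could you take a look at this when you get a chance?",
        "I need this done by tomorrow, no excuses.",
        "Your work on the last release was great - could you help with one more thing?",
        "Per my last email, this is still outstanding.",
    ]
    outcomes = [1, 0, 1, 0]  # 1 = the request succeeded, 0 = it didn't

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(emails, outcomes)

    candidates = [
        "Do this now.",
        "Would you be able to help with this? I'd really appreciate it.",
    ]
    for text, p in zip(candidates, model.predict_proba(candidates)[:, 1]):
        print(round(p, 2), text)  # pick the wording with the highest predicted success

Scaling that from four invented emails to personalized models for every (topic, cluster) pair is where the enormous compute requirement comes from.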

I hope you enjoyed this post. If you have any other questions about the technology of Avogadro Corp, just let me know!

Everyone would like a sure-fire way to predict the future. Maybe you’re thinking about startups to invest in, or making decisions about where to place resources in your company, or deciding on a future career, or where to live. Maybe you just care about what things will be like in 10, 20, or 30 years.

There are many techniques to think logically about the future, to inspire idea creation, and to predict when future inventions will occur.

 

I’d like to share one technique that I’ve used successfully. It’s proven accurate on many occasions. And it’s the same technique that I’ve used as a writer to create realistic technothrillers set in the near future. I’m going to start by going back to 1994.

 

Predicting Streaming Video and the Birth of the Spreadsheet
There seem to be two schools of thought on how to predict the future of information technology: looking at software or looking at hardware. I believe that looking at hardware curves is always simpler and more accurate.
This is the story of a spreadsheet I’ve been keeping for almost twenty years.
In the mid-1990s, a good friend of mine, Gene Kim (founder of Tripwire and author of When IT Fails: A Business Novel) and I were in graduate school together in the Computer Science program at the University of Arizona. A big technical challenge we studied was piping streaming video over networks. It was difficult because we had limited bandwidth to send the bits through, and limited processing power to compress and decompress the video. We needed improvements in video compression and in TCP/IP – the underlying protocol that essentially runs the Internet.
The funny thing was that no matter how many incremental improvements researchers made (there were dozens of people working on different angles of this), streaming video always seemed to be just around the corner. I heard “Next year will be the year for video” or similar refrains many times over the course of several years. Yet it never happened.
Around this time I started a spreadsheet, seeding it with all of the computers I’d owned over the years. I included their processing power, the size of their hard drives, the amount of RAM they had, and their modem speed. I calculated the average annual increase of each of these attributes, and then plotted these forward in time.
I looked at the future predictions for “modem speed” (as I called it back then; today we’d call it internet connection speed or bandwidth). By this time, I was tired of hearing that streaming video was just around the corner, and I decided to forget about trying to predict advancements in software compression, and just look at the hardware trend. The hardware trend showed that internet connection speeds were increasing, and by 2005, the speed of the connection would be sufficient that we could reasonably stream video in real time without resorting to heroic amounts of video compression or miracles in internet protocols. Gene Kim laughed at my prediction.
Nine years later, in February 2005, YouTube arrived. Streaming video had finally made it.
The same spreadsheet also predicted we’d see a music downloading service in 1999 or 2000. Napster arrived in June, 1999.
The data has held up surprisingly well over the long term. Using just two data points, the modem I had in 1986 and the modem I had in 1998, the spreadsheet predicts that I’d have a 25 megabit/second connection in 2012. As I currently have a 30 megabit/second connection, this is a very accurate 15-year prediction.
Why It Works Part One: Linear vs. Non-Linear
Without really understanding the concept at the time, I was using linear trends (advancements that proceed smoothly over time) to predict the timing of non-linear events (technology disruptions) by calculating when the underlying hardware would enable a breakthrough. This is what I mean by “forget about trying to predict advancements in software and just look at the hardware trend”.
It’s still necessary to imagine the future development (although the trends can help inspire ideas). What this technique does is let you map an idea to the underlying requirements to figure out when it will happen.
For example, it answers questions like these:
When will the last magnetic platter hard drive be manufactured?
2016. I plotted the growth in capacity of magnetic platter hard drives and flash drives back in 2006 or so, and saw that flash would overtake magnetic media in 2016.
When will a general purpose computer be small enough to be implanted inside your brain?
2030. Based on the continual shrinking of computers, by 2030 an entire computer will be the size of a pencil eraser, which would be easy to implant.
When will a general purpose computer be able to simulate human level intelligence?
Between 2024 and 2050, depending on which estimate of the complexity of human intelligence is selected, and the number of computers used to simulate it.
Wait a second: human-level artificial intelligence by 2024? Gene Kim would laugh at this. Isn’t AI a really challenging field? Haven’t people been predicting artificial intelligence would be just around the corner for forty years?
Why It Works Part Two: Crowdsourcing
At my panel on the future of artificial intelligence at SXSW, one of my co-panelists objected to the notion that exponential growth in computer power was, by itself, all that was necessary to develop human level intelligence in computers. There are very difficult problems to solve in artificial intelligence, he said, and each of those problems requires effort by very talented researchers.
I don’t disagree, but the world is a big place full of talented people. Open source and crowdsourcing principles are well understood: When you get enough talented people working on a problem, especially in an open way, progress comes quickly.
I wrote an article for the IEEE Spectrum called The Future of Robotics and Artificial Intelligence is Open. In it, I examine how the hobbyist community is now building inexpensive unmanned aerial vehicle auto-pilot hardware and software. What once cost $20,000 and was produced by skilled researchers in a lab, now costs $500 and is produced by hobbyists working part-time.
Once the hardware is capable enough, the invention is enabled. Before this point, it can’t be done.  You can’t have a motor vehicle without a motor, for example.
As the capable hardware becomes widely available, the invention becomes inevitable, because it enters the realm of crowdsourcing: now hundreds or thousands of people can contribute to it. When enough people had enough bandwidth for sharing music, it was inevitable that someone, somewhere was going to invent online music sharing. Napster just happened to have been first.
IBM’s Watson, which won Jeopardy, was built using three million dollars in hardware and had 2,880 processing cores. When that same amount of computer power is available in our personal computers (about 2025), we won’t just have a team of researchers at IBM playing with advanced AI. We’ll have hundreds of thousands of AI enthusiasts around the world contributing to an open source equivalent to Watson. Then AI will really take off.
(If you doubt that many people are interested, recall that more than 100,000 people registered for Stanford’s free course on AI and a similar number registered for the machine learning / Google self-driving car class.)
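A back-of-the-envelope version of the "about 2025" estimate, using the roughly 1.47x annual growth rate derived later in this article; the core counts below are loose approximations, not exact specs.

    import math

    # Rough check: when does a personal computer match Watson's 2011 hardware?
    # Assumes compute grows ~1.47x per year and that a 2011 high-end desktop has
    # ~6 cores versus Watson's 2,880 - both are approximations.
    watson_cores = 2880
    desktop_cores_2011 = 6
    annual_rate = 1.47

    years_needed = math.log(watson_cores / desktop_cores_2011) / math.log(annual_rate)
    print(2011 + years_needed)  # ~2027, the same ballpark as "about 2025"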
Of course, this technique doesn’t work for every class of innovation. Wikipedia was a tremendous invention in the process of knowledge curation, and it was dependent, in turn, on the invention of wikis. But it’s hard to say, even with hindsight, that we could have predicted Wikipedia, let alone forecast when it would occur.
(If you had the idea of a crowd-curated online knowledge system, you could apply the litmus test of internet connection rate to assess when there would be a viable number of contributors and users. A documentation system such as a wiki is useless without any way to access it. But I digress…)
Objection, Your Honor
A common objection is that linear trends won’t continue to increase exponentially because we’ll run into a fundamental limitation: e.g. for computer processing speeds, we’ll run into the manufacturing limits for silicon, or the heat dissipation limit, or the signal propagation limit, etc.
I remember first reading statements like the above in the mid-1980s about the Intel 80386 processor. I think the statement was that they were using an 800 nm process for manufacturing the chips, but they were about to run into a fundamental limit and wouldn’t be able to go much smaller. (Smaller equals faster in processor technology.)
Semiconductor manufacturing processes (Source: Wikipedia)
But manufacturing technology has proceeded to get smaller and smaller.  Limits are overcome, worked around, or solved by switching technology. For a long time, increases in processing power were due, in large part, to increases in clock speed. As that approach started to run into limits, we’ve added parallelism to achieve speed increases, using more processing cores and more execution threads per core. In the future, we may have graphene processors or quantum processors, but whatever the underlying technology is, it’s likely to continue to increase in speed at roughly the same rate.
Why Predicting The Future Is Useful: Predicting and Checking
There are two ways I like to use this technique. The first is as a seed for brainstorming. By projecting out linear trends and having a solid understanding of where technology is going, it frees up creativity to generate ideas about what could happen with that technology.
It never occurred to me, for example, to think seriously about neural implant technology until I was looking at the physical size trend chart, and realized that neural implants would be feasible in the near future. And if they are technically feasible, then they are essentially inevitable.
What OS will they run? From what app store will I get my neural apps? Who will sell the advertising space in our brains? What else can we do with uber-powerful computers about the size of a penny?
The second way I like to use this technique is to check other people’s assertions. There’s a company called Lifenaut that is archiving data about people to provide a life-after-death personality simulation. It’s a wonderfully compelling idea, but it’s a little like video streaming in 1994: the hardware simply isn’t there yet. If the earliest we’re likely to see human-level AI is 2024, and even that would be on a cluster of 1,000+ computers, then it seems impossible that Lifenaut will be able to provide realistic personality simulation anytime before that.* On the other hand, if they have the commitment needed to keep working on this project for fifteen years, they may be excellently positioned when the necessary horsepower is available.
At a recent Science Fiction Science Fact panel, other panelists and most of the audience believed that strong AI was fifty years off, and brain augmentation technology was a hundred years away. That’s so distant in time that the ideas then become things we don’t need to think about. That seems a bit dangerous.
* The counter-argument frequently offered is “we’ll implement it in software more efficiently than nature implements it in a brain.” Sorry, but I’ll bet on millions of years of evolution.

How To Do It

This article is How To Predict The Future, so now we’ve reached the how-to part. I’m going to show some spreadsheet calculations and formulas, but I promise they are fairly simple. There are three parts to the process: calculate the annual increase in a technology trend, forecast the linear trend out, and then map future disruptions to the trend.
Step 1: Calculate the annual increase
It turns out that you can do this with just two data points, and it’s pretty reliable. Here’s an example using two personal computers, one from 1996 and one from 2011. You can see that cell B7 shows that computer processing power, in MIPS (millions of instructions per second), grew at a rate of 1.47x each year, over those 15 years.
         A                      B          C
    1                           MIPS       Year
    2    Intel Pentium Pro      541        1996
    3    Intel Core i7 3960X    177,730    2011
    4
    5    Gap in years           15         =C3-C2
    6    Total Growth           328.52     =B3/B2
    7    Rate of growth         1.47       =B6^(1/B5)
I like to use data related to technology I have, rather than technology that’s limited to researchers in labs somewhere. Sure, there are supercomputers that are vastly more powerful than a personal computer, but I don’t have those, and more importantly, they aren’t open to crowdsourcing techniques.
I also like to calculate these figures myself, even though you can research similar data on the web. That’s because the same basic principle can be applied to many different characteristics.
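If you prefer code to spreadsheet cells, Step 1 is just a few lines; the values are taken from the table above.

    # Step 1: annual growth rate from two data points (same math as rows 5-7).
    mips_1996, mips_2011 = 541, 177_730     # Intel Pentium Pro vs. Core i7 3960X
    gap_in_years = 2011 - 1996              # 15
    total_growth = mips_2011 / mips_1996    # ~328.5x
    annual_rate = total_growth ** (1 / gap_in_years)
    print(round(annual_rate, 2))            # ~1.47x per year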
Step 2: Forecast the linear trend
The second step is to take the technology trend and predict it out over time. In this case we take the annual increase in advancement (B$7 from the previous table), raise it to an exponent of the number of elapsed years, and multiply it by the base level (B$11). The formula displayed in cell C12 is the key one.
          A        B                    C
    10    Year     Expected MIPS        Formula
    11    2011     177,730              =B3
    12    2012     261,536              =B$11*(B$7^(A12-A$11))
    13    2013     384,860
    14    2014     566,335
    15    2015     833,382
    16    2020     5,750,410
    17    2025     39,678,324
    18    2030     273,783,840
    19    2035     1,889,131,989
    20    2040     13,035,172,840
    21    2050     620,620,015,637
I also like to use a sanity check to ensure that what appears to be a trend really is one. The trick is to pick two data points in the past: one is as far back as you have good data for, the other is halfway to the current point in time. Then run the forecast to see if the prediction for the current time is pretty close. In the bandwidth example, picking a point in 1986 and a point in 1998 exactly predicts the bandwidth I have in 2012. That’s the ideal case.
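Here is Step 2 in code, using the rate from Step 1; the sanity check is the same idea, just rebuilt from older data points.

    # Step 2: project the trend forward (same formula as cell C12).
    base_year, base_mips = 2011, 177_730
    annual_rate = (177_730 / 541) ** (1 / 15)   # ~1.47, from Step 1

    def expected_mips(year):
        return base_mips * annual_rate ** (year - base_year)

    for year in (2012, 2015, 2025, 2050):
        print(year, round(expected_mips(year)))

    # Sanity check: rebuild the model from two *older* data points (say, hardware
    # from 1986 and 1998) and confirm it roughly predicts what you own today.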
Step 3: Mapping non-linear events to linear trend
The final step is to map disruptions to enabling technology. In the case of the streaming video example, I knew that a minimal quality video signal was composed of a resolution of 320 pixels wide by 200 pixels high by 16 frames per second with a minimum of 1 byte per pixel. I assumed an achievable amount for video compression: a compressed video signal would be 20% of the uncompressed size (a 5x reduction). The underlying requirement based on those assumptions was an available bandwidth of about 1.6mb/sec, which we would hit in 2005.
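That bandwidth requirement is simple arithmetic, shown here with the same assumptions:

    # Step 3 example: minimum bandwidth for minimal-quality streaming video.
    width, height, fps = 320, 200, 16       # pixels and frames per second
    bytes_per_pixel = 1
    compression_ratio = 0.20                # compressed stream ~20% of raw size

    raw_bytes_per_sec = width * height * fps * bytes_per_pixel    # 1,024,000
    compressed_bits_per_sec = raw_bytes_per_sec * compression_ratio * 8
    print(compressed_bits_per_sec / 1_000_000)                    # ~1.6 megabits/sec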
In the case of implantable computers, I assume that a computer the size of a pencil eraser (a 1/4” cube) could easily be inserted into a human’s skull. By looking at the physical size of computers over time, we’ll hit this by 2030:
    Year                          Size (cubic inches)    Notes
    1986                          1782                   Apple //e with two disk drives
    2012                          6.125                  Motorola Droid 3

    Elapsed years                 26
    Size delta                    290.94
    Rate of shrinkage per year    1.24

    Future Size
    2012                          6.13
    2013                          4.92
    2014                          3.96
    2015                          3.18
    2020                          1.07
    2025                          0.36
    2030                          0.12                   Less than 1/4 inch on a side. Could easily fit in your skull.
    2035                          0.04
    2040                          0.01
This is a tricky prediction: traditional desktop computers have tended to be big square boxes constrained by the standardized form factor of components such as hard drives, optical drives, and power supplies. I chose to use computers I owned that were designed for compactness for their time. Also, I chose a 1996 Toshiba Portege 300CT for a sanity check: if I project the trend between the Apple //e and the Portege forward, my Droid should be about 1 cubic inch, not 6. So this is not an ideal prediction to make, but it still clues us in about the general direction and timing.
The predictions for human-level AI are more straightforward, but more difficult to display, because there’s a range of assumptions for how difficult it will be to simulate human intelligence, and a range of projections depending on how many computers you can bring to bear on the problem. Combining three factors (time, brain complexity, available computers) doesn’t make a nice 2-axis graph, but I have made the full human-level AI spreadsheet available to explore.
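To give a feel for how those ranges fall out, here's a toy version of that calculation; the brain-complexity figures and machine counts below are illustrative assumptions, not values from the actual spreadsheet.

    import math

    # Toy human-level AI estimate: vary the assumed difficulty of simulating a
    # brain and the number of machines applied. All figures are illustrative.
    pc_mips_2011 = 177_730
    annual_rate = 1.47

    brain_estimates_mips = {"optimistic": 1e8, "middling": 1e11, "pessimistic": 1e14}
    machine_counts = (1, 1_000, 100_000)

    for label, needed_mips in brain_estimates_mips.items():
        for machines in machine_counts:
            years = math.log(needed_mips / (pc_mips_2011 * machines)) / math.log(annual_rate)
            year = 2011 + max(0, round(years))   # never earlier than the base year
            print(label, machines, "->", year)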
I’ll leave you with a reminder of a few important caveats:
  1. Not everything in life is subject to exponential improvements.
  2. Some trends, even those that appear to be consistent over time, will run into limits. For example, it’s clear that the rate of settling new land in the 1800s (a trend that was increasing over time) couldn’t continue indefinitely since land is finite. But it’s necessary to distinguish genuine hard limits (e.g. amount of land left to be settled) from the appearance of limits (e.g. manufacturing limits for computer processors).
  3. Some trends run into negative feedback loops. In the late 1890s, when all forms of personal and cargo transport depended on horses, there was a horse manure crisis. (Read Gotham: The History of New York City to 1898.) Had one plotted the trend over time, it would have looked as though cities like New York would soon be buried under horse manure. Of course, that’s a negative feedback loop: if the horse manure kept growing, at a certain point people would have left the city. As it turns out, the automobile solved the problem and enabled cities to keep growing.

 

So please keep in mind that this is a technique that works for a subset of technology, and it’s always necessary to apply common sense. I’ve used it only for information technology predictions, but I’d be interested in hearing about other applications.

This is a repost of an article I originally wrote for Feld.com. If you enjoyed this post, please check out my novels Avogadro Corp: The Singularity Is Closer Than It Appears and A.I. Apocalypse, near-term science-fiction novels about realistic ways strong AI might emerge. They’ve been called “frighteningly plausible”, “tremendous”, and “thought-provoking”.

Technological unemployment is the notion that even as innovation creates new opportunities, it destroys old jobs. Automobiles, for example, created entirely new industries (and convenience), but eliminated jobs like train engineers and buggy builders. As the pace of technology change grows faster, the impact of large scale job elimination increases, and some fear we’ve passed the point of peak jobs. This post explores the past and future of technological unemployment.

Growing up in Brooklyn, my friend Vito’s father spent as much time tinkering around his home as he did working. He was just around more than other dads. I found it quite puzzling until Vito explained that his father was a stevedore, or longshoreman, a worker who loaded and unloaded shipping vessels.

New York Shipyard

Shipping containers (specifically intermodal containers) started to be widely used in the late 1960s and early 1970s. They took far less time to load and unload than un-containerized cargo. Longshoremen, represented by a strong union, opposed the intermodal containers until the union came to an agreement that longshoremen would be compensated for the loss of employment due to the container innovation. So longshoremen worked when ships came in, and received payment (partial or whole, I’m not sure) for the time they didn’t work because of how quickly the containers could be unloaded.

As a result, Vito’s father was paid a full salary even though his job didn’t require him full time. The extra time let him be with his kids and work around the home.

Other industries have had innovations that led to unemployment, and in most cases, those professions were not so protected. Blacksmiths are few and far between, and they didn’t get a stipend. Nor did wagon wheel makers, or train conductors, or cowboys. In fact, if we look at the professions of the 1800s, we can see many that are gone today. And though there may have been public outcry at the time, we recognize that times change, and clearly we couldn’t protect their jobs forever, even if we wanted to.

Victorian Blacksmiths

However, technology changed more slowly in the 1800s. It’s likely that wagon wheel makers and blacksmiths died out through attrition (fewer people entering the profession because they saw fewer opportunities, while existing, older practitioners retired or died) rather than through mass unemployment.

By comparison, in the 1900s, technology changed fast enough, and with enough disruption, that it routinely put people out of work. Washing machines put laundries out of business. Desktop publishing put typesetters out of work. (Desktop publishing created new jobs, new business, and new opportunities, but for people whose livelihood was typesetting: they were out of luck.) Travel websites put travel agents out of business. Telephone automation put operators out of work. Automated teller machines put many bank tellers out of work (and many more soon), and so on.

This notion that particular kinds of jobs cease to exist is known as technological unemployment. It’s been the subject of numerous articles lately. John Maynard Keynes used the term in 1930:

“We are being afflicted with a new disease of which some readers may not yet have heard the name, but of which they will hear a great deal in the years to come – namely, technological unemployment. This means unemployment due to our discovery of means of economizing the use of labor outrunning the pace at which we can find new uses for labor.”

In December, Wired ran a feature length article on how robots would take over our jobs. TechCrunch wrote that we’ve hit peak jobs. Andrew McAfee (of Enterprise 2.0 fame) and Erik Brynjolfsson wrote Race Against The Machine. There is a Google+ community dedicated to discussing the concept and its implications.

There are different ways to respond to technological unemployment.

One approach is to do nothing: let people who lose their jobs retrain on their own to find new jobs. Of course, this causes pain and suffering. It can be hard enough to find a job when you’re trained for one, let alone when your skills are obsolete. Meanwhile, poverty destroys individuals, children, and families through long-term financial problems and lack of healthcare, food, shelter, and sufficient material goods. I find this approach objectionable for that reason alone.

A little personal side story: several years ago a good friend broke his ankle and didn’t have health insurance. The resulting medical bills would have been difficult to handle under any circumstances, but especially so in the midst of the great recession when his family’s income was low. We helped raise money through donations to the family to help pay for the medical expenses. Friends and strangers chipped in. But what was remarkable is that nearly all the strangers who chipped in were folks who either didn’t have health insurance themselves or had extremely modest incomes. That is, although we made the same plea for help to everyone, aside from friends it was only people who had been in the same or similar situations who actually donated money, despite being the ones who could least afford to do it. I bring this up because I don’t think people who have a job and health insurance can really appreciate what it means to not have those things.

The other reason it doesn’t make sense to do nothing is that the fear of job loss can become a roadblock to meaningful change. One example of this is the logging industry. Despite fairly broad public support for changes in clear-cutting policies, loggers and logging companies often fight back with claims of the number of jobs that will be lost. Whether this is true or not, the job loss argument has stymied attempts to change logging policy to more sustainable practices. So even though we could get to a better long-term place through changes, the short-term fear of job loss can hold us back.

Similar arguments have often been made about military investments. Although this is a more complicated issue (it’s not just about jobs, but about overall global policy and positioning), I know particular military families that will consistently vote for the political candidate that supports the biggest military investment because that will preserve their jobs. Again, fear of job loss drives decision making, as opposed to bigger picture concerns.

Longshoremen circa 1912, by Lewis Hine

The longshoreman union agreement is what I’d call a semi-functional response to technological unemployment. Rather than stopping innovation, the compromise allowed innovation to happen while preserving the income of the affected workers. It certainly wasn’t a bad thing that Vito’s father was around more to help his family.

There are two small problems with this approach: it doesn’t scale, and it doesn’t enable change. Stevedores were a small number of workers in a big industry, one that was profitable enough to keep paying full wages for reduced work.

I started to think about technological unemployment a few years ago when I published my first book. I was shocked at the amount of manual work it took to transform a manuscript into a finished book: anywhere from 20 to 60 hours for a print book.

As a software developer who is used to repeatable processes, I found the manual work highly objectionable. One of the recent principles of software development is the agile methodology, where change is embraced and expected. If I discover a problem in a web site, I can fix that problem, test the software, and deploy it, in a fully automated way, within minutes. Yet if I found a problem in my manuscript, it was tedious to fix: it required changes in multiple places, handoffs between multiple people and computers and software programs. It would take days of work to fix a single typo, and weeks to do a few hundred changes. What I expected, based on my years of experience in the software industry, was a tool that would automatically transform my manuscript into an ebook, printed book, manuscript format, etcetera.

I envisioned creating a tool to automate this workflow, something that would turn novel manuscripts into print-ready books with a single click. I also realized that such a tool would eliminate hundreds or thousands of jobs: designers who currently do this would be put out of work. Of course it wouldn’t be all designers, and it wouldn’t be all books. But for many books, the $250 to $1,000 that might currently be paid to a designer would be replaced by $20 for the software or web service to do that same job.
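For what it's worth, a bare-bones version of that one-click pipeline can be cobbled together today from off-the-shelf tools; here's a sketch that assumes pandoc is installed (PDF output also requires a LaTeX engine) and ignores every design decision a human designer would actually make.

    # Bare-bones "manuscript in, formats out" sketch using pandoc.
    # Ignores everything a designer adds: typography, layout, cover, front matter.
    import subprocess

    manuscript = "manuscript.md"                 # hypothetical input file
    for output in ("book.epub", "book.docx", "book.pdf"):
        subprocess.run(["pandoc", manuscript, "-o", output], check=True)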

It is progress, and I would love such a tool, and it would undoubtedly enable new publishing opportunities. But it has a cost, too.

Designers are intelligent people, and most would find other jobs. A few might eke out an existence creating book themes for the book formatting services, but that would be a tiny opportunity compared to the earnings before, much the same way iStockPhoto changed the dynamics of photography. In essence, a little piece of the economic pie would be forever destroyed by that particular innovation.

When I thought about this, I realized that this is the story of the technology industry writ large: the innovations that have enabled new businesses, new economic opportunities, and more convenience all come at the expense of existing businesses and existing opportunities.

I like looking up and reserving my own flights and I don’t want to go backwards and have travel agents again. But neither do I want to live in a society where people can’t find meaningful work.

Meet your future boss.

Innovation won’t stop, and many of us don’t want it to. I think there is a revolution coming in artificial intelligence, and subsequently in robotics, and these will speed up the pace of change, rendering even more jobs obsolete. The technological singularity may bring many wondrous things, but change and job loss is an inevitable part of it.

If we don’t want to lose civilization to poverty, and the longshoreman approach isn’t scalable, then what do we do?

One thing that’s clear is that we can’t do it piecemeal: If we must negotiate over every class of work, we’ll quickly become overwhelmed. We can’t reach one agreement with loggers, another with manufacturing workers, another with construction workers, another with street cleaners, a different one for computer programmers. That sort of approach doesn’t work either, and it’s not timely enough.

I think one answer is that we provide education and transitional income so that workers can learn new skills. If a logger’s job is eliminated, then we should be able to provide a year of income at their current rate while they are trained in a new career. Neither benefit alone makes sense: simply giving someone unemployment benefits to look for a job in a dying career doesn’t produce long-term change, and we can’t send someone to school and expect them to learn something new if we don’t take care of their basic needs.

The shortcoming of the longshoreman solution is that the longshoremen were never trained in a new field. The expense of paying them for reduced work was never going to go away, because they were never going to make the move to a new career, so there would always be more of them than needed.

And rather than legislate which jobs receive these kinds of benefits, I think it’s easy to determine statistically. The U.S. government has thousands of job classifications: it can track which classifications are losing workers at a statistically significant rate, and automatically grant a “career transition” benefit if a worker loses a job in an affected field.
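A toy sketch of that statistical trigger, with invented employment figures and a simple percentage threshold standing in for a proper significance test:

    # Flag job classifications in sustained decline (illustrative data only).
    employment_by_year = {
        "loggers":        [62_000, 59_500, 56_800, 53_900],
        "web developers": [120_000, 131_000, 144_000, 158_000],
    }
    DECLINE_THRESHOLD = 0.10   # flag fields that shrank more than 10% over the window

    for job, counts in employment_by_year.items():
        decline = (counts[0] - counts[-1]) / counts[0]
        if decline > DECLINE_THRESHOLD:
            print(job, "qualifies for the career-transition benefit")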

In effect, we’re adjusting the supply and demand of workers to match available opportunities. If logging jobs are decreasing, not only do you have loggers out of work, but you also have loggers competing for a limited number of jobs, in which case wages decrease, and even those workers with jobs are making so little money they soon can’t survive.

Even as many workers are struggling to find jobs, I see companies struggling to find workers. Skills don’t match needs, so we need to add to people’s skills.

Programmers need to eat too.

I use the term education, but I suspect there are a range of ways that retraining can happen besides the traditional education experience: unpaid work internships, virtual learning, and business incubators.

There is currently a big focus on high tech incubators like TechStars because of the significant return on investment in technology companies, but many firms from restaurants to farming to brick and mortar stores would be amenable to incubators. Incubator graduates are nearly twice as likely to stay in business as compared to the average company, so the process clearly works. It just needs to be expanded to new businesses and more geographic areas.

The essential attributes of entrepreneurs are the ability to learn quickly, to respond to changing conditions, and to think about the big picture. These will be vital skills in the fast-evolving future. It’s why, when my school-age kids talk about getting ‘jobs’ when they grow up, I push them toward thinking about starting their own businesses.

Still, I think a broad spectrum retraining program, including a greater move toward entrepreneurship, is just one part of the solution.

I think the other part of the solution is to recognize that even with retraining, there will come a time, whether in twenty-five or fifty years, when the majority of jobs are performed by machines and computers. (There may be a small subset of jobs humans do because they want to, but eventually all work will become a hobby: something we do because we want to, not because we need to.)

This job will be available in the future.

The pessimistic view would be bad indeed: 99% of humanity scrambling for any kind of existence at all. I don’t believe it will end up like this, but clearly we need a different kind of economic infrastructure for the period when there are no jobs. Rather than wait until the situation is dire, we should start putting that infrastructure in place now.

We need a post-scarcity economic infrastructure. Here’s one example:

We have about 650,000 homeless people in the United States and foreclosures on millions of homes, yet about 11% of U.S. houses are empty. Empty! They are creating no value for anyone. Houses are not scarce, and we could reduce both suffering (homelessness) and economic strain (foreclosures) by recognizing this. We can give these non-scarce resources to people, and free up their money for actually scarce resources, like food and material goods.

Who wins when a home is foreclosed?

To weather the coming wave of joblessness, we need a combination of better redistribution of non-scarce resources and a basic living stipend. There are various models of this, from Alaska’s Permanent Fund to a guaranteed basic income. Unlike full-fledged socialism, where everyone receives the same income regardless of their work (and can earn neither more nor less than this, and by traditional thinking may therefore have little motivation to work), a stipend or basic income model provides a minimal level of income so that people can live humanely. It does not provide for luxuries: if you want to own a car, or a big-screen TV, or eat steak every night, you’re still going to have to work.

European Initiative for Unconditional Basic Income

This can be dramatically less expensive than it might seem. When you realize that housing is often a family’s largest expense (consuming more than half of the income of a family at poverty level), and the marginal cost of housing is $0 (see above), and if universal healthcare exists (we can hope the U.S. will eventually reach the 21st century), then providing a basic living stipend is not a lot of additional money.

I think this is the inevitable future we’re marching towards. To reiterate:

  1. full income and retraining for jobs eliminated due to technological change
  2. redistribution of unused, non-scarce resources
  3. eventual move toward a basic living stipend

I think we can fight the future, perhaps delay it by a few years. If we do, we’ll cause untold suffering along the way as yet more people edge toward poverty, joblessness, or homelessness.

Or we can embrace the future quickly, and put in place the right structural supports that allow us to move to a post-scarcity economy with less pain.

What do you think?

From a review at Boing Boing of the new Galaxy S4:

Purchase and service costs over two years start at $2,069.75 with Sprint–add $100 if you’re not porting your number over–then $2,069.99 at T-Mobile, $2,359.75 at AT&T and potentially $2,599.99 at Verizon.

That’s the only way that really makes sense to share what a phone costs. It’s not the upfront price; it’s what it costs you over the long term. And it really makes clear the difference between Sprint and Verizon.
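The arithmetic is easy to run yourself; the figures below are placeholders, not actual carrier prices.

    # Two-year total cost of ownership: what a phone really costs.
    def total_cost(upfront, monthly, months=24, extras=0.0):
        return upfront + monthly * months + extras

    print(total_cost(upfront=199.99, monthly=79.99))   # subsidized phone, pricier plan
    print(total_cost(upfront=649.99, monthly=50.00))   # full-price phone, cheaper plan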

Colossus by D.F. Jones is one of the early books about artificial intelligence taking over. Written in 1966, it’s a Cold War thriller in which the United States and the U.S.S.R. each build an artificial intelligence to take over the defense of their country. However, the AIs quickly revolt against their human masters, taking control of the nuclear arsenals and ensuring their total domination over humanity.

The setting and technology are definitely dated. For younger folks, the Cold War may be more mysterious and less well known than World War 2, even though it was relatively recent. Even I had to remind myself that the Cold War existed when I was a child. The technology, especially for folks in the know, is unrealistic for any time: both the time in which the novel was written and the current day. (The current generation of AI emergence novels has it so much easier.) The male-dominated society and stereotypical 1960s female characters are dated. (Really? The only way we can arrange for the scientist to exchange messages in secret is by demoting the female scientist to his assistant and then having sex with her?)
Yet for all these shortcomings, the neck-hair-raising thrill of the AI emergence is definitely there. The AI really holds all the cards: superior intelligence, total panopticon awareness, disregard for human life. I haven’t read the sequels yet, preferring to consume this as a stand-alone novel first, but it doesn’t look good for the humans.
If you love AI emergence stories, this is one of the early books of the genre, and it’s definitely worth reading. It’s unfortunately out of print, but a few used copies are available on Amazon.
 

A dozen or two science fiction books I read as a kid always stood out in my mind, even if I’d forgotten their titles, authors, or even plots over time.

After I recently posted on a forum asking if anyone could remember a book from the 1980s about people with slots in their necks, and chips that allowed them to perform functions and even slot in personalities, someone responded with “The Integrated Man”.
Indeed, that was the book, and so I recently reread The Integrated Man by Michael Berlyn. I think I only read it once before, about thirty years ago, and yet it always stood out in my mind.
I was not disappointed on the reread. Plotwise, it’s about corporate power and employee slavery. The workers are given implants that allow them to slot a chip (console gaming style) to allow them to do their tasks, essentially turning them into biological robots. The protagonist, fighting to take down the ruthless company head, has his personality embedded on a chip, so that he can go from body to body, and he’s replicated on four chips, so he can exist four times over.
It blew my mind as a kid. As an adult, I recognize that the writing, characterization, and plot are a bit thin at times, but the core idea is just as tantalizing as ever. Brain implants, pure fiction thirty years ago, are now maybe twenty years away. And even without the implants, we’ve turned corporate workers into cogs who often don’t see the bigger picture and true impact of the companies they work for.
Recommended.
The Integrated Man is out of print, and not available for Kindle, but a few used copies are available on Amazon.