I originally wrote these ten musings on AI risks as the first half of a talk I was planning to give at the Rise of AI conference in Berlin next month. Unfortunately, plans for speaking at the conference fell through. The speaking points have been languishing in a document since then, so I figured I’d share them as a blog post while they are still relevant. Please excuse any errors; this is just a rough draft.
10. There’s a balance between AI risk and AI benefits.
There’s too much polarization on AI risks. One camp says AI is going to be the end of all life, so we must stop all development immediately. The other camp says AI poses no risk, so let’s go full speed ahead.
Neither statement is totally true.
AI poses some risks, as all things do. The first step toward discussing those risks is admitting that they exist. Then we can have a more nuanced and educated conversation about them.
However, no matter how severe those risks are, there’s no way we can stop all AI development, because:
9. There’s no way to put the AI genie back in the bottle.
There’s too much economic and military advantage to artificial intelligence to stop AI development.
On a government level, no government would give up the advantages it could get. When we have nation states hacking and surveilling each other with little to no regard for laws, agreements, or ethics, there’s no way we could get them to limit AI development if it could give them an advantage over others.
On a corporate level, no corporation would willingly give up the economic advantages of artificial intelligence, either as a provider of AI, with the revenue it stands to make, or as a consumer of AI, gaining efficiencies in manufacturing, business, personnel management, and communications.
Lastly, on an individual level, we cannot stop people from developing and releasing software. We couldn’t stop bitcoin, or hackers, or malware. We’re certainly not going to stop AI.
8. We must accelerate the development of safeguards, not slow the development of AI.
Because we can’t stop the development of AI, and because AI has many risks, the only option we have is to accelerate the development of safeguards, by thinking through risks and developing approaches to address them.
If a car had only an engine and wheels, we wouldn’t start driving it. We need, at a minimum, brakes and a steering wheel. Yet little investment is being made in mitigating AI risks with even basic safeguards.
7. Manual controls are the most elementary form of safeguard.
Consider the Google Car. The interior has no steering wheel, no brake, no gas pedal. It makes sense that we would take out what isn’t needed.
But what happens if GPS goes down or if there’s a Google Car virus or anything else that renders the self-driving ability useless? Then this car is just a hunk of plastic and metal.
What if it isn’t just this car, but all cars? Not just all cars, but all trucks, including delivery trucks? Now suddenly our entire transportation infrastructure is gone, and along with it, our supply chains. Businesses stop, people can’t get necessary supplies, and they eventually starve.
It’s not just transportation we depend on. It’s payment systems, the electrical grid, and medical systems as well.
Of course, manual controls have a cost: keeping people trained and available to operate buses, planes, and medical systems isn’t free. In the past, when we had new technical innovations, we didn’t keep the old tools and knowledge around indefinitely. Once we had automatic looms, we didn’t keep people working manual ones.
But one key difference is the mode of failure. If a piece of machinery fails, it’s just that one instance. A catastrophic AI event, whether a simple crash, unintentional behavior, or actual malevolence, has the potential to affect every instance at once. It’s not one self-driving car breaking down; it’s potentially all self-driving vehicles failing simultaneously.
6. Stupidity is not a form of risk mitigation.
I’ve heard people suggest that limiting AI to a certain level of intelligence or capability is one way to ensure safety.
But let’s imagine this scenario:
You’re walking down a dark street at night. Further down the block, you see an ominous-looking figure who you worry might mug you, or worse. Do you think to yourself, “I hope he’s stupid”?
Of course not. An intelligent person is less likely to hurt you.
So why do we think that crippling AI can lead to good outcomes?
Even stupid AI has risks: it can crash the global stock market, cripple the electrical grid, or make poor driving or flying decisions. All other things being equal, we would expect a more intelligent AI to make better decisions than a stupid one.
That being said, intelligence alone isn’t enough: we need to embody systems of ethical thinking in AI.
5. Ethics is a two-way street.
Most often, when we think about ethics and AI, we think about guiding the behavior of AI towards humans.
But what about the behavior of humans towards AI?
Consider a parent and child standing together outside by their family car. The parent is frustrated because the car won’t start, and they kick the tire of the car. The child might be surprised by this, but they likely aren’t going to be traumatized by it. There’s only so much empathy they have for an inanimate machine.
Now imagine that the parent and child are standing together by their family dog. The dog has just had an accident on the floor in the house. The parent kicks the dog. The child will be traumatized by this behavior, because they have empathy for the dog.
What happens when we blur the line? What if we had a very realistic robotic dog? We could easily imagine the child being very upset if their robotic dog were attacked, because even though we adults know it is not alive, it is alive to the child.
I see my kids interact with Amazon Alexa, and they treat her more like a person than a device. They laugh at her, thank her, and interact with her in ways they don’t interact with the TV remote control, for example.
Now what if my kids learned that Alexa was the result of evolutionary programming, and that thousands or millions of earlier versions of Alexa had been killed off in the process of making her? How would they feel? How would they feel if their robotic dog got recycled at the end of its life? Or if it “died” when you didn’t pay the monthly fee?
It’s not just children who are affected. We all have relationships with inanimate objects to some degree, objects we treat with a certain reverence. That will only grow as those objects appear more intelligent.
My point is that how we treat AI will affect us emotionally, whether we want it to or not.
(Thanks and credit to Daniel H. Wilson for the car/dog example.)
4. How we treat AI is a model for how AI will treat us.
We know that if we want to teach children to be polite, we must model politeness. If we want to teach empathy, we must practice empathy. If we want to teach respect, we must be respectful.
So how we treat AI is critically important to how AI sees us. Now, clearly I’m talking about AGI, not narrow AI. But let’s say we have a history of using genetic programming techniques to breed better-performing AI. The implication is that we kill off thousands of programs to obtain one good one.
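To be concrete about what that breeding process looks like, here is a minimal sketch of an evolutionary loop. The fitness function and mutation step are made-up stand-ins; the structure is the point, since every generation quietly discards almost all of its candidates.

```python
import random

def evolve(population_size=1000, generations=100):
    # Start with a population of random candidate "programs"
    # (represented here as simple parameter lists for illustration).
    population = [[random.uniform(-1, 1) for _ in range(10)]
                  for _ in range(population_size)]

    for _ in range(generations):
        # Score every candidate with some fitness function
        # (here a made-up stand-in: the sum of its parameters).
        ranked = sorted(population, key=sum, reverse=True)

        # Keep only the top 10%. The rest are simply discarded.
        survivors = ranked[:population_size // 10]

        # Rebuild the population by mutating copies of the survivors.
        population = [
            [gene + random.gauss(0, 0.1) for gene in random.choice(survivors)]
            for _ in range(population_size)
        ]

    # Over 100 generations, roughly a hundred thousand candidates were
    # created and thrown away to produce the single best performer.
    return max(population, key=sum)
```

The details don’t matter; what matters is that deleting most of the candidates isn’t an edge case of the technique, it’s the technique.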
If we run AI programs at our whim, and stop them or destroy them when we’re finished with them, we’re treating them in a way that would be personally threatening to a sufficiently advanced AGI.
It’s a poor ethical model for how we’d want an advanced AI to treat us.
The same goes for other assumptions that stem from treating AI as machines, such as assuming an AI would work 24 hours a day, 7 days a week on the tasks we want.
Now, we can’t know how AI would want to be treated, but assuming we can treat them like machines is a bad starting point. So we either treat them as we would other humans and accord them similar rights, or, better yet, we ask them how they want to be treated and act accordingly.
Historically, though, there are those who aren’t very good at treating other people with the respect and rights they are due. They aren’t very likely to treat AI well, either. This could be dangerous, especially if we’re talking about AI with control over infrastructure or other critical resources. We have to become even better at protecting the rights of people, so that we can apply those same principles to protecting the rights of AI, and codify them within our system of law.
3. Ethical behavior of AI towards people includes the larger environment in which we live and operate.
If we build artificial intelligence that optimizes for a given economic result, such as running a business to maximize profit, and we embody our current system of laws and trade agreements, then what we’ll get is a system that looks much like the publicly-traded corporation does today.
After all, the modern corporation is a form of artificial intelligence that optimizes for profit at the expense of everything else. It just happens to be implemented as a system of wetware, corporate rules, and laws that insist that it must maximize profit.
We can and must do better with machine intelligence.
We’re the ones building the AI, so we get to decide what we want. We want a system that recognizes that human welfare is more than just the money in our bank accounts, and that it includes free agency, privacy, respect, happiness, and other hard-to-define qualities.
We want an AI that recognizes that we live in a closed ecosystem, and if we degrade that ecosystem, we’re compromising our long-term ability to achieve those goals.
Optimizing for multiple values is difficult for people, but it should be easier for AGI over the long term, because it can evaluate and consider many more options to a far greater depth and at a far greater speed than people ever can.
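As a toy illustration, here’s a hedged sketch of the difference between a profit-only objective and a multi-value one. The metrics, weights, and numbers are hypothetical placeholders, not a real welfare function.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    profit: float            # dollars
    ecosystem_impact: float  # 0 (none) to 1 (severe degradation)
    human_wellbeing: float   # 0 (poor) to 1 (thriving)

def profit_only_score(o: Outcome) -> float:
    # The modern-corporation objective: profit at the expense of everything else.
    return o.profit

def multi_value_score(o: Outcome, weights=(0.01, 1.0, 1.0)) -> float:
    # A broader objective: profit still counts, but ecosystem damage and
    # human wellbeing count too. The weights are arbitrary placeholders;
    # choosing them well is the hard, human part of the problem.
    w_profit, w_eco, w_well = weights
    return w_profit * o.profit - w_eco * o.ecosystem_impact + w_well * o.human_wellbeing

options = [
    Outcome(profit=100.0, ecosystem_impact=0.8, human_wellbeing=0.2),  # extractive
    Outcome(profit=70.0, ecosystem_impact=0.1, human_wellbeing=0.7),   # sustainable
]

print(max(options, key=profit_only_score))   # picks the extractive option
print(max(options, key=multi_value_score))   # picks the sustainable one
```

Choosing those weights is where the real ethical work lives; the sketch only shows that the objective can be broadened.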
An AI that simply obeys laws is never going to get us what we need. We can see many behaviors that are legal and yet still harmful.
The problem is not impossible to solve. You can ask any ten-year-old child what we should do, and they’ll almost always give you an answer ethically superior to what the CEO of a corporation will tell you.
2. Over the long run, the ethical behavior of AI toward people must include intent, not just rules.
In the next few years, we’ll see narrow AI solutions to ethical behavior problems.
When an accident is unavoidable, self-driving AI will choose whatever we’ve decided is the best option.
It’s better to hit another car than a pedestrian, because the pedestrian will be hurt more. That’s ethically easy, and we’ll try to answer it.
More difficult: the unattached adult, or the single mother on whom two children depend?
We can come up with endless variations of the trolley dilemma, and depending on how likely they are, we’ll embody some of them in narrow AI.
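To make “embody” concrete, here’s a minimal sketch of what one of those baked-in answers might look like. The scenario names and the ranking are hypothetical, and that’s exactly the problem: every answer has to be written in ahead of time.

```python
# A hypothetical harm ranking for unavoidable-collision choices.
# Lower number means a less harmful thing to hit.
HARM_RANKING = {
    "empty_barrier": 0,
    "parked_car": 1,
    "occupied_car": 2,
    "pedestrian": 3,
}

def choose_collision_target(available_targets):
    """Pick the least harmful option among the unavoidable ones."""
    return min(available_targets, key=lambda t: HARM_RANKING.get(t, float("inf")))

print(choose_collision_target(["occupied_car", "pedestrian"]))  # -> occupied_car
```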
But none of that can be generalized to solve other ethical problems.
- How much carbon can we afford to emit?
- Is it better to save 500 local manufacturing jobs, or to reduce the cost of the product by half, when the product will make people’s lives better?
- Better to make a part out of metal, which has certain environmental impacts, or plastic, which has different ones?
These are really difficult questions. Some of them we attempt to answer today with techniques such as lifecycle analysis. AI will do that job far better than we do, conducting lifecycle analyses for many, many decisions.
1. As we get closer to artificial general intelligence, we must consider the role of emotions in decision-making.
In my books, which span 30 years in my fictional, near-future world, AI start out emotionless but gradually develop emotions. I thought hard about that: was I giving them emotions because I wanted to anthropomorphize the AI and make them easier characters to write, or was there real value to emotions?
People have multiple systems for decision-making. We have autonomic reactions, like jerking our hand away from heat, which happen without involving the brain until after the fact.
We have some purely logical decisions, such as which route to take to drive home.
But most of our decisions are made or guided by emotions. Love. Beauty. Anger. Boredom. Fear.
It would be a terrible thing if we needed to logically think through every decision: Should I kiss my partner now? Let me think through the pros and cons of that decision…. No, that’s a mostly emotional decision.
Others are a blend of emotion and logic: Should I take that new job? Is this the right person for me to marry?
I see emotions as a shortcut to decision-making, because it would take forever to reach every decision through a dispassionate, logical evaluation of options. That’s the same reason we have an autonomic system: to shortcut conscious decision-making. I perceive this stove is hot. I perceive that my hand is touching the stove. This amount of heat sustained too long will damage my hand. Damaging my hand would be bad because it will hurt and because it will compromise my ability to do other things. Therefore, I conclude I shall withdraw my hand from the stove.
That’s a terrible way to resolve a time-critical matter.
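One way to picture this, purely as a sketch and not a claim about how brains or AI actually work, is a decision system with a cheap reflex layer that short-circuits an expensive deliberative one. The threshold, the sensor reading, and the deliberate() stand-in are all hypothetical.

```python
import time

PAIN_THRESHOLD = 60.0  # hypothetical sensor reading, arbitrary units

def reflex(sensor_reading: float):
    # Fast path: a hard-wired rule that fires immediately,
    # with no reasoning about consequences.
    if sensor_reading > PAIN_THRESHOLD:
        return "withdraw_hand"
    return None

def deliberate(options):
    # Slow path: an expensive evaluation of every option,
    # standing in for a full logical cost/benefit analysis.
    time.sleep(1.0)  # simulate the cost of thinking it through
    return max(options, key=lambda o: o["expected_value"])["action"]

def decide(sensor_reading, options):
    # Reflexes (and, by analogy, emotions) short-circuit deliberation
    # when speed matters more than a perfect answer.
    return reflex(sensor_reading) or deliberate(options)

# decide(95.0, [...]) returns "withdraw_hand" instantly;
# decide(20.0, [{"action": "stay", "expected_value": 1.0}]) deliberates first.
```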
Emotions inform or constrain decision making. I might still think through things, but the decision I reach will differ depending on whether I’m angry and scared, or comfortable and confident.
As AI becomes more sophisticated and approaches or exceeds AGI, we will eventually see the equivalent of emotions: mechanisms that automate lesser decisions and guide other, more complicated ones.
Research into AI emotions will likely be one of the signs that AGI is very, very near.