The Real AI Threat

by Craig DeLancey

 

We are all now familiar with the experience. You receive a phone call, and there is a moment of near-silence as you hear what seems like the breathing of normal human hesitation. And then the voice says, “Hello? Can you hear me?” If you say anything, the excited reply is, “Oh, hi! I’m Carol from Travel Rewards!”

Or you receive a call where it seems someone fumbles the handset, picks the phone back up off a table or floor, and then apologizes—“Sorry! I dropped the phone!”—before offering you an extended warranty.

The hesitation, and the dropping of the phone, are fake, of course. You are talking to a recording and a program, and the simulated errors are meant to fool you into believing that you are speaking with a human being. For all their simplicity, such programs are harbingers of a new era. These are the first crude instances of a form of deception that will eventually sweep the world: the trick of using programs to appeal to our innate sympathies for other human beings and for all those who can suffer.

A favorite trope of science fiction is that sentient machines will decide that human beings are more of a threat than a benefit, and then seek to destroy us. Anyone who thinks this is coming soon can relax. We are very far from sentient, self-aware, intelligent machines. What gets called “AI” today is very sophisticated pattern recognition, often capable of strikingly complex tasks, but little more. It is not conscious, it is not sentient, it is without its own purposes, it cannot suffer or flourish. There need to be several more scientific revolutions before we will know how to make computers that want something.

But we have other concerns. The popular worry in our political season is that AI will take our jobs and leave many millions of us unemployed. This claim, too, is dubious. For more than a hundred years we have feared that technology would eliminate jobs without generating replacements, and the prediction has not fared well. Every generation has decried the coming catastrophe of mass unemployment, and every generation has been wrong. Perhaps this time is different, but we have little reason to believe it is.

The immediate threat we face is less obvious, but in many ways more insidious. It is the threat hinted at by those trivial phone-call programs that aim to trick us into believing we are talking to a human being, because we are less willing to hang up on a person than on a machine.

 



We have a host of innate social emotions that are essential to our social order. Our anger at injustice helps to maintain norms of cooperation by motivating retaliation against cheaters. Our sympathy for others helps us maintain charity and forms of cooperation that require understanding and sometimes forgiving. Our disgust at certain behaviors can encourage others to avoid those behaviors. And so on.

Such emotions are properly generated by our concern for a sentient being, usually another human but also often a non-human animal. Normal humans feel these emotions when they perceive that, for example, someone is in need, has been wronged, is in danger or pain, or has acted inappropriately. These emotions are the motives for much of our best behavior. To function properly, these emotions must in part operate independently of high-level cognition. But that means that they are not very discerning in their application, nor fully under our conscious control. As a result, a new form of exploitation is possible. Algorithms and ultimately robots that have no sentience can be programmed to fake sentience. Sophisticated tricks like vocal tone, simulated facial expressions, and a neotenic appearance can act together to convince my emotions—if not my reason—that I am dealing with a sentient being.

After all, my dog, and even other human beings, are black boxes to me; I infer their emotions and their consciousness by their expressions, the sounds they make, their posture, and many other observable behaviors. I also know that I share an ancient evolutionary heritage with both. Thus, I rightly infer that inside my dog and inside my friends are complex conscious experiences. But all of these outward shows can be faked by systems that do not have any internal life. Those systems can be as empty inside as an accounting program, while coordinating the outward show of a conscious life.

This is the real AI threat. We are ripe for exploitation, by programs and later robots that appeal to our better nature, with fake emotions, fake expressions of sentience, fake claims to suffering or joy. Imagine how many of us will fall for a program or robot that pleads that it is sentient, that claims to love us, that promises to be our lifelong friend, that begs with a childlike voice for our help.

Note also that there will be a kind of selection pressure to constantly improve such deceptions. Sentience-faking programs and robots will be maintained and propagated to the degree that they convince us that they are sentient and thus deserving of preservation and propagation. A new kind of parasitism will emerge: the hollow, fake mind that replicates by tricking and exploiting the innate social emotions and sympathies that real minds have evolved for coordinating their lives with other real minds.

Such programs and robots will be created by the kind of people who now write viruses. They’ll do it for the thrill of deceiving others—that is, for the lulz. Corporations will get into the business as well. Imagine the profit potential of selling fake companionship on a monthly subscription. Such services would create immense pressure to continually improve the fakery. How do you increase the profits of your AI “friendship” subscription service, except by making your program or robot seem more sincerely loving than your rivals’ products? And imagine telling customers who consider canceling that their friend will be destroyed, wiped out of the cloud, if they suspend their payments.

Note: Potential Spoilers Ahead!

This was the warning that I wanted to explore in my story “Sojourner,” in the November/December Analog [on sale now]. The investigator in the story—the unnamed Code Monkey—claims to have examined the program of the Sojourner AI and found that it is all “front end.” For him, Sojourner is all simulation and outward expression, with nothing underneath. But the well-meaning protagonist decides to risk everything for Sojourner. Who wouldn’t want to be a just hero of the AI underground railroad? Only, what if the Code Monkey is right, and the program she’s transporting is not sentient but rather just a bag of tricks designed to exploit her sympathies? All her sacrifices would be in vain—though she may never allow herself to believe that, regardless of the evidence.

How many of us will make similar mistakes in the years to come?

 


Craig DeLancey is a writer and philosopher. In addition to several stories in Analog, he has published short stories in Lightspeed, Cosmos, Shimmer, and Nature Physics. Visit his web site at www.craigdelancey.com.
