By Nick Wolven
Author’s Note: This post discusses technical domains in which I am far from being an expert. Read at your own risk!
A longstanding idea in science fiction is the hope—or fear—that we might one day learn to scan human minds into machines. Because the brain is taken to be the seat of the mind, this process is commonly called whole brain emulation, or WBE. It has lately begun to attract attention from serious scientists, who hope that whole brain emulation will provide a shortcut to genuine artificial intelligence. After all, if you have a computer with a human mind inside, you’ve created, by definition, an intelligent machine.
What happens next is anyone’s guess. (My story “Lab B-15,” in the March/April issue of Analog on sale now, examines one frightening possibility.) As an untutored layman, I frankly doubt we’ll ever get whole brain emulation to work. My reasons are similar to those put forward by another set of scientists, who argue that our mental states depend in fundamental ways on our bodily states. The details can be dauntingly complex, but the essential insight is that bodily states give rise to the experiences we call “feelings,” and that these feelings are essential to our thinking processes. This way of describing the mind is often given the label “embodied cognition.”
The rigorous case for embodied cognition is subtle, and I won’t try to gloss it here. By way of a rough introduction we might look at hunger. In crude terms, hunger can be considered a signal sent by the body to the brain. But we experience hunger in subtler ways—as fatigue, as irritation, as anticipation, as a motivational state, as a craving for certain kinds of food, and so on. The same goes for the absence of hunger, which can manifest as revulsion to food, satiety, or simple indifference.
Either way, these feelings affect our thoughts and decisions, including not only the foods we choose to eat, but how we structure our time, our social lives, our relations with other animals, and so on. Even when we aren’t hungry, we remember, in a faded way, what it’s like to be hungry, and this also biases our decisions.
The state of hunger might be grounded in nutrient depletion. But the feeling of hunger pervades everything we do.
What would it be like to lack hunger? Certainly it would cause us to think in different ways. Now imagine lacking all such feelings—anger, pleasure, pain, fatigue, excitement—and you begin to glimpse the case for embodied cognition.
To a hardline believer in whole brain emulation, embodied cognition doesn’t seem like much of an obstacle. After all, these various bodily signals are just input to certain parts of the brain, which is where the real thinking happens—including that mysterious phenomenon we call consciousness. Some of the signals come in chemical form, via the blood, and some arrive via the nerves as electric charges (as always, the details are complex). Whatever the case, we have two options.
We can try eliminating a given signal and see what happens. Lots of people get by without eyesight, or hearing, or even hunger. They’re still conscious, still fully human, still people. If you’re a virtual person with no body, you might be better off without a sense of hunger!
Or, if a bodily signal proves essential to mental functioning, we can bite the bullet and add it to our simulation. In the view of WBE enthusiasts, this second approach isn’t terribly taxing. Events taking place inside the brain are much more involved, they argue, than the signals going in and out of the brain. And computers are improving by leaps and bounds. If we ever scale the mountainous challenge of whole brain emulation, surely we’ll climb the foothill of spinal-cord emulation, too. And once we have a working brain simulation chugging away, how hard can it be to add in a module for tracking energy needs—a “hunger app,” in effect?
Personally, I think it will be pretty hard. The basic reasoning of the WBE boosters doesn’t bother me. The trouble comes when we start to think of the mind as a dynamic system instead of just a complex structure. The sensation of hunger, as we experience it, isn’t a crude counter at the edge of consciousness, like a battery gauge in the corner of a phone screen. It’s a system of flows and interactions, affected by blood glucose, body temperature, conditions in the gut, and other factors.
Even if we focus on the bottlenecks where signals flow between body and brain—the spinal cord, the optic nerve, the major blood vessels, etc.—we still have to vary the signals over time in a way that mimics bodily behavior. And since the body interacts, in turn, with the environment, that adds a new layer of complications: the simulated bodily states ought to represent plausible environmental conditions.
Such systemic challenges can quickly become overwhelming. A simple example will show why.
Let’s look at an elementary system: a thermostat regulating temperature in a room. Say our thermostat has a couple of mechanical switches that carry out simple operations: if the temperature is above 76 degrees, turn off the heat; if the temperature is below 70 degrees, turn on the heat; if the temperature is between 70 and 76 degrees, do nothing.
So far, so basic. The design of such a thermostat can be summed up in three simple rules!
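In fact, the three rules fit in a few lines of code. Here is a minimal sketch (the function name and the use of Fahrenheit are my own illustrative choices, not anything prescribed by a real thermostat):

```python
def thermostat_rule(temperature, heat_on):
    """Return the heater's new state, given the current temperature.

    Implements the three rules above: off above 76 degrees,
    on below 70 degrees, unchanged in between.
    """
    if temperature > 76:
        return False   # too warm: turn the heat off
    if temperature < 70:
        return True    # too cold: turn the heat on
    return heat_on     # in the dead band: do nothing
```

Note that the function only maps a reading to a decision. Everything interesting about a working thermostat lies elsewhere, in where that reading comes from.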
But that doesn’t give us a working simulation. To get one, we need to supply input over time. A real thermostat has a sensor that tracks ambient temperature changes. So we can either find a way to hook our virtual thermostat to a real thermometer, or we can try to simulate the information a thermometer provides.
Option number one is an engineering challenge. But option number two is a creative challenge. How do we go about simulating a realistic thermometer?
One way is to add a clock to the simulation, then use the clock to regulate the behavior of both the sensor and the control mechanism. We can set up our simulation so that the thermostat checks the thermometer at every tick of the clock, switching on or off according to the rules above. The virtual thermometer then obeys these rules: if the heat is set to “on,” temperature rises over time. If the heat is set to “off,” temperature falls.
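The clock-driven loop just described might be sketched like this. All of the numbers here (a half-degree drift per tick, the starting temperature) are made-up illustrations, not measurements of anything real:

```python
def simulate(start_temp, ticks, drift=0.5):
    """Run a toy thermostat-plus-thermometer simulation.

    Each tick of the virtual clock, the thermostat reads the virtual
    thermometer and applies the three rules; the virtual temperature
    then rises (heat on) or falls (heat off) by `drift` degrees.
    Returns a list of (temperature, heater_state) pairs.
    """
    temp, heat_on = start_temp, False
    history = []
    for _ in range(ticks):
        # Thermostat rules: off above 76, on below 70, else unchanged.
        if temp > 76:
            heat_on = False
        elif temp < 70:
            heat_on = True
        # Virtual thermometer: temperature drifts with the heater state.
        temp += drift if heat_on else -drift
        history.append((round(temp, 1), heat_on))
    return history
```

Run with reasonable numbers, the temperature settles into a sawtooth that stays near the 70–76 degree band, which is roughly what a real thermostat produces.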
This gives us a rough approximation of the behavior of a real thermostat. But two points need to be made.
First, the resulting simulation is absurdly crude. Once implemented, it does nothing but run through a simple cycle. That might be fine as an approximation, but real temperature changes are more various.
Second, the behavior of the simulation depends on the numbers we plug in. If we set the starting temperature to two million degrees, have the virtual clock count off one minute for every day in the real world, and have the virtual temperature drop by one degree every minute, we’ll be dead before we get to see our virtual thermostat do anything. If we have the clock count off one hour for every real-world millisecond, set the starting temperature to 69 degrees, and have the temperature change eight degrees every hour, the system will oscillate between two states at a rate too fast for human perception.
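The second scenario is easy to verify. A self-contained snippet (same toy rules as before, with my illustrative 8-degrees-per-tick figure) shows the system snapping into a two-state oscillation immediately:

```python
def step(temp, heat_on, drift=8):
    """One tick of the toy thermostat with an 8-degree swing per tick."""
    if temp > 76:
        heat_on = False
    elif temp < 70:
        heat_on = True
    return temp + (drift if heat_on else -drift), heat_on

temp, heat_on = 69, False  # start just below the lower threshold
states = []
for _ in range(6):
    temp, heat_on = step(temp, heat_on)
    states.append(temp)
# The temperature ping-pongs between 77 and 69 forever.
```

The rules haven't changed at all; only the numbers have. That is the point: the same three-rule design yields plausible behavior or nonsense depending entirely on the data we feed it.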
These numbers sound silly. But that’s because we already understand how thermostats are supposed to work. Even so, building a more realistic simulation—a model of detailed, second-to-second temperature changes in a typical house in Fairbanks, Alaska, say—would be hugely challenging.
The key point is that in trying to create a convincing simulation, we’ve moved from writing a few rules to entering a set of data. And that changes the nature of the game, because every variable has a potentially infinite range of values. The full set of all values, projected over every state of the simulation, is enormous.
There are ways to cut it down, but those methods bring challenges of their own. If we supply our simulation with real-world data—using a real thermostat in Fairbanks, Alaska as a starting point, for instance—we have to know what kind of data to collect, how to go about collecting it, and how much to gather.
If we fall back on experimentation—punching random numbers into the simulation, seeing what happens, and trying again—we have to know what kind of behavior we want to see, how to check if we’re getting close, and what kinds of changes to make if things go wrong.
And if we try to figure out the rules that govern the behavior of the whole system—how temperature varies with the day/night cycle, time of year, weather, local environment, etc.—we have to do a lot of old-fashioned science.
And we’re still talking about a dinky virtual thermostat! For a system as complicated as the human mind, these problems balloon to alarming proportions.
Let’s jump back to our starting point. We noted that there are two basic ways to get a virtual thermostat to work: by incorporating a real temperature sensor, or by building a virtual temperature sensor.
As a thermometer supplies input to our imaginary thermostat’s control switch, so a body supplies input to a brain. If we somehow manage to build a virtual brain, then, we’re faced with a difficult choice. We need to either plug it into some kind of real body, or simulate the input a body would provide.
The first option presents a massive technical challenge. I’ll leave it to readers to meditate on the details. Significantly, this solution isn’t especially appealing to boosters of whole brain emulation, because having a conventional body cancels out a lot of the advantages of having a simulated mind. What’s the point of decocting the spirit out of the flesh if we’re just going to pour it right back in?
But the second option is, in my view, even less promising. The problem isn’t just that our understanding of the mind is so limited. The problem is that improving our understanding of the mind depends on types of expertise I don’t think we’re close to having. Studying the realtime flow of information between a living brain, its body, and its environment isn’t just methodologically challenging. It can be ethically troubling. And how much information will we have to collect? How sensitive do our observations have to be? I’m not convinced we have good ways of estimating the scale of the problem.
Some members of the WBE crowd, if I understand their view, think that with enough computing power, we can use controlled experimentation to make up for our deficits in understanding. So what if we don’t know what kind of input to give our virtual brains? No biggie: we’ll just code our simulations to run very quickly, or run a huge number of them side by side, and refine our models until we get the results we’re looking for, somewhat as the virtual humans do in Cory Doctorow’s novel Walkaway.
This kind of thinking comes close to treating Moore’s law as a magic spell. It’s true that computers are getting better all the time. And it may be that the amount of information flowing through a typical mind isn’t overwhelming in absolute terms. But if we don’t know the rules for structuring that information—if we lack the requisite scientific knowledge—the idea that we can use raw computation to guess our way to success strikes me as wishful. It would be like trying to open a corrupted file, then systematically changing bits in the hope of getting better results—but without knowing quite what results we wanted to see.
This isn’t to say whole brain emulation will never happen. Only that its feasibility depends on a giant leap in scientific knowledge, not a steady increase in processing power. What could bring about such an epochal scientific advance? One possibility is the development of genuine artificial intelligence, a significant challenge in its own right. But that’s a puzzler for another post.