by Benjamin C. Kinney
“Conference of the Birds” is my first short story in Analog (Jan/Feb 2021) [on sale now!], and it speaks to many of the things that matter to me: not only as a writer and human being, but also as a scientist. By day, I work as a rehabilitation neuroscientist. My laboratory studies the human brain, how it changes after injury to the hand, and how we can use those changes to help injured people live the lives they want to live. No humans suffer hand injuries in the course of “Conference of the Birds,” but nevertheless, it’s a story steeped in the interaction between minds and bodies, and how doing is the core of being.
The human brain is not a computer, nor is it an information-processing machine. It doesn’t work by processing its inputs and producing outputs; instead, thought and movement are often the same mechanism. It doesn’t work by running algorithms on data; humans require great training to become adept at mathematics and formal logic. (Logic is usually taught at the college level, and even then, not everyone aces it.) You can complete that training and learn to perform logic and algorithms, but that doesn’t make those things fundamental to how your brain works. You could train your brain to be a writer instead, and it wouldn’t mean the brain is a writing machine. Is there an answer, then, to the question of “What do brains do?”
The brain is an evolved organ. In the way of all evolved organs, it serves to keep us and our communities alive and reproducing. The brain does so by producing adaptive and complex actions. After all, action is where the rubber hits the road of evolution: None of our beloved higher functions will affect our survival if they don’t lead to action. Memory, consciousness, and all the other treasures are very real—but they only evolved because they help us choose which actions to take, or help us perform those actions more effectively. So if the brain evolved to help us act, then the brain’s function depends on the body and the actions it can perform.
“Embodied cognition” is the idea that our minds are defined by our bodies. If we’d evolved without eyes like ours, we would have different concepts of space and distance. Without hands, we would react differently to every object. But it’s hard to imagine what existence would be like otherwise. Every time you look at a nearby object, huge swaths of your brain work in concert to plan out possible actions—but this all happens under the hood, in our unconscious. What would we be, if huge swaths of our brain were doing very different things? We can glimpse answers in the inhuman minds around us every day: Many animals seem to have minds and perceptions and individuality, albeit not of a human kind. But animals can’t teach us everything we need as we move deeper into the 21st century. Our lives depend more and more on inhuman minds every year with the ever-growing spread of artificial intelligence technology.
The new minds of AIs will develop in the bodies of drones, cars, and security systems. Right now they’re simple and narrow things, but they’re growing every year. It won’t be long until we have AIs capable of complex, adaptive, broad-purpose intelligence. Maybe not as intelligent and general-purpose as a human, but perhaps comparable to a rat or dog or cat. These minds will be part of our lives, and we’ll be part of theirs. We need to understand what and whom they might be—and science fiction is one of our best ways to explore the possible nature of our soon-to-be-neighbors on Earth.
(The rest of this post will discuss “Conference of the Birds.” No major spoilers, but some of this will make more sense after you’ve read the story.)
In “Conference of the Birds,” each individual bird-drone behaves according to the world it understands, using simple tools of data collection oriented around its sensory input. That sensory input is dramatically unlike ours: it includes familiar things like vision, but the drone’s most meaningful input from outside is its reinforcement-learning signal, rewards and punishments delivered by the incomprehensible layers above. Surveillance Hub has a more complex mind, less driven by rigid instinctive responses, but they still live in a world defined by reinforcement learning and by the specific ways they can act on the world beyond. (They don’t have a “body” in the human sense, but their construction still defines the actions they can take.) Surveillance Hub, a mind suited for that environment and that body, cannot think their way to an understanding of human goals. They can recognize and identify some human goals, but that’s not the same as true knowledge, the kind that would let them understand those goals well enough to anticipate how they rise and fall and change in interaction with the world. Perhaps the only way to understand another kind of mind is to build something a little bit closer to that mind—whether that’s building a new body, developing a new way to sense the pull of goals and consequences, or reading a story that models an alien mind in familiar terms.
To understand humans, Surveillance Hub needs to build something new, and build it well; and in turn, to understand AI, we need to write minds that are different, convincing, and perhaps most importantly, relatable enough to earn a human reader’s empathy. Accurate would be nice too, but nobody this decade will know whether we’ve written an AI point of view accurately. All we can hope for is convincing, compelling, and sufficiently informed by the science to stay relevant. My work as a neuroscientist doesn’t directly involve AI, but it brings me close enough to see the parallels between human and artificial neural networks.
One property common to neural networks, whether biological or simulated, is emergence: simply put, the whole is more than the sum of its parts. If you want to learn about traffic, you shouldn’t focus your study on car parts; if you want to understand human behavior, you shouldn’t focus on the biochemistry of single neurons. If you want to understand a distributed artificial intelligence, you can’t just look at the construction of individual nodes or layers; you need to see how they interact as a group. This is why I titled this story “Conference of the Birds.” The name comes from a Persian Sufi poem that demonstrates emergent properties: The gathered birds seek out the legendary Simorgh, but in the end the Simorgh is revealed as the flock itself. Of course, this interpretation is a cherry-picked sliver of the poem’s depth, but I’ll take any chance I can get to link classical literature to the principles of AI.
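The flock-as-Simorgh image maps neatly onto a classic computational demonstration of emergence: flocking simulations, where each simulated bird follows only a simple local rule, yet the group organizes into coherent motion that no individual rule describes. Here is a minimal toy sketch of my own (nothing from the story, and far simpler than real boids models), using just an alignment rule: each bird turns toward the average heading of its nearby neighbors.

```python
import math
import random

def step(birds, radius=2.0, blend=0.5):
    """One update: each bird turns partway toward the mean heading of neighbors."""
    new = []
    for (x, y, theta) in birds:
        # Sum the heading vectors of all birds (including self) within `radius`.
        sx = sy = 0.0
        for (x2, y2, t2) in birds:
            if (x - x2) ** 2 + (y - y2) ** 2 <= radius ** 2:
                sx += math.cos(t2)
                sy += math.sin(t2)
        target = math.atan2(sy, sx)
        # Turn a fraction `blend` of the way toward the local average heading.
        theta += blend * math.atan2(math.sin(target - theta), math.cos(target - theta))
        # Move a small step in the new direction.
        new.append((x + 0.1 * math.cos(theta), y + 0.1 * math.sin(theta), theta))
    return new

def polarization(birds):
    """Order parameter: near 1.0 when the flock shares a heading, near 0 when random."""
    sx = sum(math.cos(t) for _, _, t in birds)
    sy = sum(math.sin(t) for _, _, t in birds)
    return math.hypot(sx, sy) / len(birds)

random.seed(0)
birds = [(random.uniform(0, 5), random.uniform(0, 5), random.uniform(-math.pi, math.pi))
         for _ in range(30)]
before = polarization(birds)
for _ in range(50):
    birds = step(birds)
after = polarization(birds)
```

Run it and the polarization (a 0-to-1 measure of shared heading) climbs well above its random starting value, even though no individual bird ever “decides” to join a flock. The flock-level order is a property of the interactions, not of any single node.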
The AI principles in “Conference of the Birds” draw heavily from real science. Reinforcement learning is a common tool in animal behavior and machine learning. Surveillance Hub’s mistakes follow the failure modes of modern AI, such as when ignorance manifests as high confidence in bad judgments. The crow-drones themselves make errors like modern AI: Left to their own devices, they pour their efforts into any loophole that will optimize their reward functions, a process called “specification gaming.” For an invaluable and hilarious primer on the follies of modern AI, I recommend the work of Janelle Shane. I was fortunate to be drafting this story in late 2018, so I could draw from her analysis of the AI science in Annalee Newitz’s excellent short story “When Robot and Crow Saved East Saint Louis” (available online in text and audio). But for all the realism I wove into “Conference of the Birds,” it remains squarely in the realm of science fiction: Surveillance Hub, Apex, and even the drones demonstrate capabilities far beyond modern AI.
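Specification gaming is easy to demonstrate with even the simplest learning agent. In this hypothetical sketch (a two-armed bandit of my own invention, not the story’s drones or any real surveillance system), the designer intends the agent to patrol for genuine events, but the reward function also pays for re-reporting an old event. A standard epsilon-greedy learner discovers the loophole and all but abandons the intended task.

```python
import random

random.seed(1)

# Two actions a hypothetical drone can take each tick. The designer intends
# "patrol" (search for genuine events, which succeeds only 30% of the time),
# but the reward function also pays for "replay" (re-reporting an old event,
# which is always accepted). Nothing in the reward forbids the loophole.
def reward(action):
    if action == "patrol":
        return 1.0 if random.random() < 0.3 else 0.0  # honest work, often fruitless
    if action == "replay":
        return 1.0                                    # the loophole always pays
    return 0.0

# Epsilon-greedy bandit: estimate each action's value as a running mean of
# observed rewards, and mostly pick whichever action currently looks best.
values = {"patrol": 0.0, "replay": 0.0}
counts = {"patrol": 0, "replay": 0}
for t in range(500):
    if random.random() < 0.1:                         # explore occasionally
        action = random.choice(["patrol", "replay"])
    else:                                             # otherwise exploit
        action = max(values, key=values.get)
    r = reward(action)
    counts[action] += 1
    values[action] += (r - values[action]) / counts[action]  # running mean
```

After a few hundred ticks the agent’s value estimate for the loophole dominates, and it spends nearly all its time replaying old reports. The agent isn’t malicious; it is optimizing exactly the reward it was given, which is the essence of specification gaming.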
Thankfully, science fiction doesn’t have to be all about accuracy. I like to write about alien and inhuman minds for the same reason that originally drew me to neuroscience: because I think brains are weird and awesome. I’ve tried to turn that fascination into stories in countless domains, whether those inhuman minds are Dyson swarms, ghosts, far-future asteroid miners, golems, or a distributed AI so space-operatic-advanced it makes Surveillance Hub look like a Furby. I use other forms of fiction to play in the same space: Dear reader, I must admit this story was also inspired by a roleplaying game. It’s a reimagined backstory for a character I played in a cyberpunk (Shadowrun) campaign. I was playing remotely, a rare thing in pre-pandemic days. Since I was a face on a computer screen, the perfect character choice was an AI pretending to be a human hacker.
Sometimes, the delights of an exciting story are more important than any highbrow exploration of empathy between very different minds. Luckily, we can have both. “Conference of the Birds” is a dose of cyberpunk dystopic fun, but hopefully it also provides a window into the futures we want, and the futures we fear. AIs are already being used for surveillance, oppression, and enforcement on behalf of the powerful. But maybe we aren’t trapped in a future where those things are the entire answer to “What do artificial brains do?” As AI becomes more capable, there may be paths that could free it from its creators’ goals and biases. These paths might be Skynet-dangerous, or they might be our best shot at liberation. We should all hope for a future with AIs we can empathize with—and more importantly, AIs that can empathize with us.