by David Ebenbach
For David Ebenbach, the key to retaining our humanity under threat of AI is to embrace the traits that make us human in the first place—our critical thinking, empathy, and creativity—just as humans have done in a number of classic science fiction stories, and just as the characters do in David’s latest story, “The Underappreciation of Danny White,” from our [November/December issue, on sale now!]
In my story “The Underappreciation of Danny White,” published in Analog’s November/December 2025 issue, the Earth has been taken over by aliens that look like roiling blankets and who keep humans in laser cages and herd us around to make us do heavy labor and observe fake Thanksgivings, all of which has deeply shaken our grip on our humanity.
Frankly, I think the story says a lot about the lives we’re living right now.
Okay—we’re not in laser cages, and most blankets are benign. But I do think we’re already walking the path toward some slower and subtler dehumanizing apocalypses. Certainly in the United States the politics these days are actively designed to make us see each other as sub-humans. And the tough choices starting to pile up as a result of climate change are not likely to bring out our best. But those aren’t even really the issues I’m thinking about right now. What I’m thinking about is generative AI.
I’ve written about this at greater length elsewhere, so I won’t get into all the details, but suffice it to say: the science increasingly shows that the widespread use of generative AI poses serious dangers to our intelligence, social skills, and creativity. (This shouldn’t surprise anyone; if you let someone or something else do important things for you, of course there’s going to be atrophy.) And AI also threatens the environment, employment, privacy, truth itself, etc., etc., etc., but what I’m most concerned about is how ready we are to offload and abandon some of our most fundamental human traits. This way, friends, lies serious trouble.
If there’s one thing apocalyptic science fiction teaches us, it’s that the way to fend off (or come back from) disaster is not to abandon our humanity but in fact to double down on it. In Octavia Butler’s Parable of the Sower, only the protagonist’s keen ability to feel points the way to something better. Compassion and cooperation are saving graces in José Saramago’s Blindness. Anna North’s America Pacifica shows us how facing the complexity of the truth, rather than allowing oneself to slip into complacent inertia, can change everything. In Anthony Doerr’s Cloud Cuckoo Land, people facing disasters across time are buoyed by creativity in the form of an enduring human story. On the other hand, when the world becomes overwhelmingly inhumane—think Cormac McCarthy’s The Road or Orwell’s 1984—there’s no escape at all.
My story tries to take on these same issues, albeit with roiling-blanket space aliens called Overdominators who have enslaved us. Early in the story, the narrator, Aaron, says, “Some days I can only think of myself as a rock-shifter. Not an Aaron at all. Like, who or what is an Aaron now?” He, like the other characters in the story, is now just a worker, a pawn in the Overdominators’ opaque plans for the planet and beyond. They’ve lost themselves. The Earth’s ecosystems are stripped bare, and people lack basic information, control, and the ability to distinguish truth from falsehood or to generate meaningful ideas. (All of which, as it happens, are potential outcomes of generative AI in our actual world.)
Is there any possibility of hope in a scenario like this? I’ll let you read the story to find out what I think. But I can tell you one thing that doesn’t work: abandoning our own minds to follow someone else’s (or something else’s) script.
Whether their intention is to pacify the humans or manipulate them—their motives are always mysterious—the Overdominators in my story force humans to, among other things, enact staged versions of Thanksgiving over and over again, watching old recordings of football games and eating counterfeit versions of familiar foods. This scenario is in fact analogous to the promise of AI: a vast landscape of recycled language and images, plus old ideas that are distorted by bias or polarizing frameworks or discredited theories. And yet, according to recent articles, people are taking advice from AI apps that have been designed to act like therapists, clergy, and even G-d. In other words, instead of relying on our human gifts to work things out ourselves, we are increasingly outsourcing our engagement with meaning and purpose—and we’re outsourcing it to a profoundly unoriginal source.
This way, friends, lies oblivion.
There is, however, one important difference between the apocalypse in “The Underappreciation of Danny White” and the one we’re facing if we throw ourselves into the AI pit: the apocalypse in my story happened all of a sudden one day; the one we’re flirting with will take time to unfold. Let’s use that time to fight back. And let’s fight back by rejecting AI’s offer to make things easier and smoother and dumber. Let’s fight back by holding as tight as we possibly can to our critical thinking, our creativity, our empathy—to everything that makes us human.