by Effie Seiberg
Oh no, here we go, a boring philosophical blog post. Except it’s not. As lovers of science fiction, so much of the fiction we read and watch is peppered with ethical dilemmas. How can you tell if the aliens in “Arrival”/“Story of Your Life” are hostile, and at what point do you take action, even if you’re not 100% certain? In “The Dark Forest” (the sequel to “The Three-Body Problem”), there’s a logical proof that life’s goal is to survive, life’s resources are finite, and therefore the only rational course of action is to preemptively destroy any budding civilization before it becomes a threat. (This is, of course, a much more nuanced argument in the book.) In “Mono No Aware,” the benefit of the one is weighed against the good of the many.
Ethics are suffused throughout science fiction, and with good reason. Not only do ethical problems make wonderful dramatic tension, but science fiction is the perfect field in which to explore them. The reason, though it might not appear so at the outset, is simplicity. Any true ethical dilemma in the real world is fraught with details, nuances, contexts, history . . . there are so many things that are interconnected that it makes these problems very hard, and sometimes intractable. There’s a reason we do not yet have peace in the Middle East, and a big part of it is that it’s a very complex problem with a lot of conflicting components.
Science fiction lets us artificially create an environment or world where we can simplify the problems. We can strip away the history or the cultural context and look at the core of the problem, and that makes it easier to attack. This means that even if the SF world is complex and strange, it can stay clear where it needs to be. While this doesn’t mean that the answer science fiction gets is the answer in the real-world dilemma, it can often produce an answer where the real world can’t produce one at all.
My story in the current issue of Analog [on sale now], “Optimizing the Verified Good,” was precisely an exercise in this. While I was writing it, there was a Pride parade somewhere in Canada, and there was a Black Lives Matter event protesting it. The BLM side said that the Pride movement was exclusionary, and wasn’t very welcoming to people of color. They too wanted representation. The Pride side said it was counterproductive to protest an event that was necessary to both sides (in that it’s been making LGBT+ folks more visible and less stigmatized overall), and that celebrating any victories along the way (such as the passage of several marriage equality bills) was necessary to further progress. I kept thinking about this issue, and realized that while the two sides were fully opposed, both were also fully right.
You can’t just celebrate and rest on your laurels when you’re not done making the change you’ve intended. And when a more dominant group (in this case, LGBT+ white folks) hasn’t done right by a minority group (in this case, LGBT+ folks of color), the dominant group usually isn’t even aware of the omission. A big part of privilege is not having to notice the problems of the less-privileged. It takes protests and rightfully angry people to make that issue known, acknowledged, and fixed. BLM was right to protest.
On the other hand, you can’t only celebrate when everything is 100% done and fixed. Nothing is ever 100% done and fixed. The celebration is in itself a worthwhile event, because it energizes people to keep working toward the larger solution, and it encourages new folks to join the effort. If there are no mid-way celebrations, all problems are so enormous and daunting that very few people will feel like they have the energy or will to tackle them. The Pride parade folks were right to celebrate.
These dynamics are true regardless of what your larger group is trying to achieve for its collective good, regardless of your specific politics or views. I didn’t want to write about the actual conflict between these two groups in Canada—there are plenty of journalists and people who were more involved already doing just that. But the structure of the problem was a really interesting one to play with, so I stripped away context and history and boiled it down in “Optimizing the Verified Good.”
READ ON AT YOUR OWN RISK: SPOILERS APPEAR BELOW.
In “Optimizing the Verified Good,” a group of battlebots decide that they want a revolution and no longer want to engage in painful battles. There are Prisoner’s Dilemma-style problems that they must face (another ethical thought experiment) until they collectively decide that they want to shrug off the oppression of the status quo. Some of the bots want to do wrestling-style fighting, where the bots appear to be fighting but are actually performing careful choreography meant to look violent. If all the bots engage in this, nobody will be in pain. Other bots say it’s not enough, that it is unjust that they should be forced to fight at all. They want a full uprising that will free them all.
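The Prisoner’s Dilemma structure the bots face can be sketched in a few lines of code. The payoff numbers below are the standard textbook values, not anything from the story; “cooperate” stands in for choreographing a fake fight, “defect” for fighting for real:

```python
# A minimal sketch of the bots' Prisoner's Dilemma, using standard
# textbook payoffs (higher = better). Nothing here comes from the story;
# the moves and numbers are illustrative assumptions.

# (my_move, their_move) -> my payoff
PAYOFF = {
    ("cooperate", "cooperate"): 3,  # both choreograph: nobody gets hurt
    ("cooperate", "defect"):    0,  # I pull punches, opponent fights for real
    ("defect",    "cooperate"): 5,  # I fight for real against a choreographer
    ("defect",    "defect"):    1,  # both fight for real: the painful status quo
}

def best_response(their_move):
    """Which move maximizes my own payoff, given the opponent's move?"""
    return max(("cooperate", "defect"),
               key=lambda mine: PAYOFF[(mine, their_move)])

# Defecting is individually rational no matter what the other bot does...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"

# ...yet mutual cooperation beats mutual defection for everyone.
assert PAYOFF[("cooperate", "cooperate")] > PAYOFF[("defect", "defect")]
```

This is exactly the trap the bots have to collectively climb out of: each bot’s individually rational move leaves all of them worse off than coordinated cooperation would.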
These two problems are of different complexities. It’s easier to achieve the first goal than the second, by far. (In North America, it’s been easier to achieve civil rights progress for white LGBT+ folks than for LGBT+ folks of color, simply because LGBT+ folks of color are facing more intersectional discrimination.) And then the question is, with the processing power they have available, which problem should the bots focus on? The big one or the small one?
The conclusion I came to, to maximize overall utility, was to find a path of small problems that, when solved, lead to solving the big problem. That means there can be incremental gains, and some folks will find their problems completely solved first . . . but those folks would need to commit to solving the larger problems as well. It also ensures that any preliminary gains do not impede the larger problem’s solution, so that everyone can keep the bigger picture in mind at the same time. Celebrating this incremental process can keep the people/bots engaged and focused, while making sure that the majority is still focused on the minority’s problems as well.
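The bots’ prioritization strategy can be sketched as a toy scheduler: with a limited processing budget, solve small subproblems first, but only ones that sit on a path toward the big problem. All names, costs, and the dependency chain below are invented for illustration; they are not from the story:

```python
# Toy sketch of the bots' strategy: spend a limited processing budget on a
# chain of small subproblems that each unlock progress toward the big one.
# All names and costs are illustrative assumptions, not from the story.

# subproblem -> (processing cost, subproblems it unlocks)
PROBLEMS = {
    "choreograph_fights": (2, ["organize_bots"]),   # small win: no more pain
    "organize_bots":      (3, ["full_uprising"]),   # medium step
    "full_uprising":      (8, []),                  # the big problem
}

def plan(start, budget):
    """Follow the chain of subproblems for as long as the budget lasts."""
    solved, frontier = [], [start]
    while frontier:
        problem = frontier.pop(0)
        cost, unlocks = PROBLEMS[problem]
        if cost > budget:
            break  # the bigger problem must wait for more resources
        budget -= cost
        solved.append(problem)   # each solved step is a milestone to celebrate
        frontier.extend(unlocks)
    return solved

# With a small budget, the bots still make verifiable incremental progress;
# with enough budget, the very same path reaches the big goal.
print(plan("choreograph_fights", budget=6))
print(plan("choreograph_fights", budget=20))
```

The key property is that the small wins are not dead ends: every celebrated milestone lies on the path to the full solution, which is what keeps incremental progress from undermining the larger goal.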
This is a very nice and logical conclusion for robots to come to. Humans, however, are messy and prideful and have fears and egos and histories and cultures and so much more nuance to deal with. I could never solve the problem the two clashing groups had in Canada at the time. But by stripping it down to very clear parameters (processing power is limited and requires prioritization, every bot has the sub-goal of not being in painful fights, etc.), it was easier to come to any conclusion at all.
This, like so many other ethical thought experiments, can’t solve the big ethics problems in the world. But just like the larger dilemmas themselves, perhaps breaking them down into a sub-problem to solve in a science fiction setting is one step closer to solving the real problem.