by Jo Miles
Jo Miles has a love/hate relationship with algorithms. They know algorithms shape their everyday life in useful ways, but they also know algorithms can express harmful flaws, because their creators, humans, are flawed too. The inherent limitations of algorithms inform Miles’s new story “From the Maintenance Reports of Perseverance Colony, Year 12” [in our November/December issue, on sale now!].
Algorithms are all around us. If you use social media, own a smartphone, or ever search for anything on the internet, there are algorithms and AIs making decisions about what you see and experience.
That’s not a bad thing. On the contrary: we can all be grateful for the algorithms that hide email spam from our inboxes, and I’ve been known to pull up directions on my phone even for familiar destinations so an algorithm can route me around traffic.
Will the app really give me the fastest route? I have no way of knowing for sure, but it certainly has access to more data than I do.
In my story “From the Maintenance Reports of Perseverance Colony, Year 12,” two AIs are genuinely trying their best to save their colony from disaster, but they’re limited by their programming. They literally can’t do the jobs they need to do, jobs their programmers never anticipated. In the real world, algorithms and AIs (terms I’m going to use semi-interchangeably here, because the lines, especially in SF, are blurry) aren’t benevolent, and they aren’t always beneficial: their shortcomings can do real harm.
In 2014, Facebook rolled out a feature called Year in Review, which prompted people to share a slideshow of the past year with the message: “It’s been a great year! Thanks for being a part of it.” Presumably that team at Facebook did have a great year, but lots of Facebook users didn’t, which made the message seem glib at best and cruel at worst. Facebook quickly backpedaled to fix it.
That wasn’t an issue with an algorithm. It was a careless but seemingly well-intentioned mistake by a small group of humans at one tech company, and it had an unfortunately large reach. It’s only human to make mistakes. But algorithms are created by humans, too, and without special precautions, they reflect the biases of their creators—even those with the best intentions.
(I’m making a big assumption here, talking about software developers doing their best to make good, beneficial algorithms. Of course, not all technology companies have good intentions, as Facebook whistleblower Frances Haugen showed recently through the massive set of documents she leaked, revealing how Facebook repeatedly and knowingly made profit-driven decisions at the expense of the common good. Deliberate misuse or ill use of technology is a whole other can of worms, one that plenty of science fiction has delved into.)
Consider facial recognition, a technology used for everything from unlocking our phones to law enforcement. Numerous studies have found that, across the board, major facial recognition algorithms have far lower accuracy when identifying people with darker skin, especially darker-skinned women. That’s not surprising, considering that these algorithms are trained on data sets of faces that skew white and male and are developed by predominantly white, male teams. But when those biased, unreliable algorithms are used for surveillance or policing in communities of color, the implications are alarming.
Algorithms are being used to drive decision-making across the economy. They help identify the best candidates to hire, decide who gets to buy a home or rent an apartment, and even determine who needs (and gets) access to health care. Not only is there bias from the programmers developing these algorithms; there is also bias in the underlying data used to train them, the product of decades of systemic bias in these fields. Where bias exists against women or people of color, algorithms will reinforce that bias unless they’re specifically and carefully designed not to.
Increasingly, algorithms are even framing our perceptions of reality by deciding what content we see. Whether a social media feed shows you a picture of your cousin’s new baby, a personality quiz, an insightful long-form news article, or a conspiracy theory, it’s all driven by algorithms. Getting our news custom-tailored to us can help us find information we’re interested in, but it can also isolate us in what Eli Pariser calls “filter bubbles,” in which the news we see increasingly reflects and reinforces our own biases. It can be hard to break out of that bubble even when we know it’s happening to us, and worse, we may not even be aware that content is being filtered for us. That’s not the only reason social media platforms are having a horrendous time rooting out conspiracies and misinformation, but it’s a big one, and it affects us all.
Perhaps the subtlest danger here is that algorithms give us a false sense of impartiality. It’s easy to trust the answers a computer gives us, even when we know those computers aren’t perfect. I have a soft spot for warm-and-fuzzy science-fictional AIs like the ones in my story, but real-life AIs aren’t sentient, and they’re only as helpful and well-intentioned as we make them. And when discrimination is buried in algorithms, behind the scenes, it can be hard to find and root out.
We can build better, less biased algorithms and AIs. Being aware of the problem is a good first step, but it’s not enough. Diversifying the tech industry to include people of all genders, races, and ethnicities, in every department and at every level, is key, as are in-depth training and robust processes within tech companies to identify and eliminate bias. Groups like the ACLU are also advocating for better oversight of AI to protect marginalized communities.
If we’re going to use AIs and algorithms to drive our society and economy, let’s make sure they’re equitable, unbiased, and actually helping us in the ways we want.