We like to believe that math is objective. We’ve been conditioned to think that if an answer comes from a computer—whether it’s a search result, a “suggested friend,” or a credit score—it must be the cold, hard truth. We treat the algorithm like a transparent window into reality. But as danah boyd argues in The Structuring Work of Algorithms, that window is actually a mirror. It doesn’t just show us the world; it reflects the biases, values, and messy human decisions of the people who programmed it in the first place.
There is a “ghost” in the machine, but it’s not the spooky one that haunts cemeteries or abandoned psych wards. It’s us—you and me.
In the world of digital rhetoric, we often talk about the “rhetor”—the person making the argument. Traditionally, we look for a speaker at a podium or a writer with a pen. But boyd pushes us to see the algorithm itself as a rhetorical agent. When an algorithm decides what is “relevant” enough to appear at the top of your feed, it is making a value judgment. It is persuading you that certain information matters more than the rest. This isn’t just a technical calculation; it’s a rhetorical act of gatekeeping. By choosing what to show and what to hide, the algorithm structures our reality before we even have a chance to engage with it.
The danger, boyd points out, lies in the “black box.” Because we can’t see the code, we assume it’s “magical” or impartial—tailored specifically to us.
But code is just a set of instructions written by humans who live in a biased world. When we automate a process—like hiring, policing, or content moderation—we aren’t removing bias; we are scaling it. We are taking a single human’s “shorthand” for what a “good” candidate or a “safe” neighborhood looks like and turning it into an invisible, unquestionable rule. This is what boyd refers to as “techno-legal solutionism”: the idea that we can fix deep-seated social problems with a few lines of code and a new regulation. It sounds efficient, but it often ends up being a sophisticated way of ignoring the problem altogether.
So, how do we fight a ghost we can’t see?
boyd’s answer is algorithmic accountability. We have to stop treating technology like a miracle and start treating it like a sociotechnical system. This means demanding to know who built the system, what their goals were, and who gets hurt when the system “works” exactly as intended. It requires a new kind of literacy—one that looks past the sleek interface and asks: What is this code trying to make me believe?
As a student of literature, I’ve spent years deconstructing the “unreliable narrator.” In the digital age, I’ve realized that the most unreliable narrator of all might be the algorithm. It tells us a story about our world every time we unlock our phones, and it does so with a confidence that masks its own flaws. Standing in the “shrine” of danah boyd’s work, the message is clear: the machine may be powerful, but the ghost—the human responsibility behind the code—is what actually matters. We cannot let the “magic” of technology excuse us from the hard work of being critical, rhetorical, and deeply, intuitively, perfectly human.