Dr. Steve Bellovin is professor of computer science at Columbia University, where he researches “networks, security, and why the two don’t get along.” He is the author of Thinking Security and the co-author of Firewalls and Internet Security: Repelling the Wily Hacker. The opinions expressed in this piece do not necessarily represent those of Ars Technica.
Newly elected Rep. Alexandria Ocasio-Cortez (D-NY) recently stated that facial recognition “algorithms” (and by extension all “algorithms”) “always have these racial inequities that get translated” and that “those algorithms are still pegged to basic human assumptions. They’re just automated assumptions. And if you don’t fix the bias, then you are just automating the bias.”
She was mocked for this claim on the grounds that “algorithms” are “driven by math” and thus can’t be biased—but she’s basically right. Let’s take a look at why.
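Before digging in, a minimal sketch of the "automating the bias" point is useful. All of the data and names below are hypothetical: a toy "algorithm" is trained on historical hiring decisions in which human reviewers favored one group, and it dutifully reproduces that disparity for new candidates.

```python
from collections import defaultdict

# Hypothetical historical records: (years_experience, group, hired_by_human).
# The human decisions favored group "A" over group "B" at equal experience.
records = [
    (5, "A", True), (5, "B", False),
    (3, "A", True), (3, "B", False),
    (8, "A", True), (8, "B", True),
]

def train(records):
    """Learn the majority human decision for each (experience, group) pair."""
    votes = defaultdict(list)
    for exp, group, hired in records:
        votes[(exp, group)].append(hired)
    return {key: sum(v) > len(v) / 2 for key, v in votes.items()}

model = train(records)

# Two equally experienced candidates get different automated outcomes,
# because the model faithfully encodes the biased training labels.
print(model[(5, "A")])  # True
print(model[(5, "B")])  # False
```

Nothing in the math is "biased" on its own; the model is simply an accurate summary of biased human judgments, which is exactly the failure mode at issue.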