Seriously, it's enough to make researchers cry. (credit: Getty | Peter M Fisher)

Dr. Steve Bellovin is professor of computer science at Columbia University, where he researches “networks, security, and why the two don’t get along.” He is the author of Thinking Security and the co-author of Firewalls and Internet Security: Repelling the Wily Hacker. The opinions expressed in this piece do not necessarily represent those of Ars Technica.

Newly elected Rep. Alexandria Ocasio-Cortez (D-NY) recently stated that facial recognition “algorithms” (and by extension all “algorithms”) “always have these racial inequities that get translated” and that “those algorithms are still pegged to basic human assumptions. They’re just automated assumptions. And if you don’t fix the bias, then you are just automating the bias.”

She was mocked for this claim on the grounds that “algorithms” are “driven by math” and thus can’t be biased—but she’s basically right. Let’s take a look at why.
