Tuesday 29 January 2019

Yes, “algorithms” can be biased. Here’s why

An article by Steve Bellovin in Ars Technica [with grateful thanks to Tara at ResearchBuzz: Firehose]

Seriously, it's enough to make researchers cry.


Newly elected Rep. Alexandria Ocasio-Cortez (D-NY) recently stated that facial recognition "algorithms" (and by extension all "algorithms") "always have these racial inequities that get translated" and that "those algorithms are still pegged to basic human assumptions. They're just automated assumptions. And if you don't fix the bias, then you are just automating the bias."

She was mocked for this claim on the grounds that "algorithms" are "driven by math" and thus can't be biased—but she's basically right. Let's take a look at why.

First, some notes on terminology—and in particular an explanation of why I keep putting scare quotes around the word "algorithm." As anyone who has ever taken an introductory programming class knows, algorithms are at the heart of computer programming and computer science. (No, those two are not the same, but I won't go into that today.) In popular discourse, however, the word is widely misused.

Let's start with Merriam-Webster, which defines "algorithm" as:
[a] procedure for solving a mathematical problem (as of finding the greatest common divisor) in a finite number of steps that frequently involves repetition of an operation; broadly: a step-by-step procedure for solving a problem or accomplishing some end
It is a step-by-step procedure. The word has been used in that sense for a long time, and its use extends back well before computers. Merriam-Webster says it goes back to 1926, though the Oxford English Dictionary gives this quote from 1811:
It [sc. the calculus of variations] wants a new algorithm, a compendious method by which the theorems may be established without ambiguity and circumlocution.
Continue reading
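
The dictionary's own example, finding the greatest common divisor, is the classic illustration of an algorithm in this sense: Euclid's method repeats one simple operation a finite number of times and then stops. Here is a minimal sketch in Python (an illustration added here, not something from Bellovin's article):

def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a finite, step-by-step procedure for the greatest common divisor."""
    a, b = abs(a), abs(b)
    while b != 0:
        # Repeat one operation: replace (a, b) with (b, a mod b) until the remainder is 0.
        a, b = b, a % b
    return a

print(gcd(48, 60))  # prints 12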

