The White House Wants To End Racism In Artificial Intelligence:

Artificial intelligence can often be just as unintentionally prejudiced as its human creators, with potentially disastrous consequences. The US government thinks educating future programmers on AI ethics will help solve our computers’ fairness problem.

The White House released its report on the future of artificial intelligence research in the US on Wednesday, and it contains a slew of recommendations. In a section on fairness, the report notes what numerous AI researchers have already pointed out: biased data results in a biased machine.

For example, artificial intelligence is being used by law enforcement across North America to identify convicts at risk of re-offending and to flag high-risk areas for crime. But recent reports suggest that these systems disproportionately target or otherwise disadvantage people of colour.

The Inherent Bias of Facial Recognition

accessnow:

Facial recognition systems are all over the place: Facebook, airports, shopping malls. And they’re poised to become nearly ubiquitous as everything from a security measure to a way to recognize frequent shoppers. For some people that will make certain interactions even more seamless. But because many facial recognition systems struggle with non-white faces, for others, facial recognition is a simple reminder: once again, this tech is not made for you.

There are plenty of anecdotes to start with here: We could talk about the time Google’s image tagging algorithm labeled a pair of black friends “gorillas,” or when Flickr’s system made the same mistake and tagged a black man with “animal” and “ape.” Or when Nikon’s cameras, designed to detect whether someone blinked, repeatedly told at least one Asian user that her eyes were closed. Or when HP’s webcams easily tracked a white face but couldn’t see a black one.

There are always technical explanations for these things. Computers are programmed to measure certain variables and to trigger when enough of them cross a threshold. Algorithms are trained on a set of faces. If the computer has never seen anybody with thin eyes or darker skin, it doesn’t know to see them. It hasn’t been told how. More specifically: the people designing it haven’t told it how.
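
To make that concrete, here is a minimal, hypothetical sketch of the failure mode: a toy “face detector” trained only on one group’s faces (plus non-faces) fails to detect the other group entirely. The two-number features, group locations, and the detector itself are invented for illustration; no real system reduces a face to two numbers, and this is not any vendor’s actual pipeline.

```python
# Hypothetical sketch: biased training data produces a biased detector.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Two groups of "faces" occupy different regions of a toy feature space.
faces_a = rng.normal(loc=2.0, scale=1.0, size=(1000, 2))   # well represented
faces_b = rng.normal(loc=-2.0, scale=1.0, size=(1000, 2))  # absent from training
non_faces = rng.normal(loc=0.0, scale=1.0, size=(1000, 2))

# The training set contains only group A faces and non-faces.
X_train = np.vstack([faces_a, non_faces])
y_train = np.array([1] * 1000 + [0] * 1000)

detector = LogisticRegression().fit(X_train, y_train)

# Detection rate (recall) per group: near-perfect for A, near zero for B.
print("group A faces detected:", detector.predict(faces_a).mean())
print("group B faces detected:", detector.predict(faces_b).mean())
```

The detector isn’t malicious; it simply learned a boundary between the faces it was shown and everything else, and group B landed on the “everything else” side. That is the sense in which the designers, not the math, decide whom the system can see.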

The bias problem extends beyond vision systems. In one experiment on search rankings, the setup was simple: take a diverse group of undecided voters, let them research the candidates on a Google-esque search engine, then tally their votes, never mentioning that the search was rigged to give top link placement to stories supporting a selected candidate.

The researchers expected the bias would sway voters, but they were shocked by just how much: Some voters became 20 percent more likely to support the favored candidate.

And almost none of the voters caught on to how the results were being skewed. In fact, those who did notice the preferential treatment, the researchers said, felt even more validated that they’d made the right choice.
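
The rigging mechanism the study describes, favorable stories quietly promoted to the top of the results, can be sketched in a few lines. Everything below is hypothetical: the titles, relevance scores, boost value, and the rigged_rank function are invented for illustration, not taken from the study.

```python
# Hypothetical sketch: an honest relevance sort, plus a hidden boost
# for results that favor one candidate.
from dataclasses import dataclass

@dataclass
class Result:
    title: str
    relevance: float        # what an honest engine would sort by
    favors_candidate: bool  # the hidden attribute being promoted

# Invented example results; not data from the actual study.
results = [
    Result("Candidate A's record questioned", 0.91, False),
    Result("Candidate A praised for reform plan", 0.74, True),
    Result("Debate recap: both candidates stumble", 0.88, False),
    Result("Why Candidate A is surging", 0.66, True),
]

BOOST = 0.5  # hypothetical hidden bonus, invisible to the user

def rigged_rank(items):
    """Sort by relevance plus a secret boost for favorable coverage."""
    return sorted(items,
                  key=lambda r: r.relevance + BOOST * r.favors_candidate,
                  reverse=True)

for r in rigged_rank(results):
    print(f"{r.relevance:.2f}  {r.title}")
```

Even a modest boost pushes the favorable stories into the top slots, the positions users read first, which is exactly the lever the researchers pulled, and nothing on the results page reveals that the ordering is anything but neutral.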