How to Keep Your AI From Turning Into a Racist Monster

Tay was designed to engage with people ages 18 to 24, and it burst onto social media with an upbeat “hellllooooo world!!” (the “o” in “world” was a planet-earth emoji). But within 12 hours, Tay had morphed into a foul-mouthed racist Holocaust denier that said feminists “should all die and burn in hell.” Tay, which was quickly removed from Twitter, was programmed to learn from the behavior of other Twitter users, and in that regard the bot was a success. Tay’s embrace of humanity’s worst attributes is an example of algorithmic bias: when seemingly innocuous programming takes on the prejudices of either its creators or the data it is fed.