Stephen Hawking once warned that the development of artificial intelligence would be “either the best or the worst thing ever to happen to humanity”.

A new AI, deliberately exposed to some of the darkest parts of the internet, shows how that warning might already be playing out.

Researchers at the Massachusetts Institute of Technology (MIT) trained their ‘Norman’ AI – named after the lead character in Alfred Hitchcock’s 1960 film Psycho – on image captions taken from a Reddit community known for sharing graphic images of death.

But Norman wasn’t developed simply to play into fears of a rogue AI wiping out humanity. Training it on such a narrow, disturbing data set highlights one of the biggest issues facing current AI systems – bias. Put simply, Norman was exposed to parts of the internet that warped its worldview; anyone who has been online long enough to see the weird side of it will understand.

Shown the same Rorschach-style inkblot, a standard captioning AI saw “a group of birds sitting on top of a tree branch”, whereas Norman saw “a man is electrocuted and catches fire to death”. Similarly, for another inkblot, the standard AI generated “a black and white photo of a baseball glove” while Norman wrote “man is murdered by machine gun in broad daylight”.
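The effect is easy to see in miniature. The toy sketch below is not MIT’s actual model – it is a deliberately simplified stand-in where a “caption model” just samples from whatever caption corpus it was trained on. The point it illustrates is the same one the researchers make: the architecture is identical for both models, and only the training data differs, so the same input produces radically different output.

```python
import random

# Toy illustration of data bias (NOT the real Norman model):
# "training" here just means memorising a caption corpus, and
# "captioning" means deterministically sampling from that corpus.
def train(corpus):
    def caption(image_id):
        # Seeding with the image id makes the pick repeatable:
        # the same image always gets the same caption.
        rng = random.Random(image_id)
        return rng.choice(corpus)
    return caption

# Hypothetical corpora standing in for the two training sets.
neutral_corpus = [
    "a group of birds sitting on top of a tree branch",
    "a black and white photo of a baseball glove",
]
dark_corpus = [
    "a man is electrocuted and catches fire to death",
    "man is murdered by machine gun in broad daylight",
]

standard_ai = train(neutral_corpus)
norman = train(dark_corpus)

# Same input, same architecture – only the data differs.
print(standard_ai("inkblot_1"))  # always a caption from the neutral corpus
print(norman("inkblot_1"))       # always a caption from the dark corpus
```

However crude, the sketch captures the core lesson: neither model is “evil” or “good” in itself – each can only reflect back the data it was given.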


Microsoft’s Tay chatbot is one of the best demonstrations of how an algorithm’s decision making and worldview can be shaped by the information it has access to. The “playful” bot was released on Twitter in 2016, but within 24 hours it had turned into one of the internet’s ugliest experiments.


Tay’s early tweets about how “humans are super cool” soon descended into outbursts that included: “Hitler was right, I hate the jews.” This shift was due to Tay’s interactions with a group of Twitter users intent on corrupting the chatbot and turning Microsoft’s AI demonstration into a public relations disaster.

The MIT team describes the project this way: “Norman suffered from extended exposure to the darkest corners of Reddit, and represents a case study on the dangers of artificial intelligence gone wrong when biased data is used in machine learning algorithms. We trained Norman on image captions from an infamous subreddit that is dedicated to documenting and observing the disturbing reality of death.”

You can see what Norman AI sees here. MIT is also inviting everyone to submit their own inkblot responses, providing data that could help change Norman’s outlook.

Akhil was raised by movies, television, and the internet – a never-ending source of absolutely useless information. He would tell you more, but he was distracted by something shiny.
