Scientists at MIT have created an AI psychopath trained on images from a particularly disturbing thread on Reddit. Norman
is designed to illustrate that the data used to train a machine-learning model can
significantly shape its behavior. “Norman suffered from extended
exposure to the darkest corners of Reddit, and represents a case study
on the dangers of Artificial Intelligence gone wrong when biased data is
used in machine learning algorithms,” writes the research team.
Norman was trained to perform image captioning, a
deep-learning task in which an AI generates text descriptions of images.
Norman learned from the image captions of a particularly disturbing
subreddit dedicated to images of gore and death. The team then had
Norman take a Rorschach inkblot test, a well-known psychological test
developed in 1921 that interprets subjects’ psychological states
based on what they see in an image, and compared Norman’s
responses with those of a standard image-captioning neural network.
Where
a standard AI sees “a group of birds sitting on top of a tree branch,”
Norman sees “a man is electrocuted and catches to death.” Where a standard AI sees
“a black and white photo of a baseball glove,” the psychopathic AI sees “man
is murdered by machine gun in broad daylight.”
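The pattern on display here, the same method producing very different captions depending solely on its training data, can be illustrated with a toy sketch. This is not MIT’s actual model; it is a deliberately minimal nearest-neighbor “captioner” with made-up feature vectors and captions, showing how two copies of an identical algorithm diverge when fit on different datasets.

```python
# Toy illustration (not MIT's model): a 1-nearest-neighbor "captioner"
# whose output depends entirely on its training data. Two models share
# the same algorithm but are fit on different caption datasets, so the
# same input image produces different captions.

def train(examples):
    """examples: list of (feature_vector, caption) pairs."""
    return list(examples)

def caption(model, features):
    """Return the caption of the closest training example (squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda ex: dist(ex[0], features))[1]

# Hypothetical feature vectors standing in for image embeddings.
neutral_data = [((0.1, 0.9), "birds on a tree branch"),
                ((0.8, 0.2), "a baseball glove")]
biased_data  = [((0.1, 0.9), "a man is electrocuted"),
                ((0.8, 0.2), "man is murdered")]

standard = train(neutral_data)
norman = train(biased_data)

image = (0.15, 0.85)  # the same input image shown to both models
print(caption(standard, image))  # -> birds on a tree branch
print(caption(norman, image))    # -> a man is electrocuted
```

The point of the sketch is that nothing in the `caption` function changes between the two models; only the data does, which is exactly the claim the MIT team makes below.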
Previously, the team at MIT developed Shelley, an AI that writes horror stories, and a Nightmare Machine AI
that turns ordinary photographs into haunted faces and haunted places.
While MIT unveiled Norman on April Fools’ Day, what Norman demonstrates
is no joke: “when people talk about AI algorithms being biased and
unfair, the culprit is often not the algorithm itself, but the biased
data that was fed to it. The same method can see very different things
in an image, even sick things, if trained on the wrong (or, the right!)
data set.”