Norman: The AI Psychopath (2018)
A central idea in machine learning is that the data used to teach an algorithm can significantly influence its behavior. So when we talk about AI algorithms being biased or unfair, the culprit is often not the algorithm itself but the biased data that was fed to it. This project, launched on April Fools' Day, highlights the role of data in algorithmic bias by pushing the idea to an extreme. When shown an inkblot image (of the kind used in the Rorschach psychoanalytic test), a deep-learning-based image captioning algorithm can see very different things, even disturbing things, if trained on the wrong data set. Our team created an AI image captioning system that always sees scenes of death, and contrasted it with a ‘healthy’ system that sees benign objects. This project aims to stimulate public awareness and discussion of the power of data to shape the algorithms that power the world.
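The sketch below is a minimal toy illustration, not the project's actual model: the real systems were deep image captioning networks, and the feature vectors and captions here are invented. It shows only the paragraph's core claim, that identical learning code trained on different corpora describes the same ambiguous input very differently.

import math

def train_captioner(training_data):
    """A trivial 1-nearest-neighbor 'captioner': it memorizes
    (feature_vector, caption) pairs and captions a new image with the
    caption of its closest training example. Deep captioning models are
    far more complex, but they share the property that they can only say
    what their training data taught them to say."""
    def caption(features):
        return min(training_data,
                   key=lambda pair: math.dist(features, pair[0]))[1]
    return caption

# Hypothetical training sets: benign captions vs. violent ones (stand-ins
# for the two corpora the project contrasts).
healthy = train_captioner([
    ((0.1, 0.9), "a vase of flowers on a table"),
    ((0.8, 0.2), "a bird sitting on a branch"),
])
norman = train_captioner([
    ((0.1, 0.9), "a man is electrocuted"),
    ((0.8, 0.2), "a person falls to their death"),
])

inkblot = (0.4, 0.6)  # the same ambiguous input, shown to both systems
print("healthy:", healthy(inkblot))  # -> "a vase of flowers on a table"
print("norman: ", norman(inkblot))   # -> "a man is electrocuted"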
Web site: http://norman-ai.mit.edu/
Team: Pinar Yanardag, Manuel Cebrian, Iyad Rahwan
Selected media: BBC, CNN, NY Post, The Times, La Repubblica, Fortune, Fast Company, Wired (Germany), Rolling Stone, Vice, USA Today, TechCrunch, The Telegraph