Derp Learning

“Derp Learning” is a categorization of the characteristic mistakes that “deep learning” techniques in artificial intelligence tend to make. It started as a typo, but it is also a deep insight into how complex systems fail.

Some examples to illustrate:

Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images, 2014. Nguyen and colleagues describe “fooling images”: images that are unrecognizable to humans but that a trained deep neural network classifies as familiar objects with high confidence. These images are straightforward to construct.
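To give a flavor of how simple the construction can be, here is a minimal sketch (not Nguyen’s exact method, which also used evolutionary search) that synthesizes a fooling image by gradient ascent on a classifier’s confidence in an arbitrary target class. The tiny untrained CNN is a hypothetical stand-in for a real trained network.

```python
# Sketch: build a "fooling image" by gradient ascent on class confidence.
# The stand-in classifier below is untrained; with a real trained network,
# the optimized image typically looks like noise to a human yet is
# classified with near-certain confidence.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(                     # hypothetical stand-in classifier
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 10),
)
model.eval()

target_class = 3                           # class we want the network to "see"
image = torch.zeros(1, 3, 32, 32, requires_grad=True)  # start from a blank image

optimizer = torch.optim.Adam([image], lr=0.05)
for step in range(200):
    optimizer.zero_grad()
    logits = model(image)
    # maximize the log-probability the network assigns to the target class
    loss = -F.log_softmax(logits, dim=1)[0, target_class]
    loss.backward()
    optimizer.step()
    image.data.clamp_(0, 1)                # keep pixel values in a valid range

confidence = F.softmax(model(image), dim=1)[0, target_class].item()
print(f"confidence in class {target_class}: {confidence:.3f}")
```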

Microsoft’s racist robot and the problem with AI development, The Daily Dot, 2016. Microsoft’s conversational chatbot “Tay” undergoes some unsupervised learning from Twitter and, within a day, emerges as a racist Holocaust denier.

Convict-spotting algorithm criticised, BBC, 2016. A paper by Wu and Zhang purports to show that Chinese faces can be automatically classified as criminal or non-criminal from facial characteristics such as the curvature of the upper lip. This “research” hearkens back to the bad old days of phrenology, when criminologists predicted behavior from the bumps on people’s heads.

Artificial intelligence doesn’t kill people; training data kills people.