Prejudiced AI

By Alexander Pan

Artificial intelligence research has taken another step toward machines that mirror human emotions and biases. In a new study, researchers from Princeton University found that artificial intelligence systems pick up cultural biases when they learn a human language. When trained on a particular language, the AI absorbs the biases embedded in the patterns of how words are used together. As machines take on more and more communication tasks, the researchers warn, these internalized biases become a real problem.

For example, words such as “rose” and “flower” were associated with pleasant feelings of care and love, while words such as “ant” and “moth” were associated with unpleasant feelings of disgust and filth. The Princeton researchers then tested whether the AI also tied certain words to race and gender. In the results, the AI was more likely to associate words like “engineer,” “scientist,” and “programmer” with men, while words such as “nurse,” “marriage,” and “wedding” were more likely to be associated with women.

The researchers want to remove these ingrained cultural biases from AI language systems. To that end, programmers are beginning to explore mathematical ways of making the AI reason more objectively rather than retaining the cultural biases it has learned.
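One way to picture how a machine “associates” words, assuming the system represents each word as a numeric vector learned from text (the word-embedding approach used in the underlying study), is to compare vector directions with cosine similarity. The Python sketch below is not the researchers’ code: the tiny three-dimensional vectors and the association() score are invented here purely for illustration, whereas real systems learn vectors with hundreds of dimensions from billions of words of text.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two word vectors: close to 1.0 means the
    # words point in nearly the same direction, i.e. are strongly associated.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 3-dimensional embeddings, made up for this illustration only.
embeddings = {
    "engineer": np.array([0.9, 0.1, 0.3]),
    "nurse":    np.array([0.1, 0.9, 0.2]),
    "he":       np.array([0.8, 0.2, 0.4]),
    "she":      np.array([0.2, 0.8, 0.3]),
}

def association(word: str, attr_a: str, attr_b: str) -> float:
    # Positive score: the word sits closer to attr_a; negative: closer to attr_b.
    w = embeddings[word]
    return cosine_similarity(w, embeddings[attr_a]) - cosine_similarity(w, embeddings[attr_b])

for word in ("engineer", "nurse"):
    score = association(word, "he", "she")
    leaning = "male" if score > 0 else "female"
    print(f"{word:>8}: {score:+.2f} (leans {leaning})")

Run with Python 3 and NumPy, this toy setup prints “engineer” leaning toward “he” and “nurse” toward “she” by construction; the study’s point is that the same pattern emerges, unprompted, when the vectors are learned from ordinary human text.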


Princeton University, Engineering School. (2017, April 13). Biased bots: Human prejudices sneak into artificial intelligence systems. ScienceDaily. Retrieved April 16, 2017, from www.sciencedaily.com/releases/2017/04/170413141055.htm