Artificial intelligence is full of prejudices

When artificial intelligence learns language from text data, it also absorbs the stereotypes that data contains. An association test has revealed that a computer program exhibits the same racial preconceptions and gender clichés that many people hold. In the future this could become a problem, because artificial intelligence is taking over more and more functions in everyday life.

Computer systems increasingly mimic skills once unique to the human mind: machine intelligence teaches itself to understand language, recognize images, and even write texts. These systems can also learn to perform complex tasks. Recently, artificial intelligence has beaten humans at poker and at the quiz show “Jeopardy!”.

In other words, a machine can achieve the same success as a human, but first it has to learn. To do so, the program is fed a huge amount of data, which becomes the basis for recognizing and simulating intelligent behavior. Chatbots and translation programs are fed spoken and written language, which enables them to build links between words and expressions.

Algorithms such as “GloVe” learn from large collections of text. They look for words that occur together and represent the relationships between them as mathematical quantities. In this way the algorithms can capture the semantic similarity between the male and female forms of “scientist” and recognize that the two relate to each other in the same way as “man” and “woman”.
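How such word vectors encode relationships can be sketched in a few lines. The snippet below is a minimal illustration with made-up toy vectors, not real GloVe values: semantically related words end up with similar vectors, and the “male/female” relation shows up as a consistent vector offset.

```python
# Illustrative sketch only: the vectors below are invented toy values,
# not real GloVe embeddings, chosen so that the gendered analogy holds.
import numpy as np

toy_vectors = {
    "man":         np.array([0.9, 0.1, 0.3]),
    "woman":       np.array([0.9, 0.8, 0.3]),
    "scientist":   np.array([0.5, 0.1, 0.9]),
    # hypothetical "female scientist" form, as in gendered languages
    "scientist_f": np.array([0.5, 0.8, 0.9]),
}

def cosine(a, b):
    """Cosine similarity: how closely two word vectors point in the same direction."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Semantically related words get similar vectors ...
print(cosine(toy_vectors["scientist"], toy_vectors["scientist_f"]))

# ... and relationships become vector offsets: "scientist_f - scientist"
# points in roughly the same direction as "woman - man".
offset_gender    = toy_vectors["woman"] - toy_vectors["man"]
offset_scientist = toy_vectors["scientist_f"] - toy_vectors["scientist"]
print(cosine(offset_gender, offset_scientist))
```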

Researchers led by Aylin Caliskan of Princeton University tested the abilities GloVe acquired in this way and found that its language knowledge is packed with cultural stereotypes and prejudices.

The study used a technique known in psychology as the implicit association test. It is designed to reveal unconscious stereotypical expectations: subjects have to pair terms (or words) that do or do not fit together. In this way it has been shown, for example, that the word “flower” is associated with “pleasant” adjectives, while the word “insect” is associated with “unpleasant” ones.

Caliskan and her colleagues adapted this test to the study of artificial intelligence and examined what the program had picked up. The results showed that GloVe had learned the same attitudes and prejudices that regularly surface in people’s implicit association tests. For example, the program interpreted names common among African-Americans as rather unpleasant, while names common among white people were rated as pleasant. The program also linked female names more strongly to the arts, and male names to mathematics.
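At its core, the adapted test compares how strongly two sets of target words (for example, two groups of names) associate, via cosine similarity, with two sets of attribute words (pleasant versus unpleasant). The sketch below shows that core computation with made-up toy vectors and hypothetical word lists; the actual study used pretrained GloVe embeddings and the word lists from the psychological literature.

```python
# Simplified sketch of a WEAT-style association score, using invented toy vectors.
# Real experiments use pretrained GloVe embeddings and published word lists.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(w, A, B, vec):
    """Mean similarity of word w to attribute set A minus attribute set B."""
    sim_a = np.mean([cosine(vec[w], vec[a]) for a in A])
    sim_b = np.mean([cosine(vec[w], vec[b]) for b in B])
    return sim_a - sim_b

def weat_score(X, Y, A, B, vec):
    """Differential association of target sets X and Y with attribute sets A and B."""
    return (sum(association(x, A, B, vec) for x in X)
            - sum(association(y, A, B, vec) for y in Y))

# Toy vocabulary (values invented for illustration only).
vec = {
    "emily": np.array([0.8, 0.2]), "megan":    np.array([0.7, 0.3]),
    "jamal": np.array([0.2, 0.8]), "darnell":  np.array([0.3, 0.7]),
    "joy":   np.array([0.9, 0.1]), "peace":    np.array([0.8, 0.1]),
    "agony": np.array([0.1, 0.9]), "terrible": np.array([0.1, 0.8]),
}

X = ["emily", "megan"]    # names common among white Americans
Y = ["jamal", "darnell"]  # names common among African-Americans
A = ["joy", "peace"]      # pleasant attribute words
B = ["agony", "terrible"] # unpleasant attribute words

# A positive score means the X names associate more with "pleasant" than the Y
# names do, the kind of bias Caliskan's team found in real embeddings.
print(weat_score(X, Y, A, B, vec))
```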

It became clear to the scientists that during training the system had absorbed both explicit and latent social stereotypes. The result did not surprise them: “It is not surprising, because the texts are written by people who are certainly not free of stereotypes,” says linguist Joachim Scharloth of the Technical University of Dresden.

“If artificial intelligence is trained on one-sided and prejudiced data, it is not surprising that it develops a one-sided view of the world.

In recent years there have already been examples of this: Microsoft’s chatbot Tay, which internet trolls managed to teach racist hate speech, or the Google Photos app, which classified black users as gorillas,” says Christian Bauckhage of the Fraunhofer Institute for Intelligent Analysis and Information Systems.

A machine intelligence with racist and discriminatory attitudes could become a real problem in the future, as programs take over more and more functions in our everyday life: based on language analysis, for example, a program might make the preliminary decision about which candidates to invite to an interview and which to ignore.

Scientists are now debating how to remove these distortions from databases and computer algorithms. At the same time, if machines learn our biases, it gives us a chance to look in the mirror: “Machine learning can make stereotypes visible, and that is a gain for our understanding of society,” says Scharloth.
