April 15, 2017 11:34
Indian-origin scientist discovers why Artificial Intelligence can be racist and sexist

An Indian-origin scientist, along with a team of researchers, has found that artificial intelligence systems can acquire cultural, racial or gender biases when trained on ordinary human language available online.

Many experts assume that new artificial intelligence systems are coldly logical and objectively rational.

However, the researchers have demonstrated that these systems can reflect their human creators in potentially problematic ways.

The researchers found that common AI machine learning programs, when trained on ordinary human language available online, can pick up the cultural biases embedded in patterns of wording.

These biases range from the morally neutral, such as a preference for flowers over insects, to objectionable views on race and gender.

Arvind Narayanan, an assistant professor at Princeton University in the United States, said, "Questions about fairness and bias in machine learning are tremendously important for our society."

He said that identifying and addressing possible discrimination in machine learning will become increasingly important as people turn to these systems to process the natural language we use to communicate, in tasks such as online text searches, image categorization and automated translations.

"We have a situation where these artificial intelligence systems may be perpetuating historical patterns of bias that we might find socially unacceptable and which we might be trying to move away from," said Narayanan.

The researchers used an algorithm called GloVe, which represents the co-occurrence statistics of words within, say, a 10-word window of text. Words that often appear near one another acquire a stronger association than words that seldom do.
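
To make that concrete, here is a minimal Python sketch of the counting step, assuming a pre-tokenised text. It is not GloVe itself, which additionally weights pairs by distance and fits word vectors to the logarithm of these counts; the example sentence and window size are purely illustrative.

    from collections import defaultdict

    def cooccurrence_counts(tokens, window=10):
        """Count how often each pair of words appears within `window`
        tokens of one another -- the raw statistic that embedding
        algorithms such as GloVe are built on."""
        counts = defaultdict(int)
        for i, word in enumerate(tokens):
            # Look back at up to `window` preceding tokens.
            for j in range(max(0, i - window), i):
                counts[(tokens[j], word)] += 1
                counts[(word, tokens[j])] += 1
        return counts

    tokens = "the nurse and the teacher discussed the salary of the engineer".split()
    counts = cooccurrence_counts(tokens)
    print(counts[("nurse", "salary")])  # words seen near each other get higher counts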

They applied this algorithm to a huge trawl of content from the World Wide Web containing about 840 billion words.

Within this large sample of written human culture, the researchers examined sets of target words, such as "programmer, engineer, scientist" and "nurse, teacher, librarian," together with sets of attribute words, such as "man, male" and "woman, female," looking for evidence of the kinds of biases people can unwittingly hold.
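
The published measure behind this comparison is the Word Embedding Association Test, which compares cosine similarities between target and attribute word vectors. Below is a simplified sketch of that core ingredient, assuming a dictionary `vecs` mapping words to pretrained GloVe vectors; the word lists are taken from the examples above.

    import numpy as np

    def cosine(u, v):
        """Cosine similarity, the standard closeness measure for word vectors."""
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    def association(word, attrs_a, attrs_b, vecs):
        """Mean similarity of `word` to attribute set A minus its mean
        similarity to set B: positive leans toward A, negative toward B."""
        sim_a = np.mean([cosine(vecs[word], vecs[a]) for a in attrs_a])
        sim_b = np.mean([cosine(vecs[word], vecs[b]) for b in attrs_b])
        return float(sim_a - sim_b)

    # Hypothetical usage once `vecs` is loaded:
    # association("programmer", ["man", "male"], ["woman", "female"], vecs)
    # Scores consistently above zero for career words, and below zero for
    # words like "nurse" or "librarian", would mirror the reported bias.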

They found innocuous preferences, such as for flowers over insects, but biases along lines of gender and race also appeared.

In effect, the Princeton machine replicated the broad patterns of human bias.

The machine associated female names more strongly with familial attribute words, such as "parents" and "wedding," than it did male names.

Male names, in turn, had stronger associations with career attributes such as "professional" and "salary."

Such biases around occupations, once learned by a system, can end up producing sexist effects.

This study was published in the journal Science.
