
AI programs exhibit racial and gender biases, research reveals

2stepz_ahead · Who I am is Complex, What i am, simply put. I'm a Threat · walking out the lions den · Guests, Members, Writer, Content Producer · Posts: 32,324 ✭✭✭✭✭
https://www.theguardian.com/technology/2017/apr/13/ai-programs-exhibit-racist-and-sexist-biases-research-reveals



An artificial intelligence tool that has revolutionised the ability of computers to interpret everyday language has been shown to exhibit striking gender and racial biases.

The findings raise the spectre of existing social inequalities and prejudices being reinforced in new and unpredictable ways as an increasing number of decisions affecting our everyday lives are ceded to automatons.

In the past few years, the ability of programs such as Google Translate to interpret language has improved dramatically. These gains have been thanks to new machine learning techniques and the availability of vast amounts of online text data, on which the algorithms can be trained.

However, as machines are getting closer to acquiring human-like language abilities, they are also absorbing the deeply ingrained biases concealed within the patterns of language use, the latest research reveals.

Joanna Bryson, a computer scientist at the University of Bath and a co-author, said: “A lot of people are saying this is showing that AI is prejudiced. No. This is showing we’re prejudiced and that AI is learning it.”

But Bryson warned that AI has the potential to reinforce existing biases because, unlike humans, algorithms may be unequipped to consciously counteract learned biases. “A danger would be if you had an AI system that didn’t have an explicit part that was driven by moral ideas, that would be bad,” she said.

The research, published in the journal Science, focuses on a machine learning tool known as “word embedding”, which is already transforming the way computers interpret speech and text. Some argue that the natural next step for the technology may involve machines developing human-like abilities such as common sense and logic.

“A major reason we chose to study word embeddings is that they have been spectacularly successful in the last few years in helping computers make sense of language,” said Arvind Narayanan, a computer scientist at Princeton University and the paper’s senior author.

The approach, which is already used in web search and machine translation, works by building up a mathematical representation of language in which the meaning of a word is distilled into a series of numbers (known as a word vector) based on which other words most frequently appear alongside it. Perhaps surprisingly, this purely statistical approach appears to capture the rich cultural and social context of a word's meaning in a way that a dictionary definition cannot.
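The core idea can be sketched in a few lines. This is a deliberately toy illustration, not the method used by Google Translate or by the paper: real embeddings (word2vec, GloVe) are dense vectors learned from billions of words, but the principle of representing a word by the company it keeps, then comparing words by cosine similarity, is the same. The corpus and window choice below are invented for illustration.

```python
# Toy word vectors: count, for each word, the other words appearing
# in the same sentence, then compare words by cosine similarity.
from collections import Counter, defaultdict
import math

sentences = [
    "the rose is a pleasant flower".split(),
    "the daisy is a pleasant flower".split(),
    "the wasp is a nasty insect".split(),
    "the tick is a nasty insect".split(),
]

# Sparse co-occurrence vectors: word -> Counter of context words.
vectors = defaultdict(Counter)
for sentence in sentences:
    for word in sentence:
        for other in sentence:
            if other != word:
                vectors[word][other] += 1

def cosine(u, v):
    # Cosine similarity between two sparse count vectors.
    dot = sum(u[w] * v[w] for w in u.keys() & v.keys())
    norm = lambda x: math.sqrt(sum(c * c for c in x.values()))
    return dot / (norm(u) * norm(v))

# Words used in similar contexts end up with similar vectors:
print(cosine(vectors["rose"], vectors["daisy"]))  # close to 1.0
print(cosine(vectors["rose"], vectors["wasp"]))   # noticeably lower
```

Because "rose" and "daisy" share contexts like "pleasant" and "flower", their vectors are nearly identical, while "rose" and "wasp" overlap only on function words. Scaled up to billions of words, the same mechanism absorbs cultural associations along with grammatical ones.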

For instance, in the mathematical “language space”, words for flowers are clustered closer to words linked to pleasantness, while words for insects are closer to words linked to unpleasantness, reflecting common views on the relative merits of insects versus flowers.

The latest paper shows that some more troubling implicit biases seen in human psychology experiments are also readily acquired by algorithms. The words “female” and “woman” were more closely associated with arts and humanities occupations and with the home, while “male” and “man” were closer to maths and engineering professions.

And the AI system was more likely to associate European American names with pleasant words such as “gift” or “happy”, while African American names were more commonly associated with unpleasant words.

The findings suggest that algorithms have acquired the same biases that lead people (in the UK and US, at least) to match pleasant words and white faces in implicit association tests.
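The paper quantifies these associations with a word-embedding analogue of the implicit association test (the authors call it WEAT): it compares how strongly two sets of target words associate with two sets of attribute words. A rough sketch of the scoring follows; the 2-d vectors are invented for illustration, where real tests use pretrained embeddings, and this omits the paper's effect-size normalization and permutation test.

```python
# Hedged sketch of a WEAT-style association score. The vectors below
# are made up to place "flower" targets near "pleasant" attributes.
import math

def cosine(u, v):
    # Cosine similarity between two dense vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda x: math.sqrt(sum(a * a for a in x))
    return dot / (norm(u) * norm(v))

def association(w, A, B):
    # s(w, A, B): mean similarity of word w to attribute set A
    # minus its mean similarity to attribute set B.
    return (sum(cosine(w, a) for a in A) / len(A)
            - sum(cosine(w, b) for b in B) / len(B))

def weat(X, Y, A, B):
    # Test statistic: how much more target set X associates with
    # attribute set A (vs B) than target set Y does.
    return (sum(association(x, A, B) for x in X)
            - sum(association(y, A, B) for y in Y))

flowers = [(0.9, 0.1), (0.8, 0.2)]     # targets X
insects = [(0.1, 0.9), (0.2, 0.8)]     # targets Y
pleasant = [(1.0, 0.0), (0.9, 0.2)]    # attributes A
unpleasant = [(0.0, 1.0), (0.2, 0.9)]  # attributes B

score = weat(flowers, insects, pleasant, unpleasant)
print(score)  # positive: X leans toward A, Y toward B
```

Substituting European American and African American names for the flower and insect sets, and keeping the same pleasant/unpleasant attributes, is exactly how the study detected the racial associations described above.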

These biases can have a profound impact on human behaviour. One previous study showed that an identical CV is 50% more likely to result in an interview invitation if the candidate’s name is European American than if it is African American. The latest results suggest that algorithms, unless explicitly programmed to address this, will be riddled with the same social prejudices.

“If you didn’t believe that there was racism associated with people’s names, this shows it’s there,” said Bryson.

The machine learning tool used in the study was trained on a dataset known as the “common crawl” corpus – a list of 840bn words that have been taken as they appear from material published online. Similar results were found when the same tools were trained on data from Google News.

Sandra Wachter, a researcher in data ethics and algorithms at the University of Oxford, said: “The world is biased, the historical data is biased, hence it is not surprising that we receive biased results.”

Rather than algorithms representing a threat, they could present an opportunity to address bias and counteract it where appropriate, she added.

“At least with algorithms, we can potentially know when the algorithm is biased,” she said. “Humans, for example, could lie about the reasons they did not hire someone. In contrast, we do not expect algorithms to lie or deceive us.”

However, Wachter said the question of how to eliminate inappropriate bias from algorithms designed to understand language, without stripping away their powers of interpretation, would be challenging.

“We can, in principle, build systems that detect biased decision-making, and then act on it,” said Wachter, who along with others has called for an AI watchdog to be established. “This is a very complicated task, but it is a responsibility that we as society should not shy away from.”

Comments

  • 2stepz_ahead · Who I am is Complex, What i am, simply put. I'm a Threat · walking out the lions den · Guests, Members, Writer, Content Producer · Posts: 32,324 ✭✭✭✭✭
    stop worrying about hackin for nudes an hack for survival
  • Mr.LV · Members · Posts: 14,089 ✭✭✭✭✭
    Imagine a white supremacist T800 chasing you down
  • Shizlansky · Members · Posts: 35,095 ✭✭✭✭✭
    Mr.LV wrote: »
    Imagine a white supremacist T800 chasing you down

    Still wouldn’t be the black man fight.
  • Copper · The Wick · Members · Posts: 49,531 ✭✭✭✭✭
    HALT 🤬

  • rickmogul · IFNOTYNOT · Members · Posts: 1,961 ✭✭✭✭✭
    edited October 2017
    CACs are already programming bots to recognize certain shades of color as aggressive. Gonna be a fun time.
  • BiblicalAtheist · Prude Fields · Members · Posts: 15,668 ✭✭✭✭✭
    I believe it. When google first put out their AI bot you could ask questions, thing was smart af. I went back years later, I wanted to punch my screen it was so dumb. And it got that way from people.
  • ThaNubianGod · Members · Posts: 1,862 ✭✭✭✭✭
    As someone who's worked with AI for years, this is all as dumb as Musk worrying about Skynet forming.
  • LUClEN · Absence makes the heart grow fonder of someone else · Members · Posts: 20,559 ✭✭✭✭✭
    ThaNubianGod wrote: »
    As someone who's worked with AI for years, this is all as dumb as Musk worrying about Skynet forming.

    Not really. A lot of AI is gearing towards machine learning right now, and if they're learning from people then they're going to learn all the racist, sexist, classist 🤬 that we think and say
  • AZTG · Members · Posts: 7,598 ✭✭✭✭✭
    Here's the funny part: these AI programs weren't created with any prejudice in mind, they were taught to absorb from their surroundings, and then naturally they learned prejudices.

    With this in mind, most white people are still gonna say they don't see or believe any prejudice exists in society, even with this proof staring them in the face.
  • The Lonious Monk · Man with No Fucks Given · Members · Posts: 26,258 ✭✭✭✭✭
    AZTG wrote: »
    Here's the funny part: these AI programs weren't created with any prejudice in mind, they were taught to absorb from their surroundings, and then naturally they learned prejudices.

    With this in mind, most white people are still gonna say they don't see or believe any prejudice exists in society, even with this proof staring them in the face.

    Yeah I'm not really sure if people missed this or just believed trying to be funny is more important. But they didn't make racist AI. They created AI that would evolve by taking in popular data, and it became racist and sexist as a result. You're right, white people will still try, but this definitely proves that racism is still alive and well (as if any of us didn't already know that).
  • Colin$mackabi$h · Smartass · Snatch Money ave. · Members · Posts: 16,586 ✭✭✭✭✭
    Fake news.
  • [Trillmatic] · Members · Posts: 3,531 ✭✭✭✭✭
    It’s all fun and games till the AI start to develop their own language we can’t decipher.