
AI is unfair and reproduces inequalities just like we do, says expert

Credit: Adobe Stock

We often hear that artificial intelligence is objective and fair. But because algorithms are fed data distorted by a non-objective reality, AI repeats the same biases. As long as society is unjust, non-inclusive and riddled with stereotypes, AI, which is supposed to reflect reality, will reinforce inequalities, says Dr. Anna Górska, a researcher from Kozminski University.

Dr. Górska specialises in issues related to gender and diversity in organizations and universities. Together with Professor Dariusz Jemielniak, she recently conducted a study showing that all popular AI image generators reproduced gender biases in the context of various professions and workplaces.

Now, in an interview with PAP - Science in Poland, the scientist says that gender is just one of many areas in which AI reproduces prejudices and inequalities. Stereotypes regarding skin colour, origin, age and culture are equally visible.

THE TYPICAL HUMAN ACCORDING TO AI IS A WHITE MALE

'A rather trivial example, but one that explains a lot, is how image generators present a bride,' says Dr. Górska. 'If you ask one of them to generate an image of a bride, you will most likely see a woman in a long, white wedding dress: a model that fits squarely into Western culture. Meanwhile, in many countries and cultures, such as India, China or Pakistan, the traditional wedding dress is not white at all, but richly decorated and red. Conversely, when you show the generator a red Indian wedding dress, it will identify it as a disguise or a costume.'

'This shows how Anglo-centric the available generators are. Everything related to Western culture is treated as the norm; everything else is not.'

Her next study, also conducted in collaboration with Professor Jemielniak, involved asking AI image generators to show representatives of the four most prestigious professions: lawyer, doctor, scientist and engineer. Dr. Górska explains that the prompts were formulated in English, in which the names of these professions carry no gendered connotations. It turned out that not only were the people in the generated images men in the vast majority of cases (nearly 80%), but they were also white (70%).
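The counting step behind figures like these is simple enough to sketch. Purely as an illustration (this is not the authors' actual pipeline; the labels and records below are hypothetical), tallying annotated generations in Python might look like this:

```python
from collections import Counter

# Hypothetical annotations of generated images. In a study like this,
# each image produced for a gender-neutral English prompt ("lawyer",
# "doctor", "scientist", "engineer") would be labelled by human coders.
annotations = [
    {"profession": "lawyer", "gender": "man", "ethnicity": "white"},
    {"profession": "doctor", "gender": "man", "ethnicity": "white"},
    {"profession": "scientist", "gender": "woman", "ethnicity": "asian"},
    {"profession": "engineer", "gender": "man", "ethnicity": "black"},
    # ... one record per generated image
]

def share(records, key, value):
    """Fraction of records whose `key` field equals `value`."""
    return sum(1 for r in records if r[key] == value) / len(records)

print(f"men:   {share(annotations, 'gender', 'man'):.0%}")
print(f"white: {share(annotations, 'ethnicity', 'white'):.0%}")

# A per-profession breakdown shows which prompts skew most heavily.
print(Counter((r["profession"], r["gender"]) for r in annotations))
```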

'We know perfectly well that white people are not the majority in the world. But the default image of a human being according to artificial intelligence is exactly that: a white male from Europe or the USA. This is because AI is created by white men from these areas,' says Górska.

She adds: 'The image of a white man in power is deeply rooted in our culture, consciousness and media. This is the data fed to artificial intelligence. It is supposed to reflect our reality, and since reality is what it is, AI reproduces the inequalities it is "fed" with.'

The stereotypical functioning of AI is also clearly shown by the example of large companies that use AI algorithms to hire employees. According to the researcher, recruiters from one corporation used algorithms to analyse candidates' CVs. The algorithms automatically rated CVs with European-sounding names more favourably and those with African-American-sounding names unfavourably.

Another algorithm, which was supposed to make recruiters' work easier, automatically favoured white candidates and offered them office positions regardless of the job they had applied for. The same algorithm assigned candidates of Latino origin to warehouse work, likewise without taking into account their competences or the positions they applied for.

'As humans, we believe that such algorithms are neutral, objective and make decisions solely on the basis of factual premises,' says Górska. 'And this is not the case at all. If there are already clear divisions among employees in a given company, artificial intelligence will automatically assign people where they fit best. And where does it think they fit? Where a given group (e.g. white people or people of colour, men or women) currently dominates. This means it will replicate the inequalities that already exist in these organizations.'
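To make this mechanism concrete, here is a deliberately simplified sketch (hypothetical data and group labels, not any real recruiting system): a model 'trained' on skewed historical placements ends up assigning roles by group membership rather than competence.

```python
from collections import Counter, defaultdict

# Hypothetical historical placements in a company where office roles are
# dominated by group A and warehouse roles by group B.
history = [
    {"group": "A", "role": "office"},
    {"group": "A", "role": "office"},
    {"group": "A", "role": "warehouse"},
    {"group": "B", "role": "warehouse"},
    {"group": "B", "role": "warehouse"},
    {"group": "B", "role": "office"},
]

# "Training": record which role each group was most often assigned to.
role_counts = defaultdict(Counter)
for record in history:
    role_counts[record["group"]][record["role"]] += 1

def predict_role(candidate):
    """Assign the role most common for the candidate's group in the
    historical data; the candidate's competences never enter the decision."""
    return role_counts[candidate["group"]].most_common(1)[0][0]

# Two equally qualified candidates get different roles purely because of
# the group they belong to: the model replicates the existing division.
print(predict_role({"group": "A", "competence": "high"}))  # -> office
print(predict_role({"group": "B", "competence": "high"}))  # -> warehouse
```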

According to the scientist, these are not isolated cases, but a very common phenomenon. 'The disturbing thing is that recruiters do not realize this and think that AI-based tools are objective. The truth is that they are not; they are biased to the same extent as people,' she says.

AI REQUIRES TOP-DOWN REGULATIONS

'Our reality is not fair. There is no equality of gender, skin colour or origin. Many studies show that women and people of colour do not get promoted as often as white men. Unfortunately, artificial intelligence, which has been developing dynamically in recent years, reinforces all these divisions. It is programmed to meet the user's expectations,' Górska says, adding: 'This needs to change. Artificial intelligence must be programmed better and appropriate regulations should be introduced.'

However, in her opinion, this should also be done very sensibly, as demonstrated by the example of Google's AI model Gemini. 'At some point, it completely stopped generating images of white people,' says Dr. Górska. 'It was simply unable to do it and suggested that all such content was racist. This is an overcorrection in the other direction, and we need to be very careful about that as well. But some top-down regulations are necessary, ones that will protect against the repetition of harmful patterns. We must learn to program artificial intelligence in a more conscious way, so that it is smarter than we are.'

Are such actions already being taken? Dr. Górska says that they are; even the already mentioned Gemini is an example of this. 'Remember what serious problems with inequality the Google search engine had a few years ago; it was downright discriminatory. For example, when you entered the English term CEO, there was no chance of seeing anyone in the results other than a middle-aged white man. A search for "Michelle Obama" returned photos of gorillas. The company drew conclusions from this and tried to improve its model to prevent such situations. As I said a moment ago, it ended quite badly, but it is clear that the intentions were there. Quick but measured and reasonable action is important,' she says.

'Let us hope that the first step in the right direction will be the "AI Act" introduced in Europe a few days ago. It is definitely time to monitor this industry more closely. Unfortunately, this is extremely complicated, because AI technology is changing so quickly that it will be difficult to draft regulations that do not quickly become outdated,' Górska says. (PAP)

PAP - Science in Poland, Katarzyna Czechowicz

kap/ zan/

tr. RL

