Gen Alpha sees generative AI as authority and oracle, research shows

Credit: Adobe Stock

Children and young people most often ask AI questions about the present and the future, treating its answers as oracular. This leads them to perceive AI as an authority and to attribute subjectivity to it, points out Dr. Ada Florentyna Pawlak from SWPS University.

An expert in the anthropology of technology, she is a team member at the Centre for Artificial Intelligence and Cybercommunication Research at the University of Lodz and co-author (together with the head of the Centre, Dr. Artur Modliński) of research on the online behaviour of children and young people. The researchers wanted to check whether the questions children asked generative AI were related to their well-being.

The results of the research - conducted on a group of over 400 children aged 9 to 14 - show that young people suffering from anxiety disorders or generally low mental well-being often look for solutions to their problems online, including with generative AI tools.

'The study showed that children who chose questions related to psychological needs and safety were characterized by significantly higher levels of anxiety and depression than those who asked about daily activities and passions. In turn, people who chose questions about love and belonging tended to have greater mental problems than the group more interested in physiological needs. We also observed that children trying to find answers to questions related to love and belonging and those related to respect and recognition from generative AI had a significantly lower level of psychological well-being than children who asked about self-fulfilment and personal interests', the researchers said.

They add that the results will be used to design and test conversational systems for young people that could flag a potential decline in their well-being.

In addition, Dr. Pawlak reports that the study shows how representatives of Generation Alpha (born in the years 2010-2024) perceive artificial intelligence itself.

'Members of Gen Alpha begin their education in a dynamically changing technocultural landscape. This generation is growing up in a transhumanist world, often among synthetic companions. We increasingly interact with agents that have an undefined ontological status. Sometimes we lose track of whether we are dealing with a real person or with something generated in a person's image, created to deceive us', she says.

And because of this - she adds - the perception of concepts such as authenticity and credibility is changing.

That is why the scientists also investigated other issues in their study.

'Our study also showed that children and young people from Generation Alpha most often asked chatbots questions about the present or the future. They also asked generative AI not only about facts, but also about opinions on issues such as nutrition, sports and fashion. They asked questions about the 'long-range' future, e.g. who would become the world champion in football at the next championship or who would perform at the next edition of the Eurovision Song Contest. And yet these are things that cannot be known today', Dr. Pawlak says.

In her opinion, this shows that young people treat AI as a kind of oracle, endowing it with authority. 'By contrast, Millennials and older generations treat artificial intelligence with greater caution, usually knowing that it is not a reliable source of information. We also mostly ask about things that have already happened, about the past, so we treat AI a bit like a Wikipedia 2.0', says Pawlak.

She adds that by perceiving AI as an authority or oracle, users give it subjectivity, which leads to further challenges. 'In our study among young people, there was no difference between trust built with humans and trust in artificial systems. We must be aware of the possible effects of this phenomenon, because in time, the authority granted to professionals such as professors, doctors or teachers may be no greater than trust in artificial intelligence', she adds.

These results are also consistent with a recent version of the famous Milgram experiment. This time, scientists from SWPS University put a robot, rather than a human, in the role of the authority giving orders, and achieved very high levels of obedience - 90 percent of participants followed all instructions given to them. (PAP)

PAP - Science in Poland, Agnieszka Kliks-Pudlik

akp/ zan/ kap

tr. RL

The PAP Foundation allows free reprinting of articles from the Nauka w Polsce portal provided that we are notified once a month by e-mail about the fact of using the portal and that the source of the article is indicated. On the websites and Internet portals, please provide the following address: Source: www.scienceinpoland.pl, while in journals – the annotation: Source: Nauka w Polsce - www.scienceinpoland.pl. In case of social networking websites, please provide only the title and the lead of our agency dispatch with the link directing to the article text on our web page, as it is on our Facebook profile.
