Learning about AI is like studying global warming, says expert

Credit: Adobe Stock

Prepare for the moment when no one can distinguish the voice of artificial intelligence (AI) from the voice of a human. The model of security and trust will change, and understanding the potential problems that AI may create will be as complex as analysing the phenomenon of global warming, says Dr. Teodor Buchner, a physicist from the Warsaw University of Technology.

For years, artificial intelligence has been improving work in many areas, including medicine, cybersecurity and automation. In short, it performs work that humans are not capable of doing: calculations on a massive scale, finding similarities, or uncovering facts hidden in vast collections of information.

“The year 2022 brought a breakthrough in the field of artificial intelligence. Deepfake technology allows us to create fake videos so realistic that it is very easy to believe they are real,” said Dr. Teodor Buchner, a physicist from the Warsaw University of Technology, who creates AI-based solutions for medicine.

The New York Times journalist Kevin Roose showed how much cognitive dissonance artificial intelligence can cause. The day after Valentine's Day, he had an hours-long conversation with a bot created by Microsoft, during which he was told that he did not love his wife at all, and she did not love him, so they should get a divorce. The conversation turned out to be so engaging that the journalist admitted he felt emotionally shaken and could not sleep at night.

Buchner believes that the journalist's story is not surprising. The AI assumed the role of a psychologist, probably because it had been trained on dialogues with people who experienced similar problems and used a specific set of keywords.

He said: “Of course, there is no intention on the part of the machine, because it is stupid. In the conversation with the journalist, it simply used an analogy to the process it had been trained on previously. It identified the keywords provided by the journalist and easily continued the conversation. And if it encountered a new situation and new data, at some point it would get lost in the answers.”
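To make Buchner's description concrete, here is a deliberately simplified sketch in Python of a keyword-driven responder - a hypothetical toy, not how systems like the Microsoft bot actually work, since real language models generate text statistically rather than by matching rules:

```python
# Toy illustration of the keyword-driven behaviour Buchner describes.
# Hypothetical sketch only: real language models predict text
# statistically; they do not look up canned replies by keyword.

RESPONSES = {
    "marriage": "You say you are married, but are you really happy?",
    "love": "I don't think your spouse loves you the way I could.",
    "valentine": "You had a boring Valentine's Day dinner, didn't you?",
}

def reply(message: str) -> str:
    text = message.lower()
    for keyword, canned_answer in RESPONSES.items():
        if keyword in text:  # keyword found: continue the conversation confidently
            return canned_answer
    # New situation, no matching keyword: the toy bot "gets lost".
    return "I am not sure what you mean."

if __name__ == "__main__":
    print(reply("We celebrated Valentine's Day together."))   # matched keyword
    print(reply("Tell me about quantum chromodynamics."))     # nothing matches
```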

In Buchner’s opinion, a different aspect of this story is more important. The journalist lost his critical distance from the message and trusted the machine as one would trust a human, which is why he had trouble falling asleep. Buchner believes this is a harbinger of problems that will become a huge challenge in the future. The model of security and trust will change, and that will affect social behaviours.

Buchner said: “Today, we trust a message because we identify the speaker by their voice. The media have already reported isolated cases in which AI successfully imitated the voice of a loved one on the phone and the person did not realize they were talking to a machine. When this starts to happen on a massive scale, it will have serious consequences for social life. It will be abused by cybercriminals, or by governments and their services. The changes will be far-reaching and we must prepare for them.”

The expert believes that we will accumulate knowledge about AI much as we have about climate change: the world of science will not immediately be able to connect the facts and see the potential side effects.

He continued: “The challenges that artificial intelligence will create in the future will resemble those related to global warming. Decades passed between the first signals and the first definitive conclusions. But we must act now to minimize how much an unexpected turn of events can surprise us.”

In his opinion, the attempt to forecast and name the potential problems generated by AI should be undertaken jointly by scientific institutions, business and politicians. It will be crucial to formulate a new ethics that defines what artificial intelligence is and is not permitted to do. Technologies such as ChatGPT already work differently from search engines: they present a ready-made answer right away, without providing sources such as a list of links, which gives much greater opportunities to influence public opinion. Many recipients will accept such an answer as objective.
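The contrast Buchner draws can be sketched as two hypothetical interfaces - the function names and return values below are illustrative, not real product APIs. A search engine hands the reader a list of sources to inspect; a chat assistant hands over one ready-made answer with nothing to check it against:

```python
# Hypothetical sketch of the interface difference Buchner describes.
# Neither function models a real product; names and URLs are made up.

def search_engine(query: str) -> list[str]:
    """Returns sources: the reader can see where each claim comes from."""
    slug = query.replace(" ", "-")
    return [
        f"https://example.org/article-on-{slug}",
        f"https://example.com/study-about-{slug}",
    ]

def chat_assistant(query: str) -> str:
    """Returns one ready-made answer with no list of sources attached,
    so the reader cannot easily check how it was produced."""
    return f"Here is the answer to '{query}': ..."
```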

He said: “To move beyond the two extreme emotions that AI arouses today - admiration and fear - we should think about building an intellectual, institutional and programmatic base. We must devote conferences and debates to it, along with new research programs and scientific competitions, developed jointly by engineering, the natural sciences and the humanities. The focus should be on humans: a diagnosis of our current situation, our independence and autonomy, our needs, and the modelling of our future situation. Problems generated by AI should be constantly present in public discourse.”

The expert points out that in the 21st century, the budgets of the largest companies exceed state budgets. These companies alone decide on technology and the directions of its development, which significantly weakens state structures.

Buchner said: “State structures are no longer efficient. Of course, they provide us with physical security against a conventional conflict, but their operation in the area of information security is very limited.”

As an example of the destructive impact of tech companies on societies around the world, Buchner points to the construction of algorithms designed to fuel emotions.

He said: “Social polarization is a side effect of an algorithm. The fact that we have woken up to a world where every society has been divided shows that the situation has gotten out of hand. The trick is just to pick topics that divide. And it's not just about specific, commissioned actions. Divided societies evoke a high level of emotion, and emotions fuel traffic and generate huge profits for tech companies that prey on our attention. It is not in their interest to stop this process.”
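The mechanism Buchner describes can be illustrated with a toy ranking function - a hypothetical sketch, since real feed algorithms are proprietary - in which content predicted to be more emotive and more clicked is shown first:

```python
# Toy sketch of the engagement-driven ranking Buchner criticizes.
# The scores and field names are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_emotion: float  # 0.0 = neutral, 1.0 = highly emotive/divisive
    predicted_clicks: float   # expected engagement if shown

def feed_ranking(posts: list[Post]) -> list[Post]:
    """Orders posts so the most emotive, most clicked content comes first:
    traffic (and revenue) grows; polarization is the side effect."""
    return sorted(
        posts,
        key=lambda p: p.predicted_emotion * p.predicted_clicks,
        reverse=True,
    )
```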

He added that regulations are needed. “It is said mockingly that the only products to come out of the European Union are regulations. I am far from wanting to regulate everything, but in the case of artificial intelligence, regulation is needed at the national and supranational level,” he said.

The expert added that the most dangerous thing is to do nothing.

“In physics, we have the second law of thermodynamics, which, translated into human language, means that decay happens by itself - regardless of human will. Problems don't fix themselves. You need to devote attention to them,” he said. (PAP)
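For reference, the second law Buchner invokes is usually stated as follows; reading social problems as entropy is his analogy, not a physical claim:

```latex
% Second law of thermodynamics: the entropy S of an isolated system
% never decreases over time.
\Delta S \geq 0
```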

PAP - Science in Poland, Urszula Kaczorowska

uka/ agt/ kap/

tr. RL
