08.05.2023

Professor Justyna Król-Całkowska: We do not know who is liable for AI errors

Credit: Adobe Stock

One of the challenges of using artificial intelligence (AI) in medicine is the lack of regulation. This can cause a number of problems, says Professor Justyna Król-Całkowska, head of the Department of International and European Law at Lazarski University in Warsaw and a member of the Council of Experts at the Ombudsman for Patients' Rights.

Artificial intelligence is widely used in medicine. It can be an effective way to accelerate and improve the diagnosis of diseases and to raise the quality of medical care. There is, however, a problem: 'issues regarding artificial intelligence are not regulated at all in Polish law', says Professor Justyna Król-Całkowska.

'This is a fundamental problem, because we draw on EU regulations, but we do not have regulations that would directly address artificial intelligence and errors during its operation', she adds.

According to the expert, in medicine we deal with artificial intelligence when a complex algorithm, similar to a human neural network, is able to make an individual decision based on the variables provided to it. It is used primarily in diagnostics, especially in imaging. 'The artificial intelligence program views X-ray images, for example, and, based on what it has learnt from the data, determines whether an image is normal or pathological', the professor explains. AI is also used in radiology, psychiatry, dermatology and cancer diagnosis.

The medical profession carries a heavy burden of responsibility, but what happens if an AI system makes a mistake? 'We can assume that the blame lies with the person who programs the artificial intelligence and fails to predict that, based on the variables provided to it, it might make a wrong decision', the expert says. On the other hand, in her opinion, holding the developer who programmed the AI liable seems extremely broad, since AI is designed to make autonomous decisions, which inevitably involves some risk. The second concept is to attribute responsibility for errors to the AI user, but this would limit trust in AI and the willingness to use it in healthcare.

'It is difficult to say today who is ultimately responsible and which of these concepts is optimal', says Król-Całkowska. What we do know is that the law is one of the tools that can ensure accountability exists. It can also reduce the fear that both doctors and patients associate with using AI.

Poland is not the only country without specific legal regulations on artificial intelligence. The Member States of the European Union face the same problem. Solutions have been proposed, notably in the draft regulation of the European Parliament, 'but what good is it that they are proposed, and some are really quite good, if they are not implemented in the legal systems of individual countries', the professor points out.

Trust in artificial intelligence is still limited, 'because on the one hand there is a lot of talk about AI, and on the other it is completely unclear what regulates it and how', the expert says. In addition, people fear the lack of human participation in the treatment process, because they consider humans to be less fallible. According to Król-Całkowska, this belief is illusory. Studies have shown that the average young radiologist is three times more likely to make mistakes than an artificial intelligence algorithm that has been learning for only two months. 'Legal regulations will give a sense of stability to people who will use AI', she adds.

'On the part of patients, this fear should not exist, but doctors and medical staff should have it, because if artificial intelligence makes a mistake and there is no final verification by a human, we could speak of a failure to exercise due diligence, and this is the basis of liability in medical law', she concludes.

When can we expect the law to change? According to Professor Justyna Król-Całkowska, only when errors become noticeable and problematic. 'People tend to react after something happens. They introduce regulations when there is already a problem with something, and unfortunately, I think this will be the case with artificial intelligence', she says. The way to prevent a tragedy is to adopt unambiguous national regulations covering liability for the actions of AI. 'Without them, there may be a situation in which we react too late', she says.

'We already know what can happen and delaying the introduction of regulations is a huge mistake', the expert warns. (PAP)

PAP - Science in Poland, Delfina Al Shehabi

del/ mir/ kap/

tr. RL


Copyright © Foundation PAP 2024