
Even intelligent machines can be manipulated like puppets

Credit: Adobe Stock

Even an intelligent machine can be manipulated like a puppet. Cyberneticists call this an 'adversarial attack'. That is why scientists, including researchers at the Military University of Technology, study adversarial techniques, says Col. Rafał Kasprzyk, PhD, of the Military University of Technology, a guest of Studio PAP.

'Artificial intelligence, in the humanistic sense, is the "breath of warmth" that algorithms breathe into cold integrated circuits. Algorithms warm those circuits up and a new entity emerges: artificial intelligence. Some see it as light, others as a shadow, the darkness of the future reflected in the present. I see it as a tool that amplifies our intelligence', the engineer says.

He adds that this tool is rapidly becoming more autonomous. Data is the fuel of artificial intelligence algorithms, and the computing power of modern computers is their engine. He also leaves open the question of whether this new entity will start asking itself: who am I?

The evolution of artificial intelligence algorithms is taking place at an exponential pace.

'Classifying it can be compared to trying to draw a map of a continent that we have not yet explored. What is more, we do not realize that there may be other continents', the researcher adds. As an engineer, he divides AI into two types. The first is taught by an expert, a human who understands the machine and transfers the known rules 'into its head'. The second is a system that needs no human to teach it, because it learns from the data provided to it. This is how 'Software 2.0' is created, which is finding increasingly interesting applications and which does much better than humans in various areas.
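
To make the distinction concrete, here is a minimal sketch of the two types side by side; it is my illustration, not anything from the interview, and the "suspicious message" task and all data are made up:

```python
# Sketch of the two types: expert-written rules vs rules learned from data.
# Illustrative only: the task and the tiny dataset are made up.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Type 1: a human expert transfers known rules "into the machine's head".
def expert_rule(message: str) -> int:
    return int("password" in message.lower() or "transfer" in message.lower())

# Type 2 ("Software 2.0"): the system derives its own rules from examples.
messages = ["send your password now", "meeting at noon",
            "urgent transfer required", "lunch tomorrow?"]
labels = [1, 0, 1, 0]  # 1 = suspicious, 0 = benign

vectorizer = CountVectorizer()
model = MultinomialNB().fit(vectorizer.fit_transform(messages), labels)

# Both produce a decision, but only the first is an explicit, readable rule.
query = ["wire the transfer today"]
print(expert_rule(query[0]), model.predict(vectorizer.transform(query))[0])
```

The learned model's 'rules' are spread across numeric weights rather than written down anywhere, which is exactly the opacity the colonel describes next.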

The problem is that 'we do not fully understand the rules of this machine's behaviour. Of course, because we know how to build them, we understand the algorithm. We can guess, but we are not able to answer with 100% certainty what decision a machine trained in this way will make, and what rules are behind this decision', Kasprzyk admits.

He adds that there are many areas - especially the military - where machines cannot be freely used if it is not known on what principles they operate.

WEAPON SYSTEMS PACKED WITH SENSORS

The huge popularity of AI is linked to large language models, which learn by themselves from 'dictionary' data available on the Internet. Multimodal models learn not only from text but also from images, sound and all the data that sensors can collect, much as humans gather data from their environment through their senses. Kasprzyk reveals that the engineers behind such models 'sensorize' everything and collect all possible data first; only then do they think about what the data can be used for.

At the same time, algorithms are being developed for existing applications, especially in the military, where AI helps to gain an advantage over a potential opponent, and not only on the battlefield. For example, it enables reconnaissance, i.e. building situational awareness.

'Machine learning algorithms are able to identify suspicious objects precisely and very quickly, recognize them and track them. This is one of the "big" applications in civilian life as well, for example in autonomous vehicles. In the military, on the other hand, it is about collecting data and building a picture of the situation', the colonel says, explaining that the ability to fuse data from the huge number of sensors with which modern weapon systems are packed improves command.
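
As a rough sense of what the detection step of such a pipeline looks like in code (tracking and sensor fusion would sit on top of it), here is a minimal sketch; it is my illustration rather than anything from WAT, and the pretrained COCO model, dummy frame and 0.8 confidence threshold are placeholder choices:

```python
# Minimal object-detection sketch with a pretrained torchvision model.
# Illustrative only: real systems fuse many sensors, not one dummy frame.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")  # pretrained on COCO
model.eval()

frame = torch.rand(3, 480, 640)  # stands in for a single camera frame

with torch.no_grad():
    detections = model([frame])[0]  # dict with 'boxes', 'labels', 'scores'

keep = detections["scores"] > 0.8  # arbitrary confidence threshold
print(detections["boxes"][keep], detections["labels"][keep])
```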

IMPROVES, ADVISES, DOES NOT REPLACE

Course-of-action analysis, or 'varianting', i.e. suggesting the best directions of action to commanders, is another area of AI application on the battlefield. The commander may, but does not have to, follow such a recommendation. 'I hope we will never go in this direction', the colonel says of handing the decision itself over to the machine. He speaks with far more conviction, however, about the defence and protection of troops.

'Artificial intelligence can be used for all activities that are dirty, boring or dangerous, such as mine clearance. Autonomous or remotely controlled vehicles supported by artificial intelligence algorithms can do this instead of people. Robots can also camouflage an area or confuse enemy missiles', the engineer says, adding that drones can be used for such masking, and that controlling a very large number of small objects opens up a whole area of challenges for scientists. Military applications also include logistics, supply, and analysing the needs of both people and machines.

ONLY HUMANS SHOOT

The strongest emotions around AI applications concern strike capability, which is indispensable for waging war. After all, apart from reconnaissance, protection and supply, the army must first and foremost be able to destroy targets.

'Fortunately, at least for now, there are no deployed autonomous lethal weapon systems', Kasprzyk comments. Whether that will remain true in the future is in doubt. Perhaps one of the parties to a conflict will decide to take such a step, or perhaps - in the absence of communication with the machine - the machine will make the decision itself.

In cyberspace (i.e. on the Internet and wherever there is communication), AI can identify weaknesses in the enemy's ICT infrastructure. Armed forces can also use it to analyse their own systems in order to 'harden' them, i.e. to find and close all the 'back doors' through which an enemy could get in - the so-called vulnerabilities.
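
At its simplest, mapping an attack surface begins with checking which services a host exposes. The toy sketch below is my illustration, not a military tool; the host address and port list are placeholders, and real vulnerability analysis goes far beyond this:

```python
# Toy attack-surface sketch: check which TCP ports accept connections.
# Real hardening also covers service versions, CVEs, configs, fuzzing, etc.
import socket

HOST = "192.0.2.10"          # placeholder address (TEST-NET, RFC 5737)
PORTS = [22, 80, 443, 3389]  # a few commonly exposed services

for port in PORTS:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(1.0)
        # connect_ex returns 0 if the port accepted the connection
        status = "open" if sock.connect_ex((HOST, port)) == 0 else "closed"
        print(f"{HOST}:{port} {status}")
```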

ALGORITHM WARS

Kasprzyk explains that introducing software to the battlefield strengthens forces but also increases the attack surface. Vulnerabilities sometimes arise in an uncontrolled manner, but sometimes engineers deliberately build a back door into a system at the construction stage.

The case of artificial intelligence algorithms is more complicated, the expert points out. 'Since we do not fully understand the latest large models, built for example with deep neural network architectures, they may have vulnerabilities that are not visible to us. They are also not visible to the opponent, at least at first. However, they can be searched for and found with other machine learning algorithms. And here something appears that can be called machine warfare', he says.

He explains that an intelligent machine can be manipulated like a puppet. In the language of cyberneticists, this is an 'adversarial attack'. Artificial intelligence can be fooled by other algorithms: 'noise' can be prepared that allows malicious control of the machine, and special 'patches' can be placed on objects that it recognizes. For now, the human advantage is still large: what a human classifies without difficulty may mislead the device, e.g. a road sign with a malicious sticker or one painted with special paint. According to the engineer, this is why autonomous cars are not yet in widespread use on our roads.

'Computer vision models are extremely resistant to hooligan attacks, such as random, accidental destruction of road signs or painting over them. But when a "patch" is prepared for such a sign (...), it can make an autonomous car see a "stop" sign as a "priority road" sign; that is how badly it can be wrong. Imagine what would happen if we used such algorithms in the army. Patches prepared by the enemy could cause our systems to start attacking our own units. This is dangerous; common sense should be maintained here', the researcher says.
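
The 'noise' and 'patches' the colonel describes are typically computed from the target model's own gradients. Here is a minimal sketch of the classic fast gradient sign method (FGSM); it is my illustration rather than WAT's research code, and the pretrained model, random input and epsilon are placeholder choices:

```python
# FGSM sketch: craft adversarial "noise" from the model's own gradients.
# Illustrative only: epsilon and the random stand-in image are arbitrary.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights="DEFAULT").eval()

x = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in image
y = torch.tensor([0])                               # stand-in true label

loss = F.cross_entropy(model(x), y)
loss.backward()

# Step in the direction that *increases* the loss, clipped to valid pixels.
epsilon = 0.03
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

print("before:", model(x).argmax().item(),
      "after:", model(x_adv).argmax().item())
```

A physical adversarial patch works on the same principle, except the perturbation is optimized to survive printing, distance and viewing angle rather than being added pixel by pixel.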

Scientists, including at the Military University of Technology, conduct a great deal of research, including on adversarial techniques. According to Kasprzyk, who is deputy dean of the Faculty of Cybernetics at the Military University of Technology, Poland offers good career prospects for young people interested in the engineering and scientific side of artificial intelligence. For the best - those who make it through the 'intellectual training ground' and can handle mathematics, statistics and probability theory - good jobs are guaranteed.

'If someone wants to develop their skills in artificial intelligence and work in the military, they should enrol in military studies, especially since the Artificial Intelligence Implementation Centre (CISI) is now being created as part of the Cyberspace Defence Forces, and that is where our graduates will go. And if someone is interested in artificial intelligence and wants to work in a corporation or a state institution, they should choose civilian studies', the scientist says. He emphasises that graduates of the Faculty of Cybernetics at the Military University of Technology earn well and have the opportunity to carry out projects that combine laboratory and implementation work.

Returning to military matters, the colonel mentions scientific work on the use of AI in command and control systems and in autonomous drones. Research is also being conducted on operations in cyberspace (cyber ops) and in the information sphere (info ops). Young Polish researchers can take part in so-called information warfare and build effective methods of counteracting so-called deepfakes, which are produced with generative AI. Generative AI creates content that is not supported by facts and that can be used to manipulate individuals, social groups or entire societies. 'But of course we have the sword-and-shield principle [in the military – PAP], which means that tools based on AI algorithms are also being created to identify synthetic content that has the characteristics of disinformation', the engineer reassures.
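
To give a flavour of that 'shield' side, here is a minimal sketch of one training step for a real-versus-synthetic image classifier; it is my illustration, and the architecture, random stand-in data and labels are all placeholders for a real curated dataset:

```python
# Sketch of the "shield": one training step of a real-vs-synthetic classifier.
# Illustrative only: production deepfake detectors use far richer data/features.
import torch
import torch.nn as nn
from torchvision.models import resnet18

detector = resnet18(weights="DEFAULT")
detector.fc = nn.Linear(detector.fc.in_features, 2)  # {0: real, 1: synthetic}

images = torch.rand(8, 3, 224, 224)   # stand-in batch of images
labels = torch.randint(0, 2, (8,))    # stand-in real/synthetic labels

optimizer = torch.optim.Adam(detector.parameters(), lr=1e-4)
optimizer.zero_grad()
loss = nn.functional.cross_entropy(detector(images), labels)
loss.backward()
optimizer.step()  # one illustrative gradient step
```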


Karolina Duszczyk (PAP)

kol/ zan/

