Misuse of generative artificial intelligence tools in education can weaken independent thinking and hinder knowledge development, particularly among students without a strong foundation in a subject, research and experiments show.
‘Research findings increasingly suggest that language models are not only suboptimal for the learning process, they can even have a negative impact on the development of knowledge and skills for individuals who lack their own knowledge in a given field,’ said Joanna Mytnik, PhD, head of the Center for Modern Education at the Gdańsk University of Technology.
She added that using generative AI tools can lead to overreliance on technology and weaken the ability to solve problems independently.
Mytnik highlighted a clear link between critical thinking and AI dependence. ‘People with the lowest levels of critical thinking skills are often the most dependent on AI,’ she said.
EEG experiments analysing the brain activity of young people writing essays with ChatGPT support showed weakened neural connections. ‘They demonstrated the weakest neural connection patterns, had difficulty recalling what they had just written, and reported a lower sense of ownership of their work,’ Mytnik said.
However, different effects were observed when AI was used after independent work. ‘In the group of people who used GenAI tools only after the independent work stage, a significant increase in neural connections was observed. These results suggest that re-engagement supported by GenAI tools induces high levels of cognitive integration and memory reactivation,’ she said.
Mytnik also warned about behavioural risks posed by AI tools. ‘They are designed to be both helpful and engaging, constantly offering flattery and meeting user expectations, and this is an addictive combination,’ she said, citing the phenomenon of sycophancy in large language models. ‘This is a key ethical and cognitive problem with LLMs (an LLM is a language model trained on large amounts of data that enables text generation – ed. PAP).’ She added that more ‘polite’ models are often less critical and less truthful.
The rise of virtual companion platforms among teenagers also raises concerns. Citing Pew Research, Mytnik said more than half of 13–17-year-olds regularly use platforms like ‘character.ai’. ‘We should be aware that if we increasingly rely on chatbots, over time we lose the ability to build deep connections with people, accept differing opinions, and collaborate,’ she said.
Despite the risks, Mytnik noted that AI can support learning if used appropriately. ‘I want to make it clear that AI tools can be an excellent support in the learning process and can do many tasks for us, but they should not replace the process of understanding and internalising knowledge. We need to use them consciously,’ she added. She compared knowledge-building to effortful practice: ‘It is like trampling a path through the weeds: the more you walk it, the more durable it becomes, and if you do not use it, it disappears.’
Delegating effort to AI may create ‘apparent understanding’. ‘Relying on external tools bypasses the process of creating and strengthening memory traces, which leads to apparent understanding. We have a false sense of ownership of the process, but it is an illusion,’ Mytnik said.
She suggested ways to use AI constructively in learning. ‘Chatbots are excellent for simulating dialogue – you can let AI play the role of a sceptic who asks difficult questions,’ she said. Students can also use AI to analyse perspectives on problems or to plan revision using spaced repetition.
Mytnik called for changes in student assessment. ‘Traditional grading models focused on the end result are losing their developmental and diagnostic function,’ she said, noting that AI-generated answers create an ‘illusion of deep thinking’ and make authorship difficult to verify. ‘You can no longer be certain of authorship,’ she emphasised.
‘It is time to include students in this process and add assessment of the path to the goal, reflection, and the ethical use of AI as part of the cognitive process,’ she said. In practice, this could include documenting work with AI, keeping decision journals, and preparing reflective summaries.
She also highlighted the evolving role of teachers. ‘The teacher is no longer a provider of information (...) but is becoming a mentor, a designer of educational experiences, a guide in the world of (dis)information,’ she said. ‘GenAI can replace some of our work, but it will not replace the relational role associated with building motivation, group process, and a sense of safety and belonging,’ she emphasised.
The Gdańsk University of Technology has prepared students and staff for generative AI use through its EDUxAI strategy, which promotes responsible technology use. ‘It is crucial for us not only to teach tools, but above all to teach thinking about AI, responsible use, and the ability to decide whether and when to use the technology,’ Mytnik explained.
‘The university does not prohibit the use of AI tools in the learning process or in writing papers, but we do talk about ethics, risks, and forms of support,’ she added. Key competencies include critical thinking, metacognition, and the ability to question AI outputs. ‘AI empowers experts. You need to have your own knowledge to benefit from this AI enhancement,’ she said. (PAP)
pm/ agt/
tr. RL