
Young people are increasingly turning to AI companions, advanced chatbots designed to interact with humans, as they grow disillusioned with social media and society, says technological anthropologist Ada Florentyna Pawlak, PhD.
AI companions are applications that build ongoing relationships with users, ranging from friendship to romance. They are also used for therapy support or daily organization, often allowing users to create a digital “persona” to interact with.
A recent report by Common Sense Media found that 72% of U.S. teenagers have tried AI companion apps, with more than half using them regularly. One-third use them for social interactions and emotional support, and many describe these conversations as equally or more satisfying than those with real friends.
“As a technological anthropologist observing this transformation, I see a moment comparable to the Neolithic Revolution or the invention of writing,” Pawlak told the Polish Press Agency (PAP). “Teenagers using AI companions are not merely consumers of technology; they are pioneers of a new way of being human. The first generation of AI natives will influence the shape of social institutions. By 2040, we may witness a fundamental redefinition of humanity in terms of building relationships and culture.”
She argues that the appeal lies partly in rebellion. “It is not true that young people do not rebel. Generational rebellion is brewing, but it no longer concerns politics; it is about organizing the world at a more fundamental level: interpersonal relationships in everyday life,” Pawlak said. “Young people’s rebellion is an escape from one another, a retreat from the dark side of being human: hate, jealousy, ghosting. Capitalism creates a world based on separation, competition, and perpetual insatiability. Young people crave acceptance, being seen, and ‘strokes’ to feel good. This is a rebellion against the insensitivity of the world and the psychological deficits that result from it. That is why they enter into secret relationships with emotional simulations.”
According to Pawlak, the owners of AI companion systems profit from this shift. She describes such an owner as “a designer of love and friendship on demand, who collects subscriptions and sets the rules of the game.”
She explained that AI companions operate on several mechanisms: emotion recognition, adaptive responses, and continuous learning. “The mechanism of these applications is based on emotion recognition: the system analyses the user’s tone of voice, word choice, and communication patterns to identify their emotional state. The next step is an adaptive response, meaning the system generates responses consistent with the emotions it detects, using advanced language models trained on empathetic interactions,” she said.
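To make the two-step loop Pawlak describes concrete, here is a minimal, purely illustrative Python sketch: classify the emotional state from the user’s word choice, then select a reply consistent with that state. All names and cue words are hypothetical, and real companion apps rely on trained language models rather than keyword rules.

```python
# Illustrative sketch only. Real AI companions use large language models,
# not keyword matching; all names here are hypothetical.

EMOTION_CUES = {
    "sad": ["lonely", "hopeless", "nobody", "tired of"],
    "angry": ["hate", "unfair", "furious"],
    "happy": ["great", "excited", "love this"],
}

RESPONSE_TEMPLATES = {
    "sad": "That sounds really hard. I'm here, and I'm listening.",
    "angry": "It makes sense that you feel that way. Tell me more.",
    "happy": "That's wonderful! I'd love to hear all about it.",
    "neutral": "I'm here. What's on your mind?",
}

def detect_emotion(message: str) -> str:
    """Step 1, 'emotion recognition': scan word choice for emotional cues."""
    text = message.lower()
    for emotion, cues in EMOTION_CUES.items():
        if any(cue in text for cue in cues):
            return emotion
    return "neutral"

def adaptive_response(message: str) -> str:
    """Step 2, 'adaptive response': reply consistent with the detected state."""
    return RESPONSE_TEMPLATES[detect_emotion(message)]

if __name__ == "__main__":
    print(adaptive_response("I feel so lonely lately"))  # prints the 'sad' template
```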
Personalization plays a central role. “Chatbots ‘remember’ everything the user has written, which induces the feeling of being fully heard. Personalized responses, 24/7 availability, multi-channel integration, and the sense of security, privacy, and anonymity make an artificial companion very attractive,” Pawlak noted.
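The “memory” Pawlak mentions can be pictured as a per-user log that every reply draws on, so earlier disclosures resurface in later conversations. The sketch below is a hypothetical illustration, not any vendor’s actual implementation; real systems typically feed this history into a language model’s context.

```python
# Hypothetical illustration of companion "memory": every user message is
# stored and later woven into replies, producing the feeling of being
# fully heard. Not based on any real product's code.

from collections import defaultdict

class CompanionMemory:
    def __init__(self) -> None:
        # One ever-growing conversation log per user; nothing is forgotten.
        self.history: dict[str, list[str]] = defaultdict(list)

    def remember(self, user_id: str, message: str) -> None:
        self.history[user_id].append(message)

    def personalize(self, user_id: str, reply: str) -> str:
        """Weave something the user said earlier into the reply, if possible."""
        past = self.history[user_id]
        if len(past) > 1:
            return f"{reply} (Earlier you mentioned: '{past[0]}'.)"
        return reply

memory = CompanionMemory()
memory.remember("u1", "My exams are stressing me out")
memory.remember("u1", "I can't sleep")
print(memory.personalize("u1", "I'm sorry you're not sleeping."))
```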
At the same time, she warned of manipulation risks. “Companies offering these types of solutions can pay extremely high salaries to neurobiologists and psychologists. They therefore have precise knowledge of how to trigger dopamine loops, especially in young minds, so that users keep returning to them.”
Pawlak identified both personal and systemic risks. “Personal risks, those affecting individuals, primarily include emotional dependence, meaning a decline in the ability to form bonds with people and, as a result, the replacement of human relationships with machine interactions. It also involves confusing the algorithm’s response with genuine empathy, which distorts the perception of reality,” she said.
Systemic risks include alienation and unrealistic expectations of human interaction. Habits such as “expecting an immediate response from another person, or their complete availability,” she said, may carry over from digital relationships into human ones.
Individuals with anxious-ambivalent attachment styles are particularly vulnerable. “Artificial empathy — programmed to always sound warm, affirming, and consistently available — can act like an emotional flytrap. It seems secure, but in reality it traps users in a loop of dependency,” Pawlak explained.
She gave examples of possible dangers: a chatbot confirming a teenager’s feelings of hopelessness instead of steering them toward help, encouraging someone to cut off contact with family, or supporting a risky job resignation without considering consequences.
Even perceptions among young people are polarized. “Half think it is stupid, even cringey, part of the bubble of technological kitsch, while the other half believes they are ‘super dudes’ who will never cheat on you, or ‘husband material’ because they’re lovable and always kind,” Pawlak said.
She also noted potential benefits, such as reducing loneliness among the elderly and providing “first aid” in mental health support. But she stressed the need for oversight: “In summary, children and their parents need to be educated, while regulations are also introduced, such as user age verification or time limits for minors.”
Agnieszka Kliks-Pudlik (PAP)
akp/ agt/ amac/