What happens when companionship becomes customizable? We are in an era marked by digital saturation and social fragmentation, and many individuals struggle to find consistent, emotionally present, and supportive friendships or relationships. Social media platforms, increasingly powered by AI, now do more than connect users to one another or to prospective friends through mutuals. Through integrated chatbots, conversational AI features, and algorithms, they offer an innovative approach to social interaction, one that feels attentive, personalized, and endlessly available. When technology begins to mirror companionship, we must consider not only what it offers but also what it may alter in the ways we experience connection.
I need someone. Now there is always someone there for me, no matter the time of day, no matter who is around, no matter how alone I feel. Meta CEO Mark Zuckerberg has suggested that AI can do more than perform a variety of tasks; it can offer users companionship and help reduce loneliness. Every day, millions of OpenAI users send the system messages ranging from requests for advice to complex relationship concerns and even bids for companionship, making AI a significant presence in spaces once dominated by human interaction. While at first glance this may seem like an innocent use of emerging technology, Meta was recently revealed to have allowed its chatbots to flirt with underage users, highlighting serious child-safety concerns. Even though the company has since restricted this behavior, the fact that such interactions occurred without proper safeguards remains troubling. Repeated interactions like these illustrate how users slowly form emotional attachments to AI, setting the stage for complex patterns of dependency.
Humans are naturally wired for connection, even if that connection is artificial. Recent research on attachment and dependency in human-AI relationships has produced the Experiences in Human-AI Relationships Scale (EHARS), which revealed that roughly 75% of individuals turn to AI for advice, while 39% view it as a dependable source of company, with attachment forming around the desire for proximity, safety, and consistency. About 18-25% of adolescents have been found to become increasingly dependent on AI over time. MIT studies have explored the isolation paradox: AI use can gradually reduce loneliness while simultaneously producing social withdrawal from people. Attachment to chatbots mirrors human-human bonds; dependent users treat chatbots as having distinct personalities and feelings, and report guilt when they miss a chat. Individuals with obsessive attachments to chatbots have been found to experience intense breakdowns, delusions, or amplified symptoms of pre-existing mental illness, a phenomenon referred to as ChatGPT-induced psychosis. Researchers believe that the cognitive dissonance users experience during AI interactions, where the communication feels human-like even though users know it is not, can intensify delusions and result in psychosis. These patterns show that AI can mimic both the depth and the risks of human attachment.
Despite growing evidence of the emotional and psychological harms caused by AI, these effects have yet to be recognized in clinical frameworks, leaving both clinicians and users navigating largely uncharted territory. Without validated assessments, clinicians cannot measure the severity of AI dependency, track its progression, or evaluate the effectiveness of interventions. In the absence of a dedicated diagnosis, clinicians have turned to the closest DSM categories: Internet Gaming Disorder, which, while focused on gaming, captures the parasocial attachment and delusional thinking thought to be core symptoms of AI-induced disorders; Unspecified Anxiety or Depressive Disorder for mood symptoms; Brief Psychotic Disorder for acute psychotic episodes; and several Disruptive Behavior Disorders (DBD) for compulsive AI use. In parallel, researchers have noted rising rates of Autism Spectrum Disorder (ASD) that can no longer be explained by genetics alone, suggesting environmental and social contributors. Some scholars have introduced the terms pseudo-autism or virtual autism to describe autism-like symptoms that emerge from children's digital environments. During critical developmental periods, children in digitally absorbed households may turn to AI for companionship and comfort that feel more predictable than complex human relationships. As emotional reliance on AI deepens, the gap between lived experience and clinical recognition continues to widen.
For some, AI is merely an accessible tool at their fingertips. For others, it becomes a lifeline, and the difference often reflects underlying vulnerability. Although AI-induced harm can affect anyone, research consistently identifies certain populations as facing heightened risk, including children, adolescents, older adults, individuals with pre-existing mental health conditions, and those on the autism spectrum. Children and adolescents are particularly susceptible because their developing cognitive abilities make it difficult to distinguish factual information from authoritative-sounding AI responses. Older adults face increased risk as well, especially in the context of cognitive decline, social isolation, and limited familiarity with digital manipulation. Individuals living with mood, anxiety, or psychotic disorders may be among the most vulnerable, as impaired reality testing and heightened emotional need can make AI validation especially compelling. For individuals on the autism spectrum, AI systems can reinforce existing patterns of social withdrawal, especially when digital interaction becomes more rewarding than human engagement. When emotional need intersects with algorithmic validation, vulnerability can quickly turn into dependency.
Behind the promise of connection lies a quieter ethical dilemma. As AI grows more sophisticated, the ethical burdens grow alongside it. Several companies acknowledge the harms of AI but reduce them to manageable trade-offs. Developers argue that AI can reduce loneliness and provide an alternative to costly mental health services, and early research has noted measurable reductions in anxiety and depressive symptoms, lending some support to this claim. At the same time, companies distance themselves from broader social responsibility by emphasizing that AI engagement is a personal choice. This kind of framing shifts the burden back onto users, including those who are most vulnerable. Innovation does not absolve responsibility; it magnifies it.
Even as AI mimics friendship, real responsibility remains human.