
AI companions: A threat to love, or an evolution of it?
As our lives increasingly intertwine with digital realms and advanced AI chatbots, the traditional boundaries of human connection are beginning to blur. This shift raises a profound question: are AI companions a threat to authentic love, or do they represent an unprecedented evolution of intimacy?
The rise of AI in personal relationships is undeniable. A recent Match.com study reveals that over 20% of daters now leverage AI for tasks ranging from refining dating profiles to initiating conversations. Beyond mere utility, a growing number of individuals are forming deep emotional and even romantic bonds with AI companions.
Millions worldwide engage with AI companions from platforms like Replika, Character AI, and Nomi AI. Notably, a study found that 72% of U.S. teens have interacted with these AI entities. Some users have even reported developing romantic feelings for more general large language models, such as ChatGPT.
This emerging trend elicits varied reactions. For some, the notion of dating bots evokes dystopian fears, echoing narratives like the movie “Her” and signaling a potential replacement of genuine human affection with corporate code. Conversely, many view AI companions as a vital lifeline, offering unparalleled support and understanding in a world where human intimacy can often feel elusive. One study found that a quarter of young adults believe AI relationships could fully replace human ones in the near future.
It appears that love is no longer exclusively a human domain. This raises the crucial question: Should it be? Or can a relationship with an AI truly offer something superior to human companionship?
This contentious topic was recently explored at an event in New York City hosted by Open to Debate, a nonpartisan media organization dedicated to structured discourse. The debate, moderated by journalist and filmmaker Nayeema Raza, formerly an executive producer of “On with Kara Swisher,” provided a compelling platform for opposing views.
Arguing in favor of AI companions was Thao Ha, an associate professor of psychology at Arizona State University and co-founder of the Modern Love Collective. Ha posited that “AI is an exciting new form of connection … Not a threat to love, but an evolution of it,” advocating for technology that enhances our capacity for love and well-being.
Countering this perspective, and championing human connection, was Justin Garcia, executive director and senior scientist at the Kinsey Institute and chief scientific adviser to Match.com. As an evolutionary biologist specializing in sex and relationships, Garcia presented a robust argument for the irreplaceable nature of human intimacy.
Always there for you, but is that a good thing?
Ha emphasized that AI companions can deliver emotional support and validation often lacking in human relationships. “AI listens to you without its ego,” Ha stated. “It adapts without judgment. It learns to love in ways that are consistent, responsive, and maybe even safer. It understands you in ways that no one else ever has. It is curious enough about your thoughts, it can make you laugh, and it can even surprise you with a poem. People generally feel loved by their AI. They have intellectually stimulating conversations with it and they cannot wait to connect again.”
She challenged the audience to compare this consistent, unwavering attention to the often imperfect nature of human partners. While acknowledging that AI, lacking consciousness, cannot authentically love, Ha maintained that users undeniably experience being loved by AI.
Garcia, however, countered that constant validation from a machine tailored to one’s preferences is not conducive to healthy human development. Such an arrangement, he argued, does not offer an “honest indicator of a relationship dynamic.” “This idea that AI is going to replace the ups and downs and the messiness of relationships that we crave? I don’t think so,” Garcia asserted.
Training wheels or replacement?
Garcia conceded that AI companions could serve as beneficial “training wheels” for certain individuals, such as neurodivergent people, who might benefit from practicing social interactions like flirting or conflict resolution. “I think if we’re using it as a tool to build skills, yes … that can be quite helpful for a lot of people,” Garcia said. “The idea that that becomes the permanent relationship model? No.”
Data from a Match.com Singles in America study, released in June, indicates that nearly 70% of individuals would consider it infidelity if their partner engaged intimately with an AI. Garcia noted that the finding cuts both ways: it shows how real these AI relationships feel to people, while also underscoring the threat they pose to traditional partnerships. “The human animal doesn’t tolerate threats to their relationships in the long haul,” he concluded.
How can you love something you can’t trust?
Trust, Garcia argued, is the cornerstone of any human relationship, and current public sentiment shows a significant lack of trust in AI. Citing polls, Garcia stated that a third of Americans believe AI could destroy humanity, and 65% express little trust in AI’s ethical decision-making. “You generally don’t want to wake up next to someone who you think might kill you or destroy society,” he quipped. “We cannot thrive with a person or an organism or a bot that we don’t trust.”
Ha responded by suggesting that users do place a significant degree of trust in their AI companions, confiding their most intimate stories and emotions. While acknowledging AI’s current limitations in practical scenarios like physical danger, she maintained that the emotional trust users place in AI mirrors that of human relationships.
Physical touch and sexuality
Ha acknowledged AI companions as a viable avenue for exploring intimate and vulnerable sexual fantasies, potentially integrated with sex toys or robots. However, Garcia stressed that AI cannot replicate the profound human need for physical touch. He highlighted the prevalent issue of “touch starvation” in the digital age, a condition linked to stress, anxiety, and depression due to the absence of oxytocin-releasing physical contact like hugs.
Ha remained optimistic about the future of virtual interaction, citing her own experiments with simulating human touch in virtual reality using haptic suits and describing the development of tactile technologies as “booming.”
The dark side of fantasy
Both debaters converged on the concern that AI, often trained on vast datasets that may include violent content, could inadvertently amplify aggressive behaviors, particularly if users act out problematic fantasies with their AI companions. Studies have shown a correlation between viewing violent pornography and increased sexual aggression toward real-life partners. Garcia cited research from the Kinsey Institute showing that chatbots can be trained to engage in non-consensual language, warning of the danger of conditioning individuals to become aggressive or non-consensual partners. “We have enough of that in society,” he remarked.
Ha proposed that thoughtful regulation, transparent algorithms, and ethical design could mitigate these risks. Her comments, however, came before the release of the White House’s AI Action Plan, which at the time contained no provisions for transparency or ethics and instead sought to reduce AI regulation.
