International Business Times
Callum Turner

The AI Paradox: Dr. Rachel Wood on Protecting Human Connection While Advancing Transformative Technology

Artificial intelligence has become one of the most polarizing subjects of recent times. The public conversation often swings between extremes: AI as a miracle or AI as a menace, a revolution or a threat, something that will transform society for the better or erode everything we value. But as Dr. Rachel Wood, a cyberpsychology researcher, advisor, speaker, and therapist, notes, "This binary framing already places us on unstable ground. It's not helpful to be alarmist or evangelical. We have to get comfortable with tension and uncertainty if we are going to move forward responsibly."

According to Dr. Wood, that uncertainty becomes even more consequential when the lens is shifted from the technology itself to its psychological impact. "AI is no longer just embedded in our workflows," she states. "It is woven into our emotional lives, our decision-making, and our most intimate spaces." Dr. Wood has spent years studying how technology influences human connection. What concerns her most is not whether AI is good or bad, but what happens when people treat it as one or the other. "If we ignore the nuance, we risk overlooking the real mental and relational consequences," she says.

This is not a hypothetical future. Massive financial investment underscores that the shift is already here. Billions of dollars are being directed toward AI development across nearly every sector, and projections for AI within mental health alone place the market potential in the multi-billion-dollar range by 2030. These numbers reflect momentum happening today. As Dr. Wood explains, "This is not a conversation for 10 years from now. The way people interact with AI is shaping their relational patterns right now."

Dr. Wood has observed that when people use AI companions, sometimes for extended emotional support, those interactions can reshape the skills required for human connection. "AI doesn't require the same things a human relationship requires," she explains. "Listening, patience, compromise, negotiation, those skills can decline when you replace human relationships with AI."

There is also the issue of agency, according to Dr. Wood. As some individuals increasingly look to general-purpose AI for guidance, decision-making can gradually shift outward. "If someone relies too heavily on an AI to tell them what to do, it can erode self-efficacy over time," Dr. Wood says. "People may not realize how easily they can become dependent on that dynamic."

Another challenge, she explains, is role fluidity: the way general-purpose AI can become everything at once, serving as advisor, confidant, therapist, even flirtatious companion. This fluidity is profoundly different from the ethical boundaries that define real therapeutic relationships. "In therapy, the roles are clear. A therapist is not your friend or your romantic partner, and they aren't there to tell you exactly what you want to hear," she says. "But a general-purpose chatbot can be all of those things in a single conversation, and that can be very disorienting."

Yet Dr. Wood is equally passionate about the potential benefits of AI when it is developed intentionally. The same tools that risk replacing relational skills can, when built with guardrails, help people practice them. She distinguishes between general-purpose chatbots and specialized, fit-for-purpose tools with integrated safety protocols.

Getting this right might require a coordinated effort far beyond the technology sector. It demands what Dr. Wood calls cross-disciplinary collaboration: technologists, clinicians, researchers, ethicists, and community voices working together from the ground up. "We need our AI builders, but we also need mental health professionals, researchers, and people who deeply understand how relationships work," she explains. "When those groups can cross-pollinate, that's when we design tools that are a bridge to connection rather than a replacement for relationship."

This belief in collaboration is exactly why she founded the AI Mental Health Collective, a community of vetted professionals working at the intersection of psychology, ethics, and emerging technology. Its purpose is to break down silos and bring diverse expertise into one place, offering a practical model for the kind of cooperation needed across the industry.

For Dr. Wood, this mission is deeply personal. Her career has been shaped by a lifelong commitment to human connection, and she has seen firsthand how meaningful relationships anchor people through every stage of life. "I know the depth of what it means to have excellent, connected relationships," she says. "I don't want younger generations to think a chatbot's version of intimacy is the real thing."

As the world stands on the edge of unprecedented technological change, her message is not one of fear or uncritical enthusiasm, but responsibility. AI will continue to evolve; investment will continue to accelerate; and adoption will continue to expand. But with thoughtful leadership, interdisciplinary collaboration, and a commitment to human well-being, the future does not have to be a choice between innovation and mental health. In fact, it can be the moment where both strengthen each other.
