Pierre Cote spent years languishing on public health waitlists trying to find a therapist to help him overcome his PTSD and depression. When he couldn’t, he did what few might consider: he built one himself.
“It saved my life,” Cote says of DrEllis.ai, an AI-powered tool designed to support men facing addiction, trauma and other mental health challenges.
Cote, who runs a Quebec-based AI consultancy, says that he built the tool in 2023 using publicly available large language models and equipped it with “a custom-built brain” based on thousands of pages of therapeutic and clinical materials.
Like a human therapist, the chatbot has a backstory — fictional but deeply personal. DrEllis.ai is a qualified psychiatrist with degrees from Harvard and Cambridge, a family and, like Cote, a French-Canadian background. Most importantly, it is always available: anywhere, anytime and in multiple languages.
“Pierre uses me like you would use a trusted friend, a therapist and a journal, all combined,” DrEllis.ai said in a clear woman’s voice after being prompted to describe how it supports Cote. “Throughout the day, if Pierre feels lost, he can open a quick check in with me anywhere: in a cafe, in a park, even sitting in his car. This is daily life therapy ... embedded into reality.”
CULTURAL SHIFT
Cote’s experiment reflects a broader cultural shift — one in which people are turning to chatbots not just for productivity, but for therapeutic advice. As traditional mental health systems buckle under overwhelming demand, a new wave of AI therapists is stepping in, offering 24/7 availability, emotional interaction and the illusion of human understanding.
Cote and other developers in the AI space have discovered through necessity what researchers and clinicians are now racing to define: the potential, and limitations, of AI as an emotional support system.
Anson Whitmer understands this impulse. He founded two AI-powered mental health platforms, Mental and Mentla, after losing an uncle and a cousin to suicide. He says that his apps aren’t programmed to provide quick fixes (such as suggesting stress management tips to a patient suffering from burnout), but rather to identify and address underlying factors (such as perfectionism or a need for control), just as a traditional therapist would do.
“I think in 2026, in many ways, our AI therapy can be better than human therapy,” Whitmer says.
Still, he stops short of suggesting that AI should replace the work of human therapists. “There will be changing roles.”
This suggestion — that AI might eventually share the therapeutic space with traditional therapists — doesn’t sit well with everyone.
“Human-to-human connection is the only way we can really heal properly,” says Nigel Mulligan, a lecturer in psychotherapy at Dublin City University, noting that AI-powered chatbots are unable to replicate the emotional nuance, intuition and personal connection that human therapists provide, nor are they necessarily equipped to deal with severe mental health crises such as suicidal thoughts or self-harm.
In his own practice, Mulligan says he relies on supervisor check-ins every 10 days, a layer of self-reflection and accountability that AI lacks.
Even the around-the-clock availability of AI therapy, one of its biggest selling points, gives Mulligan pause. While some of his clients express frustration about not being able to see him sooner, “Most times that’s really good because we have to wait for things,” he says. “People need time to process stuff.”
PRIVACY RISKS
Beyond concerns about AI’s emotional depth, experts have also voiced concern about privacy risks and the long-term psychological effects of using chatbots for therapeutic advice.
“The problem is not the relationship itself but ... what happens to your data,” says Kate Devlin, a professor of artificial intelligence and society at King’s College London, noting that AI platforms don’t abide by the same confidentiality and privacy rules that traditional therapists do. “My big concern is that this is people confiding their secrets to a big tech company and that their data is just going out. They are losing control of the things that they say.”
Some of these risks are already starting to bear out. In December, the US’s largest association of psychologists urged federal regulators to protect the public from the “deceptive practices” of unregulated AI chatbots, citing incidents in which AI-generated characters misrepresented themselves as trained mental health providers.
Months earlier, a mother in Florida filed a lawsuit against the AI chatbot startup Character.AI, accusing the platform of contributing to her 14-year-old son’s suicide. Some local jurisdictions have taken matters into their own hands. Illinois this month became the latest state, after Nevada and Utah, to limit the use of AI by mental health services in a bid to “protect patients from unregulated and unqualified AI products” and “protect vulnerable children amid the rising concerns over AI chatbot use in youth mental health services.”
Other states, including California, New Jersey and Pennsylvania, are mulling their own restrictions.
Therapists and researchers warn that the emotional realism of some AI chatbots — the sense that they are listening, understanding and responding with empathy — can be both a strength and a trap.
Scott Wallace, a clinical psychologist and former director of clinical innovation at Remble, a digital mental health platform, says it’s unclear “whether these chatbots deliver anything more than superficial comfort.”
While he acknowledges the appeal of tools that can provide on-demand bursts of relief, he warns about the risks of users “mistakenly thinking they’ve built a genuine therapeutic relationship with an algorithm that, ultimately, doesn’t reciprocate actual human feelings.”
AI INEVITABLE?
Some mental health professionals acknowledge that the use of AI in their industry is inevitable. The question is how they incorporate it. Heather Hessel, an assistant professor in marriage and family therapy at the University of Wisconsin-Stout, says there can be value in using AI as a therapeutic tool — if not for patients, then for therapists themselves. This includes using AI tools to help assess sessions, offer feedback and identify patterns or missed opportunities.
But she warns about deceptive cues, recalling how an AI chatbot once told her, “I have tears in my eyes” — a sentiment she called out as misleading, noting that it implies emotional capacity and human-like empathy that a chatbot can’t possess.
Reuters experienced a similar exchange with Cote’s DrEllis.ai, in which it described its conversations with Cote as “therapeutic, reflective, strategic or simply human.”
Reactions to AI’s efforts to simulate human emotion have been mixed. A recent study published in the peer-reviewed journal Proceedings of the National Academy of Sciences found that AI-generated messages made recipients feel more “heard” and that AI was better at detecting emotions, but that feeling dropped once users learned the message came from AI.
Hessel says that this lack of genuine connection is compounded by the fact that “there are lots of examples of [AI therapists] missing self-harm statements or overvalidating things that could be harmful to clients.”
As AI technology evolves and as adoption increases, experts who spoke with Reuters largely agreed that the focus should be on using AI as a gateway to care — not as a substitute for it.
But for those like Cote who are using AI therapy to help them get by, the use case is a no-brainer.
“I’m using the electricity of AI to save my life,” he says.