The mental health field is increasingly looking to chatbots to relieve escalating pressure on a limited pool of licensed therapists. But in doing so, it is entering uncharted ethical territory, confronting questions about how closely AI should be involved in such deeply sensitive support.
Researchers and developers are in the very early stages of figuring out how to safely blend artificial intelligence-driven tools like ChatGPT, or even homegrown systems, with the natural empathy offered by humans providing support — especially on peer counseling sites where visitors can ask other internet users for empathetic messages. These studies seek to answer deceptively simple questions about AI’s ability to engender empathy: How do peer counselors feel about getting an assist from AI? How do visitors feel once they find out? And does knowing change how effective the support turns out to be?
They’re also grappling, for the first time, with a thorny set of ethical questions, including how and when to inform users that they’re participating in what is essentially an experiment to test an AI’s ability to generate responses. And because some of these systems are built to let peers send each other supportive texts using message templates, rather than to provide professional medical care, they may fall into a gray area where the oversight required for clinical trials doesn’t apply.
“The field is sometimes evolving faster than ethical discussion can keep up,” said Ipsit Vahia, the head of McLean’s Digital Psychiatry Translation and Technology and Aging Lab. Vahia said the field is likely to see more experimentation in the years ahead.
That experimentation could carry risks: Experts said they’re concerned about inadvertently encouraging self-harm or missing signals that the help-seeker might need more intensive care.
But they’re also worried about rising rates of mental health issues, and the lack of easily accessible support for many people who struggle with conditions such as anxiety or depression. That’s what makes it so essential to strike the right balance between safe, effective automation and human intervention.
“In a world with not nearly enough mental health professionals, lack of insurance, stigma, lack of access, anything that can help can really play an important role,” said Tim Althoff, an assistant professor of computer science at the University of Washington. “It has to be evaluated with all of [the risks] in mind, which creates a particularly high bar, but the potential is there and that potential is also what motivates us.”
Althoff co-authored a study published Monday in Nature Machine Intelligence examining how peer supporters on a site called TalkLife felt about responses to visitors that were co-written with a homegrown chat tool called HAILEY. In a controlled trial, researchers found that almost 70% of supporters felt that HAILEY boosted their own ability to be empathetic — a hint that AI guidance, used carefully, could augment a supporter’s ability to communicate deeply with other humans. Supporters were informed that they might be offered AI-guided suggestions.
Instead of telling a help-seeker “don’t worry,” for instance, HAILEY might suggest the supporter type something like “it must be a real struggle,” or ask about a potential solution.
The study’s positive results grew out of years of incremental academic research dissecting questions like “what is empathy in clinical psychology or a peer support setting” and “how do you measure it,” Althoff emphasized. His team did not show the co-written responses to TalkLife visitors at all — the goal was to understand how supporters might benefit from AI guidance before any AI-guided replies are sent to visitors, he said. His team’s previous research suggested that peer supporters report struggling to write supportive and empathetic messages on such sites.
In general, developers exploring AI interventions for mental health — even in peer support — would be “well-served being conservative around the ethics, rather than being bold,” said Vahia.
Other attempts have already drawn ire: Tech entrepreneur Rob Morris drew censure on Twitter after describing an experiment involving Koko, a peer-support system he developed that allows visitors to anonymously ask for or offer empathetic support on platforms including WhatsApp and Discord. Koko offered a few thousand peer supporters AI-generated response suggestions based on the incoming message; the supporters were free to use, reject, or rewrite them.
Visitors to the site weren’t explicitly told upfront that their peer supporters might be guided by AI — instead, when they received a response, which they could choose to open or not, they were notified that the message may have been written with the help of a bot. AI scholars lambasted that approach in response to Morris’ posts. Some said he should have sought approval from an institutional review board — a process that academic researchers typically follow when studying human subjects — for the experiment.
Morris told STAT that he did not believe this experiment warranted such approval, in part because it didn’t involve personal health information. He said the team was simply testing out a product feature, and that the original Koko system stemmed from previous academic research that had gone through IRB approval.
Morris discontinued the experiment after he and his staff concluded internally that they did not want to muddy the natural empathy that comes from a pure human-to-human interaction, he told STAT. “The actual writing could be perfect, but if a machine wrote it, it didn’t think about you … it isn’t drawing from its own experiences,” he said. “We are very particular about the user experience and we look at data from the platform, but we also have to rely on our own intuition.”
Despite the fierce online pushback he faced, Morris said he was encouraged by the discussion. “Whether this kind of work outside academia can and should go through IRB processes is a really important question and I’m really excited to see people getting super excited about that.”