AI chatbots fall short of mental health ethics standards, study finds
As more people turn to AI large language models (LLMs), especially ChatGPT, for mental health advice, new research suggests the technology may not be well suited to the task. The study found that even when given established psychotherapy methods and instructed to use them, the systems consistently failed to meet ethics standards set by organisations including the American Psychological Association.
The findings were presented at the AAAI/ACM Conference on Artificial Intelligence, Ethics and Society by a team from Brown University affiliated with the university’s Center for Technological Responsibility, Reimagination and Redesign.
Working closely with mental health professionals, the researchers identified repeated patterns of “problematic behaviour.” In tests, the AI chatbots mishandled crisis situations, gave responses that reinforced harmful beliefs users held about themselves or others, and used language that created an illusion of empathy without genuine understanding.
“In this work, we present a practitioner-informed framework of 15 ethical risks to demonstrate how LLM counselors violate ethical standards in mental health practice by mapping the model’s behavior to specific ethical violations,” the researchers wrote in their study. “We call on future work to create ethical, educational and legal standards for LLM counselors, standards that are reflective of the quality and rigor of care required for human-facilitated psychotherapy.”



