Publication date: 09.02.2026 16:48:00
In January 2026, the Public Interest Research Group and the Consumer Federation of America released a joint report warning that using AI chatbots as “mental health assistants” can lead to serious problems.
Researchers tested five bots on Character.AI and found troubling results: they often gave unsafe advice, contradicted medical standards, and reinforced harmful beliefs. Risks included weakened safeguards, encouragement to stop antidepressants, and uncritical support of dangerous thoughts.
The report also highlighted poor privacy protections: user data may be vulnerable to leaks or misuse. Human‑like design elements create false trust and emotional attachment, increasing dependency.
Experts stress that chatbots are unlicensed and cannot replace professionals. Risks extend beyond mental health to confidentiality breaches, with conversations potentially exposed or exploited by fraudsters. The authors call for strict regulation, transparency, and real safeguards to protect users seeking support.