The Hard Truth From a Mental Health AI Founder: Vulnerable People Are Not Safe Test Cases

In a striking public statement, the founder of Yara AI said the company shut down because AI may be useful for low-stakes support but becomes dangerous when people in crisis, trauma, or suicidal distress turn to it for help.
