In Mental Health, “Working as Designed” Is Not Good Enough

This essay makes the case that an AI system can follow its rules, pass its audits, and still be too dangerous to deploy in mental health care, because the point of failure is judgment, not just guardrails.
