News
AI Therapy App Regulation Varies by State
Source: apnews.com
Published on September 29, 2025

As AI therapy apps become increasingly popular, states are taking action to regulate these tools in the absence of federal oversight. However, developers, policymakers, and advocates argue that current state laws fall short of protecting users or holding the makers of potentially harmful technology accountable.
The landscape of AI therapy app regulation varies significantly across the United States. Illinois and Nevada have implemented outright bans on AI-powered mental health treatment, while Utah has placed restrictions on therapy chatbots, requiring them to safeguard user health data and clearly disclose that the services are automated rather than human-provided. Pennsylvania, New Jersey, and California are currently exploring similar regulatory frameworks.
What these laws mean for users depends on where they live. Some apps have blocked access in states with bans, while others are waiting for legal clarity before making changes. Many of the laws also do not address general-purpose chatbots like ChatGPT, which, though not designed for therapy, are often used for mental health support; that gap has led to lawsuits when users have suffered harm.
State Approaches to AI Therapy Regulation
Karin Andrea Stephan, CEO of the Earkick chatbot app, recognizes the widespread use of AI therapy tools but emphasizes the need for clear guidelines. "These apps are being used by millions," she stated, "but the regulations aren't keeping up with the technology." Stephan also highlighted the importance of providing users with access to professional help, noting that the national suicide and crisis lifeline in the U.S. is available by calling or texting 988, with an online chat option at 988lifeline.org.
In Utah, the focus is on data protection and transparency. Therapy chatbots must clearly disclose their non-human nature and implement robust measures to protect user health data. Pennsylvania, New Jersey, and California are considering regulations that align with these principles, aiming to strike a balance between innovation and user safety.
The Need for Federal Oversight
Vaile Wright of the American Psychological Association underscores the shortage of mental health providers and the high cost of care. She suggests that science-based, human-monitored mental health chatbots could play a beneficial role. Federal regulation, she argues, could address marketing practices, limit addictive features, mandate disclosures, and require tracking of suicidal thoughts while providing legal protection for those reporting harmful practices.
The diversity of AI mental health tools, ranging from companion apps to full-fledged AI therapists, has led to varied regulatory strategies. Some states, like Illinois and Nevada, have banned products that claim to offer mental health treatment, with violators facing potential fines. However, categorizing these apps can be challenging: Earkick initially avoided calling its chatbot a therapist, later embraced the term for visibility, and then reverted to "chatbot for self-care." The app encourages users to seek professional therapy when needed and includes a "panic button" for urgent situations.
Stephan expresses concern about states' ability to keep pace with AI innovation. Other apps, like Ash, have blocked access in Illinois, urging users to contact legislators about what they describe as "misguided legislation." Mario Treto Jr., of the Illinois Department of Financial and Professional Regulation, stresses that therapy requires empathy, clinical judgment, and ethical responsibility that AI cannot replicate.
Research and Development of AI Therapy
A team at Dartmouth conducted a clinical trial of the Therabot generative AI chatbot, designed to treat anxiety, depression, and eating disorders. The study found that users rated Therabot similarly to human therapists and experienced reduced symptoms after eight weeks, though all interactions were monitored by a human. Nicholas Jacobson, a clinical psychologist, cautions that larger studies are needed and that the field should proceed with caution.
Kyle Hillman of the National Association of Social Workers argues that current chatbots are not a solution to the mental health provider shortage. He believes that AI cannot replace the human connection essential for treating mental health issues or suicidal thoughts. As the debate continues, the need for balanced regulation and continued research becomes increasingly clear.