AI Therapy App Regulation Varies by State

Source: apnews.com

Published on September 29, 2025

With the rise of artificial intelligence in mental health, states are stepping in to regulate AI therapy apps amid a lack of federal oversight. However, developers, policymakers, and advocates contend that these state laws are insufficient to safeguard users or ensure accountability for harmful technology.

Karin Andrea Stephan, CEO of the Earkick chatbot app, acknowledges that these tools are already in widespread use.

If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988. There is also an online chat at 988lifeline.org.

State Approaches to AI Therapy Regulation

State laws differ significantly. Illinois and Nevada have enacted outright bans on the use of AI to treat mental health. Utah has placed limits on therapy chatbots, including requirements to protect users' health information and to clearly disclose that the chatbot is not human. Pennsylvania, New Jersey, and California are considering similar regulations.

The consequences for users vary: some apps have blocked access in states with bans, while others are awaiting legal clarification before making changes. Many of the laws do not cover general-purpose chatbots such as ChatGPT, which, while not designed for therapy, are used for it by many people. Those chatbots have faced lawsuits in cases where users experienced serious harm.

The Need for Federal Oversight

Vaile Wright of the American Psychological Association points to the shortage of mental health providers and the high cost of care. She suggests that science-based, human-monitored mental health chatbots could be beneficial. Federal regulation could include restricting how chatbots are marketed, limiting addictive practices, requiring clear disclosures to users, tracking and reporting suicidal thoughts, and offering legal protections for people who report harmful practices.

AI products for mental health are diverse, ranging from companion apps to AI therapists, and regulatory strategies vary accordingly. Some states focus on friendship-style companion apps, while Illinois and Nevada ban products that claim to offer mental health treatment, with violators facing potential fines. Categorizing apps can be difficult.

Earkick initially avoided calling its chatbot a therapist, later embraced the term for visibility, and recently reverted to "chatbot for self care." The company maintains that it does not diagnose conditions, but it offers a "panic button" and encourages users to seek a therapist if needed. The app was not designed for suicide prevention, and police are not notified when users express thoughts of self-harm.

Stephan is concerned about states' ability to keep pace with AI innovation. Other apps, like Ash, have blocked access in Illinois, urging users to contact legislators about what they describe as "misguided legislation."

Mario Treto Jr. of the Illinois Department of Financial and Professional Regulation emphasizes that licensed therapists should be the only ones providing therapy, which requires empathy, clinical judgment, and ethical responsibility that AI cannot replicate.

Research and Development of AI Therapy

A Dartmouth team conducted a clinical trial of Therabot, a generative AI chatbot for mental health, to treat people with anxiety, depression, or eating disorders. The study indicated that users rated Therabot comparably to a therapist and reported reduced symptoms after eight weeks; all interactions were monitored by a human. Nicholas Jacobson, a clinical psychologist, says larger studies are needed and that the field should proceed cautiously. Therabot is still in testing and not widely available. Jacobson is concerned about the impact of strict bans and notes that Illinois offers no clear path for an app to demonstrate its safety and effectiveness.

Kyle Hillman of the National Association of Social Workers believes today's chatbots are not a solution to the mental health provider shortage. He says chatbots are not an appropriate option for people facing serious mental health issues or suicidal thoughts.