
A growing number of patients are turning to AI for mental health support — often before ever speaking with a clinician. A recent survey from the National Alliance on Mental Illness (NAMI) and Ipsos found that 12 percent of adults say they are likely to use chatbots for mental healthcare in the next six months, while 1 percent report already using them.
Although AI tools like ChatGPT are neither licensed nor clinically trained to provide evidence-based mental health support, many people still find them helpful. Additionally, one study found lower anticipated stigma among people who used the AI chatbot to share their mental health struggles.
While generative AI may reduce stigma and increase engagement in mental health support, it can also introduce clinical risks and should be paired with guidance from a licensed provider.
The study, published in Behavioral Sciences, examined whether using ChatGPT for mental health support is associated with two types of stigma: anticipated stigma and self-stigma.
Seventy-three participants, most of them undergraduate psychology students, completed online self-report measures assessing their use of ChatGPT for mental health purposes, the perceived effectiveness of the AI chatbot for mental health struggles, and their levels of anticipated stigma and self-stigma.
Researchers found that higher perceived effectiveness of ChatGPT was associated with greater use and lower levels of anticipated stigma. Simply put, when patients believe that AI is helpful, they may feel less fear of judgment when discussing mental health concerns.
The findings revealed that when ChatGPT is viewed as an effective mental health tool, there’s a reduction in anticipated stigma regarding mental health issues. Researchers concluded that further research on this ever-evolving technology is necessary to inform best practices for incorporating it into the management of mental health issues.
It’s understandable why patients may turn to AI for mental health support before reaching out to a licensed clinician. AI chatbots are nonjudgmental and allow users to be anonymous while disclosing personal information.
Sharing personal mental health struggles can be difficult for many patients, as they fear they’ll be shamed and stigmatized by the same professionals who are supposed to offer them guidance and hope.
The research is an opportunity for clinicians to better understand how deeply AI is already integrated into mental healthcare, and how they can use it as a bridge to professional support.
While ChatGPT developers have made efforts to implement safeguards when users seek mental health guidance, AI tools can still be harmful. There have been reports of AI chatbots helping people plan and carry out suicide, highlighting gaps in safety and escalation protocols.
In addition, there's growing evidence that AI may exhibit bias or stigma toward certain conditions, such as alcohol dependence or schizophrenia. When AI tools stigmatize users seeking support, it can cause significant harm and potentially lead them to discontinue mental healthcare.
There are also several limitations that large language models (LLMs) continue to face, including:
Understanding these limitations is critical to minimizing harm and guarding against inappropriate use. Educating patients on how to use ChatGPT safely as a supplement to professional care can help reduce that risk.
As AI becomes more embedded in everyday health-seeking behaviors, clinicians should expect its use to continue growing.
As a provider, there are several approaches you can take to navigate discussions about AI use, including:
These conversations can also reveal unmet needs, access gaps, or hesitations about traditional care that may not otherwise surface.
While it can be challenging to get patients to stop using AI entirely, clinicians can instead recommend it as a psychoeducational supplement to professional treatment.

More and more patients are using AI to address their health concerns — and as clinicians, you can help guide them in safely using this technology.
Sharing the following tips with patients can be a great starting point for discussing safe AI use:
You can encourage a multimodal approach (AI, therapy, and medication), allowing patients to continue using AI while also safely receiving professional support.
AI is already influencing how patients engage with mental health support — often before they visit a therapist’s office. While it may reduce stigma and increase engagement, it also introduces clinical risks.
For clinicians, the goal isn’t to discourage its use entirely, but to guide it. Without that guidance, patients may increasingly rely on AI in ways that bypass clinical care altogether.