Americans are more willing to let AI monitor their mental health than you might expect. But they have one firm requirement: Humans must stay in charge of what happens next.
We surveyed 1,000 people across the United States to learn how they feel about AI detecting mental health crises. The results challenge prevailing assumptions about privacy and technology resistance. People will accept AI monitoring, just not AI decision-making.
Take suicide risk detection: An AI system analyzes patients during a digital check-in, then flags concerning behavior. A quarter of respondents feel “somewhat comfortable” with this initial identification, but only if a human clinician reviews the assessment and decides what to do.
The data points toward an operational enhancement model, not clinical replacement. Americans support AI speeding detection and triage, but believe human judgment needs to remain central to treatment.
For hospitals and health systems evaluating AI crisis detection tools, understanding the full scope and nuance of consumer acceptance is critical.
The Surprising Openness to AI Mental Health Monitoring
Nearly half of Americans (49%) say they would use AI tools to automatically monitor their mental health without any clinical intervention. That monitoring could include AI systems continuously analyzing how someone looks during video calls, the pace of their keystrokes, changes in vocal inflection, and patterns in their digital behavior.
This finding contradicts countless headlines warning about an AI privacy backlash. While data security and privacy remain legitimate concerns, our survey reveals that mental health represents a unique exception. When the stakes involve potential self-harm or crisis, many people are willing to share data with AI tools that could protect them.
Where Americans Draw the Line
What most people don’t want, however, is AI making care decisions. When asked who should have the final say in an AI-flagged mental health emergency, 73% want human providers in control. Only 8% would trust AI systems to make those critical decisions independently.
When it comes to immediate crisis response, consumer preferences reveal a strong pull toward human connection. If AI detects someone might be at risk for self-harm, 28% want it to contact a pre-selected family member or friend first. Another 27% prefer a phone call from a trained counselor within 30 minutes.
Traditional approaches still hold significant appeal, with 32% preferring to stick with systems where they maintain full control over seeking help. That means if AI detects concerning patterns, they want to be notified but left to decide whether and how to reach out for support themselves. In contrast, only 22% trust AI to immediately connect them to a human professional without their explicit permission when risk is detected.
The Demographics that Shape Acceptance
Attitudes toward AI mental health monitoring vary widely, and understanding the differences across age, gender, and income is crucial for any organization planning a rollout.
The Gender Divide on AI-Supported Care
Men overall show significantly more comfort with AI monitoring. While 56% of men would use automatic AI mental health monitoring tools, only 41% of women express the same willingness.
The difference becomes even more pronounced when it comes to ceding control to automated systems, particularly in high-stakes mental health situations. In AI-flagged emergencies, 78% of women want human providers to make the final call, compared with 68% of men.
These findings have direct implications for how healthcare organizations message and market AI mental health tools. Approaches that emphasize efficiency and innovation may resonate more with male patients, while messaging focused on human oversight, safety protocols, and clinician involvement may be more effective with female patients.
Millennials and Gen Z Drive AI Acceptance
Nearly three in 10 Millennials (29%) and a quarter of Gen Z (24%) feel “very comfortable” with AI identifying mental health crises. Among Boomers, that number plummets to just 5%.
This comfort gap translates into dramatically different preferences for crisis response systems. When it comes to seeking help, 74% of Boomers prefer traditional “call for help myself” approaches. Gen Z takes the opposite view: only 20% want to rely on self-initiated help-seeking.
This suggests that a phased implementation strategy may be most effective, starting with more automated, AI-driven tools for younger demographics while keeping traditional pathways for older patients. Organizations may also need to prepare for a future where generational preferences shape service offerings, requiring parallel systems for different age groups.
Higher Earners Show More Skepticism
Lower-income consumers show the highest receptivity to AI mental health monitoring, with 61% of those earning $25,000 or less willing to use these tools. Among the highest earners, that figure drops to 44%, challenging the assumption that higher income brackets adopt new technology first.
Lower-income individuals may simply see greater value in automated support systems due to limited access to traditional mental health services.
This income difference suggests that premium positioning strategies may backfire. Instead of marketing AI mental health tools as luxury or cutting-edge solutions, positioning them as accessible, practical alternatives may drive higher adoption rates.
The data also indicates that AI mental health tools might find their strongest market among middle and lower-income segments, potentially reshaping how organizations think about pricing, distribution, and target markets for these emerging technologies.
The Trust-Building Essentials
Trust in AI mental health tools depends on specific safeguards that give users confidence and control.
- More than half (56%) say it’s “extremely important” that AI systems explain their reasoning when flagging someone as high-risk.
- Professional oversight builds trust for 32% who want a licensed therapist reviewing every AI recommendation before action is taken.
- User control matters to 25% who want complete control over when and how monitoring occurs, and another 16% want the ability to override or appeal AI decisions.
- When mistakes happen, 42% believe both AI developers and healthcare providers should share responsibility.
These requirements are conditions for acceptance. Healthcare organizations that build these safeguards into their AI systems from the start will likely see higher adoption rates.
Implications for Healthcare Leaders
Healthcare organizations weighing AI crisis detection tools should address top concerns directly. Approximately 30% of respondents fear false positives leading to unnecessary interventions, and 23% worry about technology replacing human connection. Clear protocols for both will be essential, such as establishing thresholds for AI alerts that require multiple data points before triggering responses, and maintaining guaranteed human review within defined timeframes for all AI-flagged cases.
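To make the two protocols above concrete, here is a minimal, purely illustrative sketch in Python of what a multi-signal alert threshold with a guaranteed human-review window could look like. The signal names, the 0.8 risk threshold, the two-signal minimum, and the 30-minute review deadline are all hypothetical placeholders, not values drawn from the survey or from any specific product.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical parameters -- in practice these would be set clinically, not by engineers.
RISK_THRESHOLD = 0.8                          # per-signal score that counts as "concerning"
MIN_CONCERNING_SIGNALS = 2                    # require multiple independent data points
HUMAN_REVIEW_WINDOW = timedelta(minutes=30)   # guaranteed clinician review deadline


@dataclass
class ReviewTask:
    patient_id: str
    flagged_signals: list[str]
    review_due_by: datetime


def maybe_escalate(patient_id: str, signal_scores: dict[str, float]) -> ReviewTask | None:
    """Escalate to a human reviewer only when several independent signals agree.

    signal_scores maps a signal name (e.g. "speech_pattern", "typing_cadence")
    to a 0-1 risk score produced by upstream models.
    """
    concerning = [name for name, score in signal_scores.items() if score >= RISK_THRESHOLD]
    if len(concerning) < MIN_CONCERNING_SIGNALS:
        return None  # a single-signal spike never triggers an intervention on its own

    # Every AI-flagged case becomes a human review task with a hard deadline;
    # the AI never initiates outreach or treatment decisions itself.
    return ReviewTask(
        patient_id=patient_id,
        flagged_signals=concerning,
        review_due_by=datetime.utcnow() + HUMAN_REVIEW_WINDOW,
    )
```

The design choice the sketch illustrates is the one respondents asked for: the system narrows the false-positive problem by demanding corroborating signals, and it routes every flag to a clinician rather than acting on its own.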
Integration also matters. One-third of consumers are more likely to use AI mental health tools embedded in familiar systems, such as existing Epic MyChart portals or Zoom telehealth sessions, than to download separate crisis monitoring apps.
Messaging strategies must also reflect demographic differences, and organizations should plan phased rollouts that start with younger patients.
Want to learn how your organization can thoughtfully adopt AI to enhance behavioral health while preserving the human connection patients value? Get in touch with us to explore our comprehensive approach to transforming care delivery.
About the Author
As CEO, Andy Flanagan is responsible for Iris Telehealth’s strategic direction, operational excellence, and the cultural success of the company. With significant experience across the U.S. and global healthcare systems, Andy is focused on the success of the patients and clinicians Iris Telehealth serves and on improving people’s lives. He has worked in some of the largest global companies and led multiple high-growth businesses, giving him a unique perspective on the behavioral health challenges in our world. Andy holds a Master of Science in Health Informatics from the Feinberg School of Medicine at Northwestern University and a Bachelor of Science from the University of Nevada, Reno. He is a three-time CEO, including as the founder of a SaaS company, and has held senior-level positions at Siemens Healthcare, SAP, and Xerox.