How to Implement Joint Commission AI Guidance
Post Summary
- Purpose: To provide a framework for safe, responsible, and transparent use of AI in healthcare settings.
- Seven pillars: governance, privacy, data security, quality monitoring, safety event reporting, bias assessment, and education.
- Governance: Creates accountability and sets standards for safe AI use across the organization.
- Privacy and data use: Emphasizes transparency, privacy protections, and strict requirements for compliant data use.
- Bias mitigation: Through documented model assessments and continuous monitoring for bias across diverse populations.
- Accreditation outlook: While voluntary today, the guidance is expected to influence future Joint Commission accreditation standards.
As healthcare organizations continue to embrace the transformative potential of artificial intelligence (AI), ensuring responsible and safe deployment becomes paramount. To address this, the Joint Commission, in collaboration with the Coalition for Health AI, has released a comprehensive guidance document aimed at promoting the ethical and effective use of AI in healthcare. This article breaks down the core elements of the new Joint Commission AI guidance, offering actionable insights for healthcare and cybersecurity professionals.
This guidance serves as a roadmap for healthcare delivery organizations (HDOs), emphasizing the importance of minimizing risks while maximizing the utility of AI tools. It builds on key frameworks like the National Institute of Standards and Technology (NIST) AI Risk Management Framework and the National Academy of Medicine’s AI Code of Conduct, positioning itself as essential guidance for the future of AI accreditation in healthcare.
Why the Joint Commission Guidance Matters

The Joint Commission is a leading accrediting body for healthcare organizations, and its guidance often signals forthcoming accreditation requirements. By partnering with the Coalition for Health AI, this guidance consolidates best practices, ethical standards, and technical expertise to create a clear pathway for safe AI utilization in healthcare settings.
The document outlines seven critical elements for responsible AI deployment, offering a detailed framework that healthcare organizations can adopt as they navigate the complexities of AI integration. Below, we’ll explore these seven pillars in depth, providing both context and strategies for implementation.
The Seven Pillars of Responsible AI Use in Healthcare
1. AI Policies and Governance Structures
Establishing robust governance is the cornerstone of responsible AI deployment. The guidance emphasizes the need for clear policies and governance frameworks that define how AI is utilized, monitored, and escalated when issues arise. Effective governance should:
- Define oversight bodies, roles, and accountability for AI tools before and after deployment
- Standardize criteria for evaluating, selecting, and approving AI technologies
- Establish escalation pathways for intervening when an AI tool behaves unexpectedly
- Keep organizational decisions aligned with regulatory and accreditation expectations
By creating these structures, healthcare organizations can ensure AI tools are integrated safely and align with organizational goals.
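To make governance tangible, many organizations start with an inventory of AI tools and their accountable owners. The sketch below models one possible inventory entry; the `AIToolRecord` class and every field name are illustrative assumptions, not terms from the guidance.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class ApprovalStatus(Enum):
    PROPOSED = "proposed"
    APPROVED = "approved"
    SUSPENDED = "suspended"
    RETIRED = "retired"


@dataclass
class AIToolRecord:
    """One entry in a hypothetical AI tool inventory kept by a governance committee."""
    name: str
    vendor: str
    clinical_use_case: str
    business_owner: str           # accountable executive or department
    escalation_contact: str       # who to call when the tool misbehaves
    status: ApprovalStatus = ApprovalStatus.PROPOSED
    next_review: date | None = None
    known_limitations: list[str] = field(default_factory=list)

    def is_due_for_review(self, today: date) -> bool:
        # Periodic re-review is part of pre- and post-deployment oversight.
        return self.next_review is not None and today >= self.next_review


# Example: register a tool and check whether its periodic review is overdue.
scribe = AIToolRecord(
    name="Ambient Scribe",
    vendor="ExampleVendor",
    clinical_use_case="Draft outpatient visit notes",
    business_owner="CMIO office",
    escalation_contact="ai-governance@example.org",
    status=ApprovalStatus.APPROVED,
    next_review=date(2026, 3, 1),
)
print(scribe.is_due_for_review(date.today()))
```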
2. Patient Privacy and Transparency
Trust is foundational in healthcare. For AI tools to succeed, patients and caregivers must trust how their data is handled. The guidance encourages healthcare organizations to:
- Notify patients and clinicians when their data interacts with AI tools
- Address consent requirements in workflows involving audio, transcription, or monitoring technologies
- Limit data exposure by collecting data only for legitimate clinical use
- Maintain compliance with HIPAA, state privacy laws, and internal policies
Transparency doesn’t just build trust - it fosters informed consent, especially in scenarios like using AI scribes in states requiring verbal consent for data recording.
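In those scribe scenarios, one practical building block is a durable, auditable consent record. The sketch below is a minimal illustration, assuming a hypothetical `record_ai_consent` helper; the field names and the print-based audit log are stand-ins for a real consent service integrated with the EHR.

```python
import json
from datetime import datetime, timezone


def record_ai_consent(patient_id: str, tool_name: str, consent_given: bool,
                      method: str, recorded_by: str) -> dict:
    """Append-only consent entry for AI tools that capture patient data."""
    entry = {
        "patient_id": patient_id,   # store a pointer, not raw PHI, where possible
        "tool": tool_name,
        "consent_given": consent_given,
        "method": method,           # e.g., "verbal", required in some states for recording
        "recorded_by": recorded_by,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    print(json.dumps(entry))        # stand-in for a durable, access-controlled audit log
    return entry


record_ai_consent("MRN-0001", "Ambient Scribe", True, "verbal", "dr.smith")
```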
3. Data Security and Data Use Protections
AI’s reliance on large datasets expands organizations’ attack surface. The guidance underscores the importance of safeguarding sensitive data from internal misuse and external threats. Key recommendations include:
- Enforcing strong cybersecurity controls and tightly controlled access to AI datasets
- Storing data in compliant, encrypted environments
- Monitoring for internal misuse as well as external threats
By prioritizing security, organizations can mitigate risks and maintain operational continuity.
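One concrete control is deny-by-default, role-based access to AI datasets. The sketch below illustrates the pattern; the roles and permission names are assumptions for illustration, not a prescribed scheme.

```python
# Minimal sketch of least-privilege access checks for AI training/inference data.
ROLE_PERMISSIONS: dict[str, set[str]] = {
    "data_scientist": {"read_deidentified"},
    "ml_engineer": {"read_deidentified", "write_model_artifacts"},
    "clinician": {"read_identified"},   # identified PHI for direct care only
}


def authorize(role: str, action: str) -> bool:
    """Deny by default; only explicitly granted actions succeed."""
    return action in ROLE_PERMISSIONS.get(role, set())


assert authorize("data_scientist", "read_deidentified")
assert not authorize("data_scientist", "read_identified")  # no identified PHI for model work
```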
4. Ongoing Quality Monitoring
AI tools are not static; they require continuous oversight to ensure optimal performance over time. The guidance recommends:
- Monitoring for performance drift, failures, and unintended workflow impacts
- Validating that model updates or vendor changes do not introduce new risks
- Reassessing performance as clinical processes and data systems evolve
Regular quality checks ensure that AI tools continue to perform as intended, even as data systems and operational environments evolve.
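One lightweight way to operationalize these checks is a rolling performance monitor that alerts when a metric drops below its validation baseline. The sketch below assumes a stream of human-reviewed outcomes; the `DriftMonitor` class, window size, and tolerance are illustrative choices, not parameters from the guidance.

```python
from collections import deque


class DriftMonitor:
    """Track a rolling performance metric and flag degradation."""

    def __init__(self, baseline: float, tolerance: float = 0.05, window: int = 100):
        self.baseline = baseline          # accuracy measured at validation time
        self.tolerance = tolerance        # allowed drop before alerting
        self.scores: deque[float] = deque(maxlen=window)

    def record(self, correct: bool) -> None:
        self.scores.append(1.0 if correct else 0.0)

    def drifted(self) -> bool:
        if len(self.scores) < self.scores.maxlen:
            return False                  # not enough reviewed cases yet
        current = sum(self.scores) / len(self.scores)
        return current < self.baseline - self.tolerance


monitor = DriftMonitor(baseline=0.92)
for outcome in [True] * 80 + [False] * 20:   # simulated human-review results
    monitor.record(outcome)
print("alert governance" if monitor.drifted() else "within tolerance")
```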
5. Voluntary Blinded Reporting of AI Safety-Related Events
To improve safety and minimize risk, the guidance calls for the establishment of non-punitive reporting pathways for AI-related safety events. This includes:
- Voluntary, blinded reporting channels for AI-related incidents and near misses
- Protections that encourage staff to report without fear of punishment
- Aggregating de-identified reports so lessons can be shared more broadly
By creating a culture of transparency and learning, healthcare organizations can refine their AI implementations and prevent future errors.
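To illustrate what "blinded" can mean in practice, the sketch below strips direct identifiers from an event record and replaces the facility name with a salted one-way hash. The schema is an assumption for illustration, not the format of any actual reporting program.

```python
import hashlib
import uuid


def blind_safety_event(raw_event: dict, salt: str) -> dict:
    """Strip direct identifiers from an AI safety event before shared reporting."""
    return {
        "report_id": str(uuid.uuid4()),   # unlinkable report identifier
        # A one-way hash lets the submitting org recognize its own reports
        # without revealing the facility to the aggregator.
        "org_token": hashlib.sha256((salt + raw_event["facility"]).encode()).hexdigest()[:12],
        "tool_category": raw_event["tool_category"],
        "event_type": raw_event["event_type"],
        "harm_level": raw_event["harm_level"],
        # Deliberately omitted: patient identifiers, clinician names, exact dates.
    }


event = {
    "facility": "Example Hospital",
    "tool_category": "ambient scribe",
    "event_type": "incorrect medication in draft note",
    "harm_level": "near miss",
}
print(blind_safety_event(event, salt="local-secret"))
```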
6. Risk and Bias Assessment
AI is not immune to bias, and mitigating this risk starts at the development stage. The guidance recommends using tools like the Coalition for Health AI’s Applied Model Card to document:
- Data sources, known risks, and model limitations
- Performance across diverse patient populations and demographics
- The intended use cases and deployment context of each model
Bias is not limited to data - it can also emerge in how AI tools are designed, deployed, and used. Regular assessments should extend beyond the technical model to include the broader "action space", ensuring that tools do not inadvertently perpetuate inequities in care.
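A stratified audit is a common first step for the technical side of this work. The sketch below computes a simple accuracy metric per demographic group from a labeled audit sample; the record keys and the example disparity are illustrative.

```python
from collections import defaultdict


def accuracy_by_group(records: list[dict]) -> dict[str, float]:
    """Stratify a simple accuracy metric by demographic group.

    Each record is assumed to hold illustrative keys:
    "group", "label", and "prediction".
    """
    totals: dict[str, int] = defaultdict(int)
    correct: dict[str, int] = defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        correct[r["group"]] += int(r["label"] == r["prediction"])
    return {g: correct[g] / totals[g] for g in totals}


# Simulated audit sample: a gap like this should trigger deeper review.
sample = (
    [{"group": "A", "label": 1, "prediction": 1}] * 90
    + [{"group": "A", "label": 1, "prediction": 0}] * 10
    + [{"group": "B", "label": 1, "prediction": 1}] * 70
    + [{"group": "B", "label": 1, "prediction": 0}] * 30
)
print(accuracy_by_group(sample))  # {'A': 0.9, 'B': 0.7} — flag the disparity for the model card
```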
7. Education and Training
The final pillar emphasizes the importance of equipping healthcare professionals with the knowledge to effectively use AI tools. Training should:
- Be tailored to each role’s interaction with AI tools
- Cover each tool’s capabilities, limitations, and appropriate escalation steps
- Build clinician confidence and support safe adoption
Education is not just about compliance - it’s a critical component of change management. Engaging staff as partners in the AI journey fosters trust, innovation, and better outcomes.
Actionable Next Steps for Healthcare Organizations
While the Joint Commission’s guidance is voluntary today, it is likely to inform future accreditation standards. To prepare, healthcare organizations can take proactive steps: stand up an AI governance committee, inventory the AI tools already in use, align privacy and security policies with the seven pillars above, and begin role-specific training programs.
Conclusion
The Joint Commission’s AI guidance provides a robust framework for the responsible deployment of artificial intelligence in healthcare. By addressing governance, transparency, security, quality monitoring, safety reporting, risk assessment, and education, the document lays the foundation for safe and effective AI integration. As AI becomes more entrenched in healthcare, organizations that adopt and implement these best practices will not only align with emerging standards but also enhance patient safety, operational efficiency, and trust.
Healthcare and cybersecurity professionals must work collaboratively to operationalize these recommendations, ensuring that AI fulfills its promise of transforming care delivery while safeguarding the people it serves. This guidance is not just a document - it’s a call to action for the healthcare industry to lead with responsibility, foresight, and innovation.
Source: "What's The Joint Commission Saying About Healthcare AI These Days?" - Health Data Ethics Podcast, YouTube, Sep 30, 2025 - https://www.youtube.com/watch?v=mYQUX3IGydo
Key Points:
What is the Joint Commission AI guidance, and why was it developed?
- Collaborative development: Created by the Joint Commission and the Coalition for Health AI to guide responsible healthcare AI adoption.
- Ethical foundation: Consolidates ethical, clinical, and technical best practices for safe and transparent AI use.
- Operational clarity: Helps healthcare organizations standardize governance, risk management, and oversight.
- Future relevance: Expected to influence future accreditation expectations.
What are the seven elements of responsible AI use?
- AI Governance: Establish defined oversight structures, roles, and escalation pathways.
- Privacy & Transparency: Communicate clearly about how AI systems use and capture patient data.
- Data Security: Protect data with strong cybersecurity, controlled access, and compliant storage.
- Quality Monitoring: Continuously evaluate AI performance for drift, failures, and workflow impacts.
- Safety Reporting: Create voluntary, blinded, non‑punitive reporting channels for AI‑related incidents.
- Risk & Bias Assessment: Document risks, model limitations, and performance across diverse populations.
- Education & Training: Provide role‑specific training to support safe adoption and clinician confidence.
How does governance improve AI safety?
- Defines accountability: Ensures oversight bodies review AI tools before and after deployment.
- Standardizes evaluation: Establishes consistent criteria for selecting and approving AI technologies.
- Supports escalation: Outlines when and how to intervene if an AI tool behaves unexpectedly.
- Aligns with compliance: Ensures organizational decisions mirror regulatory and accreditation expectations.
How should organizations protect patient privacy in AI systems?
- Ensure transparency: Notify patients and clinicians when their data interacts with AI tools.
- Address consent needs: Communicate clearly in workflows involving audio, transcription, or monitoring technologies.
- Limit data exposure: Restrict access and ensure data is collected only for legitimate clinical use.
- Meet regulatory requirements: Maintain compliance with HIPAA, state privacy laws, and internal policies.
Why is continuous quality monitoring necessary?
- Prevents performance drift: Detects when models degrade or produce inaccurate outputs.
- Adapts to workflow changes: Ensures AI remains reliable as clinical processes evolve.
- Validates updates: Confirms model updates or vendor changes don’t introduce new risks.
- Improves reliability: Builds trust among clinicians who rely on the tool’s stability.
How can organizations reduce AI bias and risk?
- Evaluate across demographics: Test model performance on diverse populations to avoid inequitable outcomes.
- Document model specifics: Use structured tools (e.g., model cards) to record data sources, risks, and limitations.
- Assess deployment context: Evaluate the full “action space” - how the model is used, not just how it is built.
- Monitor continuously: Review outputs regularly to catch emerging patterns of bias.
