
AI Adoption Survey Reveals Healthcare's Governance Gap And Drive Toward Agentic Usage


BOSTON, MA – DECEMBER 16, 2025 – Censinet, the leading provider of healthcare risk management solutions, announced today the results of a new College of Healthcare Information Management Executives (CHIME) Foundation survey of 51 healthcare organizations. The findings highlight a critical disparity: while the industry has successfully mobilized high-level governance structures, it faces a significant maturity gap in the operational processes and automated controls required to safely manage the next generation of AI.

The survey indicates that healthcare has moved past the initial phase of AI mobilization. An overwhelming 84% of respondents have established an AI Governance Committee, with strong executive participation: Chief Information Officers (CIOs) serve on 63% of these bodies, and Chief Medical Information Officers (CMIOs) on 45%.

However, the data reveals that these committees often lack the operational tools to govern effectively. Only 59% of organizations have a formal, documented process requiring approval before AI implementation. The survey also exposed a notable blind spot in committee composition: despite the industry's focus on responsible AI, Ethics/Bioethics roles are represented on only 25% of committees, significantly trailing Legal and Risk Management functions.

"The 'Day Zero' work of standing up committees is largely done, but the 'Day One' work of creating responsible and secure AI is slowly beginning," said Ed Gaudet, CEO and Founder of Censinet. “To make matters worse, we are about to see a leap from AI as an advisor to AI as an agent. Managing agents with manual spreadsheets and ad hoc discovery is a recipe for failure. The organizations that will succeed in 2026 are those that operationalize their governance today."

CIOs and IT leaders reported the following significant concerns regarding their ability to detect and monitor AI risks:

  • Inventory Visibility Crisis: Only five organizations (roughly 10%) utilize automated product monitoring to detect AI capabilities. The majority rely on ‘informal ad hoc discovery’ (51%) or vendor release notes (51%), leaving health systems vulnerable to shadow AI.
  • Top Risks: ‘Output quality/hallucinations’ was cited as the primary data risk by 63% of respondents. Operationally, leaders are most worried about ‘automation bias’ (27%) – the risk that clinicians will over-rely on AI outputs without critical thinking.

The urgency to close this operational gap is driven by a massive projected shift toward ‘agentic AI,’ autonomous systems that can execute workflows rather than just provide recommendations.

While the majority of current AI deployments are classified as ‘Level 1’ (recommendation only), 63% of organizations plan to implement agentic AI systems within the next 12 months. This rapid acceleration contrasts sharply with current confidence levels: only 8% of organizations described themselves as “very confident” in their ability to identify emerging AI risks.

Download the survey at https://www.censinet.com/download-the-chime-ai-adoption-survey

About Censinet

Censinet®, based in Boston, MA, takes the risk out of healthcare with Censinet RiskOps™, the industry’s first and only AI risk exchange of healthcare organizations working together to manage and mitigate cyber risk. Purpose-built for healthcare, Censinet RiskOps delivers total automation across all third-party and enterprise risk management workflows and best practices. Censinet transforms cyber risk management by leveraging network scale and efficiencies, providing actionable insight, and improving overall operational effectiveness while eliminating risks to patient safety, data, and care delivery. Censinet is an American Hospital Association (AHA) Preferred Cybersecurity Provider. Learn more at censinet.com.

About the CHIME Foundation

The CHIME Foundation is the affiliate organization of the College of Healthcare Information Management Executives (CHIME) and comprises healthcare IT companies and professional services firms dedicated to collaborating with healthcare CIOs and IT leaders. The Foundation fosters collaboration and innovation to improve healthcare through the effective use of information management.

# # #

Contacts

For Censinet:
Mark Gaudet
markg@censinet.com

Key Points:

What does the AI Adoption Survey reveal about healthcare?

The AI Adoption Survey uncovers significant governance gaps in healthcare organizations, emphasizing the need for robust frameworks to manage AI risks. It also highlights a growing trend toward adopting agentic AI systems to drive innovation and improve patient care.

What is agentic AI usage in healthcare?

Agentic AI refers to autonomous AI systems that can act on their own while adhering to ethical, legal, and governance standards. In healthcare, this means AI tools that can execute workflows and make decisions without constant human intervention, rather than merely offering recommendations, while remaining safe and compliant.

Why is governance important in AI adoption for healthcare?

  • Governance is critical to ensure that AI systems are safe, ethical, and compliant with healthcare regulations.
  • It helps mitigate risks such as data breaches, biased algorithms, and unintended consequences, fostering trust in AI technologies.

What challenges do healthcare organizations face with AI adoption?

Healthcare organizations face several challenges, including:

  • Governance gaps that leave AI systems vulnerable to risks.
  • A lack of expertise and resources to manage AI effectively.
  • Balancing the need for innovation with the imperative to manage risks and ensure compliance.

How are organizations addressing AI governance gaps?

Organizations are taking proactive steps to address governance gaps by:

  • Implementing AI governance frameworks like NIST AI RMF.
  • Investing in workforce training to build AI expertise.
  • Leveraging advanced AI tools to automate risk assessments and ensure compliance.

What is the future of AI in healthcare?

The future of AI in healthcare lies in the integration of agentic AI systems that enhance patient outcomes, streamline operations, and foster innovation. As governance frameworks mature, healthcare organizations will be better equipped to harness AI's full potential while minimizing risks.
