
April 1, 2026 • 10 min read
Closing the AI oversight gap: GRC teams are core to the problem — and the solution

Richard Chambers
Connected risk requires governance, risk, and compliance (GRC) teams to get on the same page — aligned on key risks, sharing information and perspectives, and collaborating to achieve unified priorities. Unfortunately, a new report from Optro, The AI oversight gap: Adoption is scaling. Governance controls aren’t, reveals significant divergence in how GRC teams view the current state of AI risk and governance.
The report underscores the urgent task at hand, recognizing the growing gap between AI deployment speed and effective AI governance as “one of the defining enterprise risk challenges of this decade.” In short, as AI adoption accelerates, AI governance and risk management remain inadequate — and organizations are feeling the consequences.
AI incidents are already systemic. The most common incidents include inaccurate outputs (40% of respondents), policy violations (33%), customer complaints (28%), data breaches (27%), bias or fairness issues (26%), regulatory action (26%), legal claims (19%), and reputational damage (19%). AI use is outpacing the controls designed to govern it, accelerating risk as mitigation lags. While 85% of respondents say AI is core to operations or strategy, only 18% have active mitigation covering most or all identified AI risks.
Human behavior is the most significant and least governed risk surface. Shadow AI — unsanctioned employee use of AI tools without official oversight or approval — is moderate or pervasive in 80% of organizations, yet only 25% have comprehensive visibility into employees' AI usage. What's more, GRC teams report that ownership, accountability, and authority to shut down AI systems are distributed across the organization.
These structural problems prevent AI governance from functioning as an effective system. And if GRC teams don't see and prioritize the same problems, they'll be hard-pressed to help solve them. Addressing potential disconnects around how GRC teams view AI governance responsibility, risk mitigation, policy compliance, regulation, and cross-functional collaboration is a first step toward better alignment.
AI governance responsibility
When asked which function shoulders primary responsibility for AI governance in their organizations, each group thinks it owns the lion's share. They can't all be right.
Current primary responsibility for AI governance

Survey question: Which function has primary responsibility for AI governance today?
Key disconnects:
- Internal audit: 31% say they have responsibility; 24% say cross-functional ownership.
- Risk: 53% say they have responsibility; 13% say cross-functional ownership.
- Security: 56% say IT has responsibility; only 7% say cross-functional ownership.
- Compliance/governance: 22% point to compliance and 23% to cross-functional ownership.
How GRC teams can respond: Work with management and GRC teams to agree upon clear AI governance responsibilities, including decision rights (e.g., kill switch); escalation paths; and intake, approval, and tracking responsibilities and policies.
AI risk mitigation coverage
Internal audit and compliance/governance teams tend to take a dimmer view on AI risk mitigation, while risk and security teams often have a rosier outlook. Unfortunately, disparate views and priorities become a bottleneck to risk awareness and mitigation.
Percentage of identified AI risks with active mitigation in place

Survey question: What percentage of identified AI risks have active mitigation measures in place?
Key disconnects:
- While 25% of security and 23% of risk respondents say “most or all” identified AI risks have active mitigation in place, only 10% of internal audit and 15% of compliance/governance respondents agree.
- One in three internal audit respondents say that “less than half” of AI risks have active mitigation in place, but only 16% of risk respondents say the same.
How GRC teams can respond: Work to establish a unified enterprise risk view that provides cross-functional visibility. An integrated GRC system of action enabled by a connected risk approach is fundamental to understanding and mitigating AI risk.
Confidence in employee AI policy compliance
Views are wide-ranging, but the overall trend is noteworthy: Internal auditors are much less likely to say they're confident about employees' AI usage policy compliance, while security teams are much more likely to say they're confident. It raises the question: What does internal audit know that security doesn't?
Confidence level in employee compliance with AI policies

Survey question: How confident are you that employees comply with AI usage policies in practice?
How GRC teams can respond: As the report asserts, policy and training alone can’t fix the problems caused by siloed tools, fragmented ownership, and distributed authority. Assess your organization’s AI governance maturity level (i.e., lagging, starting, scaling, leading) and align GRC teams on next steps. Consult the full report for actionable recommendations fitting your maturity level.
Impact of current AI regulatory environment
GRC teams report differing views on the complexity and volatility of the current AI regulatory landscape. Are these teams effectively communicating about current and emerging regulatory needs, concerns, and priorities?
Characterization of current regulatory impact on organization

Survey question: How would you characterize the current regulatory environment for AI as it affects your organization?
Key disconnects:
- Most internal auditors (59%) view the current AI regulatory environment as “evolving but manageable,” and only 15% deem it “clear and stable.” However, many risk teams (45%) call it “clear and stable.”
- Internal audit and security respondents are twice as likely as risk respondents to assess the regulatory environment as “highly volatile and disruptive.”
Similarly, internal auditors are less confident when asked about readiness to comply with AI regulations and frameworks:
- State/regional regulations: 16% are “very confident,” compared to 32% of security, 23% of risk, and 20% of compliance/governance respondents saying the same.
- EU AI Act: 18% are “very confident,” compared to 24% of security, 23% of risk, and 21% of compliance/governance respondents saying the same.
How GRC teams can respond: The AI regulatory environment will keep evolving. Make sure your GRC infrastructure can adapt, with tools that enable controls mapping to regulatory standards, systemized evidence collection and reuse, and proactive compliance readiness.
Views on GRC risk collaboration
The survey also asked GRC teams how well they’re collaborating, including (1) how quickly organizations identify risks spanning multiple functions or teams, (2) how integrated they are, and (3) where collaboration breaks down. In the first two instances, functions see things quite differently.
Speed of cross-functional risk identification

Survey question: How quickly can your organization identify risks that span multiple functions or teams (e.g., risks involving audit, risk, compliance, IT, or security)?
Key disconnects:
- As the table shows, internal auditors tend to have significantly less rosy views on cross-functional risk identification. Do risk, security, and compliance/governance teams know something internal audit doesn’t?
- Risk and security teams often regard functions as “highly integrated” (43% and 41% respectively); only 25% of internal audit and 26% of compliance/governance respondents agree.
How GRC teams can respond: The good news is that GRC teams agree on the top two areas where collaboration most often breaks down: competing priorities and technology limitations, which significantly exceeded the other answer options (i.e., data sharing, ownership/accountability, cultural resistance, leadership misalignment, accountability without decision authority). Start there, focusing on aligning priorities and implementing a GRC system of action that addresses those remaining gaps as well.
The AI oversight gap: Adoption is scaling. Governance controls aren't reinforces that the future of AI governance isn't more policy frameworks; it's operationalized, connected control that supports adaptable, resilient GRC infrastructure amid a fast-changing AI landscape. Get GRC teams in lockstep on how to close your organization's AI oversight gap. Download the report for more insights.
About the author

Richard Chambers, CIA, CRMA, CFE, CGAP, is the CEO of Richard F. Chambers & Associates, a global advisory firm for internal audit professionals, and also serves as Senior Advisor, Risk and Audit at Optro. Previously, he served for over a decade as the president and CEO of The Institute of Internal Auditors (IIA). Connect with Richard on LinkedIn.