Your auditor is about to ask about AI agents. 9 things they’ll want to see


Studies show that AI adoption outpaces understanding: 72% of organizations are already using or planning to use agentic AI, while 65% say their use of AI is moving faster than their ability to fully understand it, according to the 2025 Vanta State of Trust report.

Audits are starting to reflect that gap. In 2025, 72% of S&P 500 companies disclosed at least one material AI risk, up from 12% in 2023. Yet only 26% of organizations have comprehensive AI governance policies in place.

That shift is also formalizing. ISO 42001, published in 2023, gives organizations a structured AI Management System (AIMS) that auditors can certify against—and it aligns closely with the EU AI Act, which becomes fully enforceable in August 2026. For companies building or deploying AI, it’s quickly becoming the governance benchmark, Vanta reports.

What auditors actually evaluate in AI systems

Auditors aren’t waiting for AI-specific frameworks to catch up—they’re applying the ones that already exist. Even though SOC 2 and the NIST AI RMF weren’t designed with autonomous agents in mind, auditors map agent behavior directly to those controls. And with ISO 42001—the first certifiable international standard built specifically for AI management systems—auditors now have a dedicated framework to evaluate how organizations govern AI. If an AI agent can access data, trigger workflows, or make decisions, it’s treated like any other system that can introduce risk.

That shift is only speeding up. NIST’s AI Agent Standards Initiative is expected to shape compliance frameworks and vendor assessments as soon as 2027.

They’re looking for control, which usually comes down to answering a few questions:

  • Can you explain what your AI systems do?
  • Can you show how access and decisions are controlled?
  • Can you provide evidence that oversight is consistent?

Underneath all of it is a simple standard: Your AI systems should behave predictably, securely, and in line with defined controls. Here are nine factors your auditor will likely want to see at your organization.

1. A complete inventory of AI agents across your environment

Auditors will expect a clear list of every AI agent in use, so they can understand where automation is happening and what risks it may introduce. That includes agents across departments and functions, such as:

  • A support agent drafting and sending replies in Zendesk
  • A finance agent approving low-risk invoices in NetSuite
  • A sales agent updating Salesforce records
  • A security agent triaging alerts in real time

They’ll also expect context like:

  • Where each agent is deployed
  • What systems it connects to
  • What actions it can take

Most organizations don’t have this fully mapped. That’s where shadow AI starts to creep in.
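As a rough sketch, an inventory like this can live in a simple structured registry that captures owner, deployment, connected systems, and allowed actions in one place. The agent names, fields, and helper function below are illustrative assumptions, not a specific tool:

```python
from dataclasses import dataclass

@dataclass
class AgentRecord:
    """One entry in a centralized AI agent inventory (fields are illustrative)."""
    name: str
    owner: str               # accountable team or individual
    deployed_in: str         # where the agent runs
    connected_systems: list  # systems it can reach
    allowed_actions: list    # actions it is permitted to take

# Hypothetical entries mirroring the examples above
inventory = [
    AgentRecord("support-drafter", "Support Ops", "Zendesk",
                ["Zendesk"], ["draft_reply", "send_reply"]),
    AgentRecord("invoice-approver", "Finance", "NetSuite",
                ["NetSuite"], ["approve_low_risk_invoice"]),
]

def systems_touched(inventory):
    """Aggregate every system any agent can reach, for audit review."""
    return sorted({s for rec in inventory for s in rec.connected_systems})
```

Even a lightweight registry like this gives an auditor a single answer to "what agents exist, and what can they touch," and makes gaps in ownership immediately visible.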

2. Defined ownership for every AI system

To help mitigate that shadow AI risk, every AI system needs a clear owner. That owner should be responsible for:

  • Approving agent use cases
  • Managing changes and updates
  • Monitoring performance and risk

Without ownership, issues tend to stall. A finance agent might be configured by engineering, used by finance, and reviewed by security. When something breaks, no one is fully accountable.

3. Clear boundaries on what agents can and cannot do

Auditors will look closely at how access and permissions are defined and enforced—what each agent is allowed to do, what it’s blocked from doing, and what systems or data it can access. After all, Vanta’s report found that only 48% of organizations have frameworks in place to limit AI autonomy.

Each agent should be treated like its own identity, with scoped permissions that can be audited and reviewed. In practice, this might look like:

  • A support agent that’s allowed to issue refunds under $100, but is prevented from issuing larger refunds without human approval.
  • A procurement agent that can draft purchase orders, but can’t approve or send them without a reviewer.
  • A CRM automation agent that can update customer records, but has no access to financial systems.

These boundaries map directly to access control requirements in SOC 2 and ISO 27001. ISO 42001 goes further—it explicitly requires organizations to define the scope of AI autonomy, document whether they serve as an AI developer, deployer, or user, and conduct AI impact assessments that evaluate downstream risks of agent actions.
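The refund example above can be sketched as a simple policy check that gates the action on a threshold and an explicit human approval flag. The limit, function name, and agent identifier are illustrative assumptions:

```python
# Hypothetical guardrail for the support-agent refund example above.
# The $100 threshold and all names are illustrative, not a real API.
REFUND_LIMIT_USD = 100

def authorize_refund(agent_id: str, amount_usd: float,
                     human_approved: bool = False) -> bool:
    """Allow small refunds autonomously; larger ones require a human approver."""
    if amount_usd <= REFUND_LIMIT_USD:
        return True
    return human_approved
```

The same pattern extends to the procurement and CRM examples: each agent identity gets its own scoped checks, which can then be reviewed and audited like any other access control.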

4. Evidence of human oversight and intervention points

Autonomy needs guardrails. Auditors expect human approval for sensitive actions, clear escalation paths, and the ability to override or stop an agent.

In practice, issues often emerge gradually: An agent starts by recommending refunds, then auto-approves under a threshold, and eventually expands its scope without formal review. Oversight needs to stay consistent as autonomy increases.

5. Logging and traceability of AI decisions

If an AI agent takes action, you need a record of it. Auditors expect logs that capture what happened, when it happened, what inputs were used, and why the decision was made.

For example, if an agent updates 200 CRM records in an hour, you should be able to trace exactly what triggered that behavior.

This visibility supports both auditability and incident response.
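A minimal sketch of such a log record, assuming a homegrown JSON-lines store; the field names are illustrative, but they cover the four things described above (what, when, inputs, why):

```python
import json
from datetime import datetime, timezone

def log_agent_action(agent_id: str, action: str,
                     inputs: dict, rationale: str) -> str:
    """Emit one structured, timestamped record of an agent decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "action": action,
        "inputs": inputs,        # what the agent saw
        "rationale": rationale,  # why it acted
    }
    # Append the returned line to an append-only, tamper-evident store
    return json.dumps(record)
```

Because each record is self-describing, the same log lines serve auditors asking "why did this happen" and responders reconstructing an incident timeline.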

6. Data handling and model input controls

AI systems are only as controlled as the data they use. Auditors want to see clear rules around what data an agent can access, how it’s used, and whether sensitive information is properly protected.

In practice, that means limiting agents to only the data they need, anonymizing or minimizing personal data, and ensuring consent where required. For example, a support agent shouldn’t have access to full customer records if it only needs ticket history to do its job.

Many controls are still uneven. Vanta’s report found that only 35% of organizations rely solely on anonymized data, and just 31% require opt-in for AI data usage, leaving plenty of room for inconsistent handling.

7. Risk assessments specific to AI systems

AI introduces new types of risk, and auditors expect formal assessments that account for things like misuse scenarios, model failures, and downstream impact across systems. ISO 42001 formalizes this through a requirement for AI impact assessments—structured evaluations of how an AI system could affect individuals, groups, and society, including considerations around bias, transparency, and ethical use.

That means adding AI-specific risks to your risk planning, including response plans for scenarios such as an agent approving fraudulent invoices or exposing sensitive data through outputs or logs.

Only 45% of organizations conduct regular AI risk assessments today, according to the Vanta report.

8. Continuous monitoring, not point-in-time reviews

AI systems don’t adhere to audit schedules. Auditors expect ongoing monitoring of behavior and access, alerts for anomalies, and clear visibility into how systems change over time—because models, integrations, and permissions can shift quickly, introducing new risks without obvious signals.

At the same time, Vanta research shows teams already spend an average of 12 weeks per year on compliance work, making manual reviews hard to sustain in dynamic environments. Continuous monitoring is what actually scales.
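One simple form of continuous monitoring is a rolling rate check that flags an agent acting far faster than normal, like the 200-updates-an-hour example mentioned earlier. This is a minimal sketch with illustrative thresholds, not a production monitoring system:

```python
import time
from collections import deque

class RateMonitor:
    """Flag an agent whose action rate exceeds a threshold (illustrative)."""

    def __init__(self, max_actions: int, window_seconds: float):
        self.max_actions = max_actions
        self.window = window_seconds
        self.events = deque()  # timestamps of recent actions

    def record(self, now=None) -> bool:
        """Record one action; return True if the rate limit is breached."""
        now = time.time() if now is None else now
        self.events.append(now)
        # Drop events that have aged out of the rolling window
        while self.events and now - self.events[0] > self.window:
            self.events.popleft()
        return len(self.events) > self.max_actions
```

A breach would feed an alerting pipeline or pause the agent pending human review; the point is that the check runs on every action, not once per audit cycle.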

9. Evidence, not policies

Auditors want proof that controls are working in practice. Sixty-one percent of organizations say they spend more time proving security than improving it, according to Vanta’s report—highlighting how critical automation has become. Evidence should be continuously collected, easy to verify, and directly tied to controls.

This includes process documentation that clearly defines roles and responsibilities, along with systems that automatically collect and map evidence to controls. This is where your ticketing or workflow system comes in.

What to do now before your next audit

You don’t need to solve everything at once. Start with structure. Focus on building a centralized inventory of AI agents, assigning clear ownership, implementing identity-based access controls, monitoring activity continuously, and automating evidence collection and reporting. Keep documented processes accessible and update them whenever changes are made.

These steps align closely with how auditors are already evaluating AI systems.

This story was produced by Vanta and reviewed and distributed by Stacker.
