Why Agent Exposure Differs From Traditional AI Risk
There is a useful shorthand for distinguishing AI tools from AI agents: tools assist, agents execute.
A traditional AI tool — a copilot, a recommendation engine, a classification model — produces output that a human then acts on. A human reads the summary. A human approves the flagged invoice. A human decides whether to send the email the AI drafted. The human is in the loop. The liability profile is familiar.
An AI agent eliminates that loop. It receives a goal and executes autonomously: calling APIs, reading and writing data, sending communications, initiating transactions, sequencing multi-step decisions — without waiting for human review at each step. The company’s exposure is not the AI’s output. It is the AI’s action.
That distinction changes the liability profile fundamentally. When an AI agent makes a decision that causes a loss, the question is not whether a human reviewed the output before acting. The question is whether the company had sufficient governance and oversight of an autonomous system that was acting on its behalf. That is a different standard, and it is one that most current policy language was not written to adjudicate.
Brokers who treat AI agent exposure the same as general “AI/tech risk” will miss the actual exposure. The pre-renewal process needs to account for this.
Pre-Renewal Questionnaire
The first task is understanding what the client has deployed. This cannot be delegated to the client's IT team alone — agents are proliferating at the department level, embedded in vendor platforms, and operating under shadow-IT conditions the IT team may not even be aware of.
AI inventory questions:
- What AI tools and agents has the company officially deployed in the past 12 months?
- Which of those agents take autonomous actions (send communications, execute transactions, modify data) without human approval at each step?
- Which vendor platforms in use have AI or agent features enabled — even if not explicitly adopted as “AI”?
- What is the total number of AI-assisted decisions or actions per month, by tool?
Shadow AI questions:
- What is the company’s policy on employee use of AI tools not approved by IT?
- Has the company audited non-sanctioned AI tool use in the last 12 months? What did it find?
- Which departments are most likely to have adopted AI tools independently (marketing, sales, finance, legal)?
- Are employees using personal accounts on AI platforms (ChatGPT, Claude, Gemini) to process company data?
Capability questions:
- Can any deployed agents initiate financial transactions without human approval?
- Can any agents communicate directly with customers, vendors, or regulators on behalf of the company?
- Can any agents modify production systems, databases, or code without human review?
- What is the highest-stakes action any agent in the environment is authorized to take?
Governance questions:
- Who owns AI governance in the organization? Is there a written policy?
- How are AI tools approved before deployment? Is there a formal review process?
- How are AI agents monitored in production? What triggers a human review?
- What is the incident response process when an AI agent makes a material error?
The answers to these questions define the actual exposure. They are also the foundation of any carrier conversation about coverage terms.
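One way to make the answers carrier-ready is to capture each tool as a structured record rather than free-form notes. The sketch below is illustrative only — the field names, the four-tier scheme, and the `AIToolRecord` type are hypothetical, not a standard format — but it shows how questionnaire answers map onto an inventory that an underwriter can actually price against.

```python
from dataclasses import dataclass
from enum import IntEnum

class RiskTier(IntEnum):
    # Hypothetical four-tier scheme, mirroring the tiers referenced later:
    # 1 = internal, human-reviewed ... 4 = autonomous business operations.
    INTERNAL_REVIEWED = 1
    EXTERNAL_REVIEWED = 2
    AUTONOMOUS_TRANSACTIONS = 3
    AUTONOMOUS_OPERATIONS = 4

@dataclass
class AIToolRecord:
    name: str
    vendor: str
    autonomous_actions: list[str]   # e.g. ["sends customer emails", "pays invoices"]
    human_in_loop: bool             # is each action reviewed before execution?
    actions_per_month: int
    governance_owner: str           # a named role, not just "IT"
    tier: RiskTier

def highest_stakes(inventory: list[AIToolRecord]) -> AIToolRecord:
    # The tool underwriters will ask about first: highest tier, then volume.
    return max(inventory, key=lambda t: (t.tier, t.actions_per_month))
```

A record like this answers the capability and governance questions in one place, and `highest_stakes` surfaces the answer to "what is the highest-stakes action any agent is authorized to take?" directly from the data.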
Policy Audit Checklist
Once you understand what the client has deployed, audit the existing policy stack against that exposure.
Look for AI exclusion endorsements. The ISO CG 40 47 and CG 40 48 forms are the most widely adopted, but carrier-specific endorsements exist and vary significantly. Pull every endorsement on the CGL, E&O, cyber, and D&O policies and identify AI-specific language.
Review the form definitions. The ISO definition of “generative artificial intelligence” is the most consequential language in play:
“Generative artificial intelligence” means a machine-based learning system or model that is trained on data with the ability to create content or responses, including but not limited to text, images, audio, video or code.
This definition is broad. It does not require the system to be a large language model. It does not require the output to be creative content. Any AI system trained on data that produces outputs — including decisions, classifications, recommendations, and agent-generated communications — is likely captured.
Check for sub-limits and carve-outs. Some carriers have introduced sub-limits for AI-related claims rather than blanket exclusions. A policy with a $50,000 AI sub-limit on a $2M E&O policy is materially different from a full exclusion — but the difference matters only if the broker knows it exists and can advise accordingly.
Review AI-specific requirements. Some carriers now include affirmative obligations as a condition of coverage — maintaining an AI inventory, annual governance reviews, incident reporting within defined timeframes. If the client is not meeting those requirements, the coverage may be voidable on a claim.
Map the exclusion to the exposure. For each AI tool or agent the client has deployed, map it against the exclusion language and assess whether a claim arising from that tool would be covered, excluded, or ambiguous. The ambiguous cases are where coverage disputes will occur.
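The mapping exercise can be sketched as a first-pass triage. This is an illustration of the sorting logic only — it is deliberately simplistic, the categories are hypothetical, and an actual determination depends on the specific endorsement wording and coverage counsel:

```python
def triage(creates_content: bool, human_reviews_output: bool,
           exclusion_names_genai: bool) -> str:
    """Illustrative first-pass sorting of a tool against a broad
    generative-AI exclusion. Not a coverage determination."""
    if not exclusion_names_genai:
        return "likely covered"   # no AI-specific exclusion on the form
    if creates_content and not human_reviews_output:
        return "excluded"         # autonomous output squarely within the definition
    if creates_content:
        return "ambiguous"        # human review may or may not matter to the carrier
    return "likely covered"       # arguably outside "creates content or responses"
```

Running every inventoried tool through a triage like this produces the list of ambiguous cases — which, as noted above, is where the coverage disputes will occur.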
Negotiation Levers
Coverage terms for AI exposure are still being written. Carriers are not locked in — and brokers who come to renewal with organized client information have genuine room to negotiate.
Documented inventory. A complete AI inventory with risk tier classifications, tool descriptions, and governance ownership is the single most effective lever. It signals a sophisticated client and gives underwriters something concrete to price against, rather than an undifferentiated “they use AI” exposure.
Governance framework. Evidence of a formal governance process — written policy, approval workflows, monitoring procedures, escalation protocols — directly addresses the carrier’s concern about uncontrolled AI exposure. Frame it as a risk management story, not a compliance document.
Risk tier classification. Carriers respond better to nuanced risk descriptions than to binary “uses AI / doesn’t use AI.” If the client can demonstrate that their Tier 3 and 4 workflows (autonomous execution) are a small percentage of their total AI footprint, with the majority in Tier 1 (internal, human-reviewed), that is meaningful for pricing.
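The footprint argument is simple arithmetic over the inventory. A minimal sketch, assuming each inventoried tool carries a tier (1–4) and a monthly action count as in the hypothetical record format above:

```python
from collections import Counter

def tier_footprint(inventory: list[dict]) -> dict[int, float]:
    """Share of total monthly AI actions in each risk tier, as a percentage.
    Each entry is a dict with 'tier' (1-4) and 'actions_per_month'."""
    totals: Counter = Counter()
    for tool in inventory:
        totals[tool["tier"]] += tool["actions_per_month"]
    grand = sum(totals.values())
    return {tier: round(100 * n / grand, 1) for tier, n in totals.items()}
```

A client whose footprint comes back 90% Tier 1 and 10% Tier 3 has exactly the nuanced risk description that moves pricing — versus an undifferentiated "uses AI."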
Monitoring evidence. Logs, review records, and incident response history demonstrate active oversight. A client who has caught and corrected AI errors through their monitoring process is demonstrating a functioning risk management system. That is more valuable to an underwriter than a clean history that may simply reflect undiscovered problems.
Exclusion buybacks. For specific, well-documented AI workflows, explore whether the carrier will offer a buyback of the AI exclusion for that use case. This requires a detailed use case description and risk tier classification, but is increasingly available from carriers who want to retain the account.
When to Refer for Assessment
Some clients will not have the documentation needed for a productive renewal conversation. The signals are clear.
If the client cannot identify which AI tools are running in their environment — including vendor-embedded AI — they need an inventory process before renewal, not after.
If the client cannot describe the governance framework for their highest-stakes AI workflows, they do not have one. Governance documentation written in the week before renewal will not be credible to underwriters.
If the client has Tier 3 or Tier 4 AI workflows (autonomous transaction execution, autonomous business operations) without documented oversight controls, the coverage gap is material and the renewal conversation will be adversarial.
In these cases, a formal AI risk assessment creates the documentation foundation before the carrier conversation begins. The output — a complete inventory, risk tier classifications, governance summary, and carrier-facing documentation package — is what underwriters are looking for, structured in the format they need.
If your client can’t answer basic questions about what AI is running in their environment, they need an AI risk assessment before renewal. Learn about our broker partnership →