The Agent Imperative

Gartner’s 2026 strategic predictions are direct: 90% of enterprises will have AI agents deployed in production by 2028. The figure may be a forecast, but the trajectory is already in motion.

Salesforce launched Agentforce in late 2024. Microsoft’s Copilot suite now includes autonomous agents that execute multi-step workflows across Teams, Outlook, and SharePoint. ServiceNow, SAP, and Workday have followed, and every major enterprise platform vendor has either shipped or announced agent capabilities. The enterprise software stack is being rebuilt around agentic architecture: software that doesn’t just assist humans but acts on their behalf.

For companies, this creates a strategic imperative that has nothing to do with competitive pressure from AI-native startups. The software your company already pays for is adding agent capabilities by default. Whether or not your company has an “AI agent strategy,” it almost certainly has AI agents.

The insurance implications of that fact have not been worked out.

What Insurance Was Built For vs. What’s Deployed

Commercial insurance — general liability, errors and omissions, cyber — was designed for a world of deterministic software. The system does what it is programmed to do. When it fails, the failure is discrete, traceable, and reproducible. A bug in a billing system produces a billing error. The error is found, corrected, and the damage is quantified.

AI agents are fundamentally different. They are probabilistic. Their behavior varies based on context, input phrasing, and the sequence of prior decisions in the workflow. They are autonomous — they take initiative, call external APIs, and execute actions without waiting for human approval at each step. They are self-directed — given a goal, an agent decides how to achieve it, which tools to use, and in what order.

When a probabilistic, autonomous, self-directed system causes a loss, traditional liability frameworks struggle. Who made the decision? The AI did. Was the decision reasonable? That depends on the context at the time, which may no longer be reproducible. What was the extent of the damage? The agent may have cascaded across dozens of downstream actions before the error surfaced.

Insurance policies are structured around human decisions and deterministic systems. They were not written for agents.

Three Insurance Gaps

1. The definitional gap. Most commercial policies do not define “AI agent.” They may reference “artificial intelligence,” “machine learning,” or, after 2024, “generative artificial intelligence” — but an AI agent is more than a generative model. An agent includes the orchestration layer, the tool-calling capabilities, the memory systems, the workflow logic, and the integration with external systems. None of this is captured in current policy definitions.

This definitional ambiguity cuts both ways. When a loss occurs, the policy language that excludes “generative artificial intelligence” may or may not apply to the orchestration layer that executed the flawed decision. That ambiguity will be resolved in coverage disputes — and courts will be writing the definitions that underwriters didn’t.

2. The coverage gap. Carrier exclusion language, most prominently the ISO CG 40 47 and CG 40 48 forms, targets “generative artificial intelligence” broadly. The definition in those forms — “a machine-based learning system or model that is trained on data with the ability to create content or responses” — is wide enough to capture most agent frameworks. If your agent generates a recommendation, a communication, or a decision as part of its workflow, it likely falls within this definition.

That means the exclusion applies not just to the AI component of your agent, but potentially to the entire workflow the agent executes. A claim arising from an agent-driven process may be denied across your CGL, your E&O, and your cyber policy simultaneously — because all three policies now carry AI exclusion language, often from the same ISO template.

3. The documentation gap. Underwriters cannot price risk they cannot see. When a company applies for renewal, carriers ask about AI: what tools are deployed, what they’re authorized to do, what governance exists. Most companies cannot answer these questions with precision.

The documentation gap is not a technology problem. It is a process problem. Companies deploy AI tools rapidly — often department by department, without central inventory or governance documentation. By renewal time, no one has a complete picture of what’s running. Carriers respond to that opacity the only way they can: broad exclusions.

Emerging Specialty Coverage

The specialty insurance market is beginning to respond. A small set of carriers and managing general agents are developing AI-specific policy structures that go beyond the blanket exclusion approach.

These emerging products take several forms. Some carriers are offering AI sublimits, which provide coverage for AI-related claims up to a defined threshold within an existing cyber or E&O policy while the full policy exclusion otherwise remains. This creates a coverage floor without requiring the carrier to price the full risk.
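The mechanics of a sublimit can be sketched in a few lines. The dollar amounts and the function below are purely illustrative, not drawn from any actual policy; real policies also involve retentions, defense costs, and exclusion carve-outs:

```python
def recovery(claim: int, policy_limit: int, ai_sublimit: int,
             ai_related: bool) -> int:
    """Payout under a policy carrying an AI sublimit (illustrative logic).

    AI-related claims are capped at the sublimit; all other claims
    are capped at the full policy limit.
    """
    cap = ai_sublimit if ai_related else policy_limit
    return min(claim, cap)

# Hypothetical $5M cyber policy with a $1M AI sublimit,
# facing a $3M claim:
print(recovery(3_000_000, 5_000_000, 1_000_000, ai_related=True))   # 1000000
print(recovery(3_000_000, 5_000_000, 1_000_000, ai_related=False))  # 3000000
```

The gap between those two payouts is the exposure the sublimit leaves on the insured's balance sheet, which is what the buyback and standalone products described next aim to close.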

Others are offering endorsement buybacks — allowing insureds to purchase back coverage for specific, documented AI workflows. The buyback is underwritten based on the specific use case, its risk tier, and the governance evidence provided. A company with a documented, governed AI agent for customer service can buy back coverage for that specific use case.

A third approach is standalone AI liability policies, currently available from a limited set of specialty markets. These policies cover AI-specific risks — errors arising from AI decisions, third-party claims from AI outputs, regulatory penalties for AI compliance failures — with underwriting driven by detailed AI inventories and governance assessments.

All of these approaches have one thing in common: they require documentation. Carriers offering specialty AI coverage are not writing it blind. They want to see what’s deployed, how it’s governed, and what controls exist.

What To Do Now

The window between “carriers are developing specialty coverage” and “specialty coverage is standard and priced accordingly” is measured in renewal cycles, not years. Companies that establish their AI documentation now will have leverage at the negotiating table. Companies that wait will find themselves negotiating coverage from a position of opacity.

Inventory your agents. Identify every AI agent running in your environment — sanctioned and unsanctioned, built in-house and embedded in vendor products. You cannot insure what you cannot document, and you cannot document what you haven’t found.
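As a sketch, an inventory entry might capture fields like the following. All field names, agent names, and systems here are invented for illustration; they are not a carrier questionnaire or an industry standard:

```python
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    """One row in a hypothetical AI agent inventory."""
    name: str              # human-readable identifier
    owner: str             # accountable team or person
    source: str            # "in-house" or "vendor-embedded"
    sanctioned: bool       # approved through governance, or shadow deployment
    systems_touched: list = field(default_factory=list)  # systems it can act on
    writes_data: bool = False     # can it change records, or only read?
    human_approval: bool = False  # does a person approve each action?

inventory = [
    AgentRecord("Copilot email triage", "IT Ops", "vendor-embedded",
                sanctioned=True, systems_touched=["Outlook", "Teams"],
                writes_data=True, human_approval=False),
    AgentRecord("Invoice-matching bot", "Finance", "in-house",
                sanctioned=False, systems_touched=["ERP"],
                writes_data=True, human_approval=True),
]

# Renewal questions start with queries like these: which agents are
# unsanctioned, and which act on systems of record with no human review?
unreviewed = [a.name for a in inventory if not a.sanctioned]
autonomous_writers = [a.name for a in inventory
                      if a.writes_data and not a.human_approval]
print(unreviewed)          # ['Invoice-matching bot']
print(autonomous_writers)  # ['Copilot email triage']
```

Even a flat record like this answers the three questions carriers ask at renewal: what is deployed, what it is authorized to do, and whether governance exists.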

Audit your existing policies. Review your CGL, E&O, cyber, and D&O policies for AI exclusion language — specifically CG 40 47, CG 40 48, and any carrier-specific AI endorsements. Understand exactly what’s excluded and what’s not. The AI insurance exclusions hub provides plain-language analysis of the most common forms in circulation.

Get a carrier-ready assessment. A structured AI risk assessment does two things: it creates the documentation carriers need to underwrite your risk, and it identifies the coverage gaps that need to be filled. The output is an inventory with risk tier classifications, a governance summary, and a carrier-facing documentation package. Start with an AI risk assessment →
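A risk tier classification can start as a simple scoring rule over inventory attributes. The inputs, thresholds, and tier labels below are invented for illustration, not an underwriting standard:

```python
def risk_tier(autonomous: bool, writes_data: bool, customer_facing: bool) -> str:
    """Toy tiering rule: the more an agent can do without review,
    the higher its risk tier. Thresholds are illustrative only."""
    score = sum([autonomous, writes_data, customer_facing])
    return {0: "Tier 3 - low", 1: "Tier 2 - moderate"}.get(score, "Tier 1 - high")

# An agent that drafts internal summaries for human review:
print(risk_tier(autonomous=False, writes_data=False, customer_facing=False))
# An agent that autonomously updates customer records:
print(risk_tier(autonomous=True, writes_data=True, customer_facing=True))
```

A real assessment would weigh many more factors (data sensitivity, regulatory scope, blast radius of a bad action), but the output shape is the same: each inventoried agent mapped to a tier a carrier can price.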

Talk to your broker. Brokers who are tracking the specialty AI coverage market can identify which carriers are currently writing AI coverage, what their underwriting requirements look like, and where the buyback opportunities are. Find a broker with AI expertise →


The companies that document their AI agent deployments now will have leverage when specialty AI coverage becomes standard. The ones that don’t will pay the gap tax. Start with an AI risk assessment →