There’s a conversation happening at every commercial insurance renewal right now that didn’t exist eighteen months ago. It’s not the conversation most coverage professionals expected — not a debate about whether AI is dangerous, or a philosophical dispute about liability attribution. It’s more mundane than that, and more consequential. An underwriter asks: “Can you tell me what AI systems your company is running?” And the answer to that question is now determining whether a company walks out with coverage or an exclusion.

We’ve spent the last year reading carrier filings, sitting with clients at renewal, and watching the market evolve in real time. The pattern that’s emerged is one that most commentary on AI exclusions misses entirely. The narrative you hear — that carriers are “fleeing AI” or reacting with panic to an unquantifiable risk — isn’t wrong exactly, but it misses the deeper structure of what’s happening. The market isn’t retrenching across the board. It’s bifurcating. And the variable driving the split isn’t company size, industry, or even AI maturity. It’s whether you can show your work.

The companies that can answer that underwriter’s question — coherently, completely, in writing — are having an entirely different renewal experience than those that can’t. They’re negotiating terms. They’re getting sublimit buybacks. Some are holding coverage that their direct competitors are losing. Meanwhile, the companies that respond with “we use some AI tools, you know, like everyone does” are walking out with blanket exclusions and, often, no clear path back. This isn’t a temporary market disruption that will even out. It’s a structural split, and it’s hardening with every renewal cycle that goes by.

Two Markets

The clearest way to describe what we’re seeing is to draw out the two tracks directly.

Track A companies have done a specific thing: they’ve built an AI inventory — a complete accounting of every AI tool, agent, workflow, and integration running in their environment, including the shadow AI that their official IT policies don’t acknowledge but their employees use daily. They’ve classified those systems by risk tier — not in a vague way, but with a working definition of what makes a system high-risk versus low-risk, what autonomy thresholds matter, what downstream effects each system can produce. They have a governance framework with named owners, documented policies, and an escalation procedure. And critically, they can produce carrier-facing artifacts that translate all of this into underwriter language. When renewal comes, they walk in with a package.

Track B companies — and this is the vast majority of companies we encounter — say they use AI. They do use AI. But they couldn’t tell you exactly what’s running, who approved it, where customer data touches it, or what controls exist around the models producing consequential outputs. Their AI usage is real but undocumented. Their governance exists in someone’s head, or in a Confluence page no one maintains, or in a policy that was written before generative AI existed and hasn’t been updated. When the underwriter asks the question, they improvise.

The point that gets missed in most coverage of this topic: Track B companies aren’t necessarily reckless or naive. Many of them have sophisticated technology teams and genuine risk awareness. They just haven’t had to translate their internal practices into carrier-facing artifacts before, because no one asked them to. That’s changing, fast.

Carriers don’t exclude AI because it’s uninsurable. They exclude it because they can’t price what they can’t see.

The split isn’t between AI-native companies and traditional ones. It isn’t between large companies and small ones. It’s between documented and undocumented deployments — and that distinction cuts across every sector, every size, every sophistication level.

Why Carriers Are Writing It This Way

The standard read on carrier AI exclusions is that the industry is scared. That insurers looked at generative AI and decided the liability exposure was too unpredictable to touch. That reading is understandable — exclusion language does read as defensive — but it fundamentally misunderstands the actuarial logic underneath.

Carriers are not saying AI is uninsurable. Several are actively trying to write it. What they’re saying is: we can’t include an unknown exposure in a standard rate that was built on known loss history. Generative AI is producing liability patterns that haven’t worked their way through the claims system yet. The actuarial models to price them don’t exist. The exclusion isn’t a permanent rejection — it’s a mechanism for keeping an unpriced exposure out of a policy while the industry builds the tools to price it correctly.

Look at how Verisk’s CG 40 47 language is drafted. The definition of a covered “artificial intelligence system” is deliberately broad — it reaches any machine-based learning system with the ability to create content or responses. That breadth isn’t an accident or an overreach. It’s a clean perimeter that lets carriers apply a consistent exclusion while they develop actuarial models on the other side. The scope of what’s excluded mirrors the scope of what’s not yet quantified. Once a carrier can model a specific risk profile — say, a company running generative AI in a documented, controlled environment with defined use cases and audit trails — that specific profile becomes negotiable. The exclusion functions as a default, not a ceiling.

Berkley’s PC 51380 takes a different structural approach, separating AI-generated content risks from broader automation liability, but the underlying logic is the same: the carrier is trying to isolate exposures that it can evaluate from those it can’t. The form gives underwriters a framework to work with, not a blanket no. Hamilton has gone further, offering a sublimit specifically for AI coverage — which is a carrier actively signaling that it wants to write this risk, under conditions it can see clearly. What all three approaches share is that they reward the insured who can produce information. The carrier needs inputs to produce a price. Documentation is the input.

What we’ve found in client engagements is that underwriters will frequently negotiate on terms when they’re given the raw material to do so. They want to write premium. An underwriter who receives a well-structured AI inventory, a risk tier classification, and evidence of governance controls can do something with that. They can build a case internally. They can argue for a narrower exclusion, a sublimit, a buyback. An underwriter who receives a vague assurance that “we take AI seriously” has nothing to work with, and the path of least resistance is a blanket exclusion.

What Documentation Actually Means

The word “documentation” is doing a lot of work in this conversation, and it’s worth being precise about what it means here, because “documentation” sounds like a compliance exercise, and this isn’t one.

What carriers are looking for is not a policy document. It’s not a checkbox or a certification. What they need is the ability to see the actual shape of a company’s AI deployment — its scope, its autonomy levels, its failure modes, its controls. The artifacts that make that possible are specific.

An AI inventory that is genuinely complete — not just the enterprise tools IT approved, but the Claude plugins, the Copilot integrations, the third-party SaaS tools with embedded AI features that procurement didn’t flag as AI, the automations that employees built and that are now running on customer data without anyone tracking them. We’ve consistently found that when clients do a real inventory for the first time, the number of AI touchpoints in their environment is three to four times what they estimated going in. The shadow AI problem is severe, and carriers know it, which is why “we have an AI policy” is not the same as “here is our complete AI inventory.”
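To make the idea concrete, here is a minimal sketch of what one line item in such an inventory might capture. The field names, tiers, and example values are illustrative assumptions, not a carrier-mandated or industry-standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One line item in an AI inventory. Fields are illustrative, not a standard schema."""
    name: str                  # e.g. "invoice-triage bot"
    vendor: str                # who supplies the model or tool
    owner: str                 # a named internal owner, not a department
    approved: bool             # did it go through procurement/IT review?
    data_touched: list = field(default_factory=list)   # e.g. ["customer PII"]
    autonomy: str = "recommendation"   # "recommendation" | "decision" | "autonomous action"
    discovered_via: str = "official"   # "official" | "shadow" (found in audit, not IT records)

# A hypothetical shadow-AI entry: an employee-built automation running on customer data
record = AISystemRecord(
    name="invoice-triage bot",
    vendor="OpenAI API",
    owner="A. Rivera (Finance Ops)",
    approved=False,
    data_touched=["customer PII", "payment amounts"],
    autonomy="decision",
    discovered_via="shadow",
)
print(record.name, record.discovered_via)
```

The point of a record like this isn’t the format. It’s that every system, including the shadow ones, ends up with a named owner, an approval status, and an explicit account of what data it touches.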

A risk tier framework that maps each system to meaningful categories. The lines that matter aren’t technical — they’re about consequences. Does this system make decisions or recommendations? Can a human override it easily? What does it touch? Customer data? Financial outputs? Hiring decisions? A marketing copy generator and a contract review tool both involve AI, but they sit in entirely different risk tiers, and a governance framework that treats them identically isn’t actually doing governance.
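The consequence-based lines described above can be reduced to a small set of explicit rules. The following toy tiering function is a sketch under assumed thresholds, not an industry standard; the tier names and the set of sensitive categories are our own illustrations:

```python
# A toy risk-tier rule reflecting consequence-based lines: does the system decide
# or merely recommend, can a human override it, and what does it touch?
# Tier names and thresholds are assumptions for illustration.

SENSITIVE = {"customer PII", "financial outputs", "hiring decisions"}

def risk_tier(makes_decisions: bool, human_override: bool, data_touched: set) -> str:
    touches_sensitive = bool(data_touched & SENSITIVE)
    if makes_decisions and (not human_override or touches_sensitive):
        return "high"
    if makes_decisions or touches_sensitive:
        return "medium"
    return "low"

# A marketing copy generator and a contract review tool both involve AI,
# but they land in different tiers.
print(risk_tier(False, True, {"marketing copy"}))        # low
print(risk_tier(True, True, {"financial outputs"}))      # high
```

Whatever the actual rules are, the value is that they’re written down. A framework that can’t articulate why two systems sit in different tiers isn’t doing governance.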

Evidence of monitoring. Not a commitment to monitor — actual evidence that someone reviews outputs, tracks model drift, audits for bias or hallucination, and has a procedure when something goes wrong. This is where most internal AI governance programs fall down. The policy exists; the active monitoring doesn’t.
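What “evidence” means in practice is a timestamped record, not a policy statement. A minimal sketch, with field names and entries that are purely hypothetical:

```python
# A sketch of monitoring evidence: a review log an underwriter could actually read.
# All field names, dates, and findings here are illustrative assumptions.
from datetime import date

review_log = [
    {"date": date(2025, 1, 6),  "system": "contract-review tool",
     "check": "hallucination spot-audit", "sampled_outputs": 50,
     "findings": 2, "action": "escalated to legal owner"},
    {"date": date(2025, 1, 13), "system": "contract-review tool",
     "check": "model drift vs. baseline", "sampled_outputs": 200,
     "findings": 0, "action": "none"},
]

def reviews_with_findings(log):
    """Summarize the log the way a carrier might: how often did review catch something?"""
    return sum(1 for entry in log if entry["findings"] > 0)

print(reviews_with_findings(review_log))  # 1
```

A log like this does two things at once: it proves the monitoring happened, and it shows the escalation procedure being exercised rather than merely described.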

And finally, a regulatory posture. Increasingly, this means showing awareness of and, where applicable, compliance with the state AI transparency and impact assessment laws that are proliferating across the country. Colorado, California, New York, Connecticut — each has its own requirements, and the intersection of AI deployment with employment, financial services, or healthcare triggers additional obligations. Carriers writing larger risks are starting to ask about this directly.

None of this is exotic. It’s operational discipline applied to a new class of technology. But it requires intention. It doesn’t happen automatically, and it doesn’t emerge from a general commitment to responsible AI. It has to be built deliberately, with carrier readiness as an explicit design goal.

What This Sets in Motion

A few implications follow from the split we’re describing, and they compound in ways that aren’t immediately obvious.

The gap is self-reinforcing. The companies that built documentation practices early — whether out of foresight or because a broker pushed them — are now iterating. Their second renewal is easier than their first. Their AI inventory exists and just needs to be updated. Their governance framework has been tested and refined. They’re building institutional muscle. The companies that haven’t started are falling further behind at each cycle, not just in coverage terms but in their ability to catch up. By the time an exclusion forces the conversation, they’re often looking at two or three renewal cycles before they’ve built something a carrier can actually evaluate.

Brokers are becoming the forcing function. The brokers who understand this dynamic are already having the documentation conversation with clients six months before renewal. They know that walking into an underwriter meeting without an AI inventory is leaving premium savings on the table, and they’re not willing to do that to good clients. This is raising the floor across the market — but unevenly. Brokers who haven’t absorbed the new dynamic are still treating AI as a questionnaire item rather than a coverage-shaping variable. The delta in outcomes for clients between these two kinds of broker relationships is significant and growing.

Specialty AI coverage products are emerging, but they require the same inputs. The carriers building standalone AI coverage products — and several are now doing this seriously — are not building products designed for undocumented deployments. They’re building products for companies that can show them an AI portfolio the way a professional liability insurer wants to see a claims history. Documentation isn’t a prerequisite that goes away once specialty products exist. It becomes the ticket to entry for the good products.

The value extends beyond insurance. Companies that build genuine AI governance capability — not as a compliance exercise but as operational infrastructure — find that it travels. Acquirers in due diligence want to understand AI deployment. Regulated clients in financial services and healthcare increasingly require vendor AI transparency before signing contracts. State regulators are starting to audit. The companies that built governance frameworks for carrier readiness are in a much better position in all of these conversations than they would have been otherwise. Insurance readiness, in this sense, is a proxy for something larger: the kind of operational clarity that becomes increasingly valuable as AI regulation matures and as downstream stakeholders raise their expectations.

The question we’re left with at the end of every client conversation on this topic isn’t whether AI is insurable. It clearly is — for companies that can show their work. The question is how quickly the broader market catches up to the carriers’ new information requirements. Based on what we’re watching at renewal tables right now, the answer is: more slowly than the window allows. The carriers that are building AI pricing capacity are moving fast. The exclusion defaults that exist today will not exist in the same form in two years — but the companies that haven’t built their documentation infrastructure by then will have spent those two years accumulating adverse renewal history that compounds their problem going forward.

The split is real, it’s widening, and the time to be on the right side of it is before the next renewal, not after.


For more on how carriers are structuring AI exclusions, see our analysis of how AI risk is being classified in current carrier filings.