If you’ve been following AI insurance coverage this year, you’ve probably seen some variation of these headlines: “Carriers Flee AI Risk.” “Your Insurance Won’t Cover AI Anymore.” “The End of AI Coverage.” We’ve also seen the other kind: “AI Exclusions Are Overblown.” “Nothing Has Actually Changed.” “The Market Is Just Adjusting.”

Both versions are wrong. And the difference matters — because what the filings actually say is more interesting, more nuanced, and more actionable than either narrative.

We know this because we’ve read them. Not the press releases. Not the broker summaries. The actual form language that gets attached to policies — the definitions sections, the exclusion clauses, the endorsement schedules filed with state insurance departments across the country. This is primary source material that most commentary never touches. It’s also where the real story lives.

What we found is a market in the middle of something that looks more like repricing than retreat. Carriers are not deciding AI is uninsurable. They’re deciding they don’t have enough information to price it yet — and they’re using exclusion language to buy themselves time while they figure it out. That’s a specific thing, and it has specific consequences, and those consequences look nothing like what either the alarm-raisers or the dismissers describe.

What the Headlines Get Wrong

The first misconception is structural: that carriers are fleeing AI risk the way they fled asbestos or environmental liability. That’s not what the filings show. An exclusion endorsement isn’t a verdict on a risk category — it’s a tool for separating known exposure from unknown exposure. When a carrier files a general liability exclusion for AI-generated content, they’re not saying “we will never touch this.” They’re saying “we won’t price this at standard GL rates until we understand what we’re pricing.” Often the same carrier that files an exclusion endorsement is simultaneously developing a specialty AI coverage product, or piloting sublimit structures for select accounts. The two activities are not contradictory — they’re the same bet expressed at different layers of the market.

The second misconception runs in the opposite direction, and we hear it most often from legal and technology commentators who find the insurance alarmism tedious: “These are just standard exclusions. This happens all the time. Cyber exclusions went the same way fifteen years ago.” There’s a version of that argument that’s true — exclusions do often precede specialty markets, and that pattern probably holds here. But the “nothing has changed” framing ignores something important about the specific language in these filings. These definitions are not typical. They’re unusually broad in ways that have real consequences for companies that haven’t looked closely at what they’re signing.

The gap between “carriers are fleeing” and “nothing is really new” is actually the most informative place to stand. It’s where you can see what’s actually happening: a market trying to write new rules in real time, in the absence of actuarial data, using definition language as a negotiating tool while the claims history catches up.

Reading the Actual Language

The most widely filed AI exclusion in commercial general liability is Verisk’s CG 40 47, and the place to start is the definition of what it actually excludes. Here is the operative language:

ISO Form Definition — Generative Artificial Intelligence

“Generative artificial intelligence” means a machine-based learning system or model that is trained on data with the ability to create content or responses, including but not limited to text, images, audio, video or code.

Notice what this catches. Any “machine-based learning system…with the ability to create content or responses.” That’s not just ChatGPT. That’s your CRM’s AI-powered email drafting feature. That’s the customer service chatbot on your website. That’s the code completion tool your developers use every day. That’s the AI summarization feature embedded in your document management system — the software you bought for document management, not for its AI capabilities, but which happens to include them now because every software vendor has added them.

The phrase “including but not limited to” is doing a lot of work in that definition. It means the list of examples — text, images, audio, video, code — is illustrative, not exhaustive. The definition isn’t limited to those output types. If it’s machine learning and it generates output of any kind, you’re in the territory of the exclusion. Courts will interpret this language. Some will interpret it narrowly, some broadly. But the form as written doesn’t give you much room to argue that your use case is categorically outside the scope.

If you want to go further into how the exclusion operationalizes against specific claim types, we’ve written a detailed technical analysis of the CG 40 47 forms in our earlier piece on Verisk’s AI exclusions. But the definitional breadth is the foundation everything else rests on.

Berkley’s PC 51380 is worth examining alongside CG 40 47 specifically because it takes a different path to similar territory. Berkley didn’t adopt the ISO form — they filed their own language. This is a meaningful distinction that most coverage of these endorsements misses entirely. It means Berkley made an independent underwriting decision about how to frame AI exposure, and that decision reflects their own actuarial assessment rather than the industry consensus position that Verisk’s form represents. Some carriers will follow Verisk’s lead because adopting standard forms is efficient and defensible. Others will write their own because they have a different view of the risk, different portfolio considerations, or different ambitions in the specialty AI market. The fact that Berkley went independent signals the latter — they’re positioning, not just reacting.

The filing status discrepancies compound this. CG 40 47 is approved in some states and still pending in others. PC 51380 follows a similar pattern. The same endorsement can be attached to a policy in one state and not yet available in another. This matters practically because coverage follows policy issuance, not operations. A company operating AI systems in multiple states doesn’t get a single national answer about whether those systems are covered. They get a patchwork of answers that depends on where each policy was filed, which form version was attached, and which state’s approval timeline that carrier was navigating. That’s not a hypothetical problem. That’s the actual situation for any mid-sized company with multi-state operations.

The most interesting filing in the current market, though, is Hamilton’s sublimit approach. Hamilton isn’t filing an exclusion. They’re writing a sublimit specifically for AI-related claims — a defined coverage layer, capped below the policy’s full limit, for a risk that most carriers are actively trying to push off their books. Picture, say, a policy with a $2 million aggregate limit that will pay no more than $250,000 on AI-related claims; the numbers are purely illustrative, but the structure is the point. This is a materially different bet. It says: AI risk is priceable, but only if we can cap the maximum exposure. Coverage exists, but it requires underwriting the AI deployment specifically, which requires information the insured has to provide.

The filings don’t say “AI is uninsurable.” They say “we can’t price what we can’t see.” That distinction is everything.

The Hamilton filing is a signal about where this market is heading. Not blanket exclusion as a permanent condition, but gated coverage that requires documentation to unlock. The sublimit structure is essentially an underwriting hypothesis: give us enough information about what you’re deploying and how you’re managing it, and we can put a number on it. That’s a fundamentally different posture than the exclusion-only approach, and it suggests at least one significant carrier believes the information problem is solvable.

The Pattern Behind the Filings

Pull all three approaches together and a coherent pattern emerges, even though on the surface the filings point in different directions.

The breadth in every filing is intentional. We spent some time wondering whether the expansiveness of definitions like CG 40 47’s “machine-based learning system” was careless drafting — language that swept too wide because no one had thought carefully about edge cases. We no longer believe that. Broad definitions in insurance forms aren’t mistakes. They’re negotiating positions. The carrier writes an exclusion that captures everything, then narrows it through endorsement amendments or discretionary underwriting for accounts that can document their exposure. The breadth isn’t the final answer; it’s the opening bid. For accounts that come in with nothing — no AI inventory, no governance documentation, no usage policies — the broad exclusion is what sticks.

The filing speed varies dramatically by state, and the variance isn’t random. States with active AI regulatory discussions tend to have more deliberate approval timelines. States with lighter regulatory touch tend to approve or accept filings faster. This means the patchwork of coverage isn’t just about carrier decisions — it’s partially a function of how quickly each state’s department of insurance is moving on AI-adjacent policy questions. A company in Colorado has a different regulatory backdrop than a company in Texas, and that backdrop shapes which endorsements are live on their policies right now.

Carriers are also explicitly not making the same bet. Verisk provides the standard form that many carriers adopt for efficiency. Berkley writes independently. Hamilton writes coverage instead of exclusion. Cincinnati Financial, Frederick Mutual, Philadelphia Insurance — each is working from its own actuarial judgment and portfolio strategy. The narrative of a unified market reaction to AI risk is inaccurate. What’s actually happening is closer to experimentation: multiple carriers running parallel hypotheses about how to handle an exposure category that doesn’t yet have much claims history. Some of those hypotheses will be wrong, and the ones that are wrong will produce either adverse selection (disproportionately attracting the risks the carrier has underpriced) or coverage gaps (pushing away accounts that would have been profitable). We’ll know which bets were right in a few years.

The consistent signal across all three approaches is the same one Hamilton’s filing makes explicit: the real problem isn’t that AI risk is unpriceable in principle. It’s that carriers don’t have the information they need to price it in practice. An exclusion is one response to that information problem — exclude first, underwrite later if the account can provide documentation. A sublimit is another response — write coverage, but cap the exposure until you have more claims data. Both approaches are expressions of the same underlying uncertainty. The cure for the uncertainty is information, which is why the market is already differentiating between documented and undocumented AI deployments, and that gap is going to widen.

What This Means If You’re Reading This

A few observations that follow directly from the filings, stated plainly.

If you’re reading headlines about AI insurance, you’re operating with a frame that doesn’t match the actual situation. The alarm-raising version overestimates how settled the exclusions are. The dismissive version underestimates how broad the definitional language already in effect actually is. Neither is useful for making real decisions.

The form language on your own endorsements matters more than anything a carrier’s press release says. If CG 40 47 or a similar endorsement has been attached to your commercial general liability policy, the definition of “generative artificial intelligence” in that form is what governs your coverage — not your broker’s summary, not the carrier’s FAQ, not what the news coverage described. Reading your own endorsements isn’t optional if you want to understand your actual position.

The breadth of the definitions means your exposure is probably wider than you think. The question isn’t just “do we use AI?” The question is “does any system we use qualify as a machine-based learning system that generates content or responses?” For most companies in 2026, the honest answer is yes, probably in multiple systems, some of which aren’t being tracked as AI deployments.

Multi-state operations require policy-by-policy review, not a single assessment. The filing status patchwork means you can’t assume consistent coverage across your portfolio. The endorsement attached to your New York policy may be different from what’s on your Texas policy, and the difference may be material.

The Hamilton filing is worth watching as a directional indicator. The specialty AI coverage market is developing, but it’s developing around the assumption that insureds can document their deployments. The same documentation that makes an exclusion negotiable is the documentation that makes a sublimit application possible. Those two paths converge in the same place.


There’s a reason we read the filings rather than the headlines. The headlines tell you to panic or to dismiss. The filings tell you something more useful: exactly what carriers need to see before they’ll change the conversation. That’s the actual signal. That’s where the work is.

The market is solving an information problem, not making a permanent judgment about AI as a category of risk. The carriers that are filing exclusions are buying time. The ones that are filing sublimits are telling you what comes next. Neither group is saying AI is uninsurable. They’re saying they can’t price uncertainty, and uncertainty is what you have when you can’t see what you’re insuring. That’s a very specific thing to say, and it has a very specific remedy.

Reading the filings carefully doesn’t tell you that everything is fine or that everything is alarming. It tells you something more useful than either: it tells you exactly what the conversation looks like from the other side of the underwriting desk, and what would need to change for that conversation to go differently.