Why Shadow AI Matters

Enterprise shadow IT research consistently finds more than 1,200 unsanctioned applications operating in the average large organization at any given time. AI tools now represent the fastest-growing segment of that shadow ecosystem — not because employees are circumventing security policy deliberately, but because the friction between “I need this tool to do my job” and “IT approved this tool” has become unworkable in a market where new AI products ship weekly.

The problem is not the tools themselves. The problem is the combination of broad insurance exclusion language and an undocumented deployment. The ISO CG 40 47 definition of “generative artificial intelligence” is deliberately inclusive — any system trained on data that produces outputs is potentially captured. That definition doesn’t care whether the AI tool was sanctioned by IT or adopted by a marketing manager last Tuesday. If an employee uses an unsanctioned AI tool to draft a contract that turns out to be wrong, or to process customer data that ends up exposed, the company bears the liability. And because the tool was never inventoried, there is no coverage documentation to stand on.

Every unsanctioned AI tool is a potential uninsured liability. Discovery is not optional.

Department-by-Department Discovery Guide

Shadow AI clusters by department. Each function has a distinct set of tools employees are likely to have adopted for their day-to-day work. Working through each department systematically is more reliable than a general IT scan alone.

Marketing

Look for: content generation tools (Jasper, Copy.ai, Writer, ChatGPT for marketing copy), image and design AI (Midjourney, DALL-E, Adobe Firefly, Canva AI), ad optimization and bidding AI (built into Google Ads, Meta Ads, and third-party platforms), social media scheduling tools with AI content suggestions, SEO content tools (Surfer SEO, MarketMuse), video and audio generation tools.

High-risk scenario: an AI-generated campaign asset that violates a third-party copyright, or AI-drafted copy that makes a claim the company cannot support.

Sales

Look for: email drafting assistants (built into HubSpot, Salesforce, Outreach, Salesloft — often enabled by default), lead scoring AI, call recording and analysis tools (Gong, Chorus, Fireflies), proposal generation tools, CRM AI features that auto-populate contact records or suggest next actions, LinkedIn Sales Navigator AI features.

High-risk scenario: an AI-generated sales communication that misrepresents product capabilities or pricing, or a call analysis tool that stores customer conversations on third-party servers.

HR

Look for: resume screening tools (often embedded in ATS platforms like Greenhouse, Lever, Workday), interview scheduling AI, candidate assessment tools, job description generation AI, employee survey analysis tools, HR chatbots for policy questions.

High-risk scenario: a resume screening tool that introduces discriminatory bias in hiring decisions — a documented source of regulatory and legal liability that standard employment practices liability policies may not cover when AI is involved.

Finance

Look for: financial forecasting models, expense report categorization AI (Expensify AI, Concur AI features), accounts payable automation, report generation assistants, ERP AI features (SAP, Oracle, NetSuite have introduced AI layers that may be enabled without an explicit IT decision), fraud detection tools.

High-risk scenario: an AI expense classification tool that introduces systematic errors in financial records, or a forecasting tool whose outputs influence material business decisions without disclosure.

Legal

Look for: contract review AI (Ironclad, Kira, Luminance), legal research assistants (Harvey, Lexis+ AI, Westlaw AI), document drafting tools, compliance monitoring AI, due diligence automation.

High-risk scenario: AI-reviewed contract misses a material clause; AI-generated legal document contains an error that results in an adverse outcome. Legal AI tools raise E&O questions that are not resolved by existing policy language.

Customer Support

Look for: customer-facing chatbots (Intercom, Zendesk AI, Freshdesk AI), ticket routing and classification AI, knowledge base generation tools, sentiment analysis AI, automated response drafting tools.

High-risk scenario: a customer service chatbot provides incorrect information about a product, service, or policy — the Air Canada chatbot precedent established that companies are bound by AI agent statements made to customers.

Engineering

Look for: code assistants (GitHub Copilot, Cursor, Codeium, Tabnine), automated testing tools, code review AI, deployment automation with AI decision layers, infrastructure optimization AI, documentation generation tools.

High-risk scenario: AI-assisted code introduces a security vulnerability or functional error that reaches production; AI-generated code infringes a third-party license.

Operations

Look for: process automation platforms with AI layers (Zapier AI, Make AI, UiPath AI), supply chain optimization AI, vendor management AI, data analysis and reporting tools, workflow automation that calls AI APIs as part of operational processes.

High-risk scenario: an automated operational workflow that escalates an AI error across dozens of downstream processes before human review catches it.

Step-by-Step Discovery Process

A complete discovery process combines technical investigation with direct employee engagement. Neither alone is sufficient.

Step 1 — IT and SaaS audit. Pull your SaaS management platform inventory (Okta, BetterCloud, Torii, or similar) and identify all applications with AI capabilities. Flag any that have AI features enabled — even if the underlying tool was approved before AI features were added.
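The flagging pass in Step 1 can be sketched as a simple keyword scan over an inventory export. This is a minimal sketch: the CSV column names (`app_name`, `features`) and the keyword list are assumptions, not the schema of any particular SaaS management platform — adapt both to whatever your platform actually exports.

```python
import csv

# Hypothetical keywords that suggest an AI capability; extend for your stack.
AI_KEYWORDS = {"ai", "copilot", "assistant", "generative"}

def flag_ai_apps(inventory_path):
    """Return app names whose feature list mentions any AI keyword.

    Assumes a CSV export with 'app_name' and 'features' columns --
    placeholder names, not a real platform schema.
    """
    flagged = []
    with open(inventory_path, newline="") as f:
        for row in csv.DictReader(f):
            features = row.get("features", "").lower()
            if any(kw in features for kw in AI_KEYWORDS):
                flagged.append(row["app_name"])
    return flagged
```

A keyword scan like this over-flags (any mention of "assistant" trips it), which is the right failure mode for discovery: false positives get cleared in review, false negatives become uninsured exposure.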

Step 2 — Browser extension scan. Browser extensions are the most common shadow AI vector. Deploy a browser extension audit across managed devices. Look specifically for AI writing assistants, screen capture AI tools, and any extension that sends data to external AI APIs.

Step 3 — Department interviews. IT scans find what’s in systems. They don’t find what employees are doing on personal devices or personal accounts. Conduct structured interviews with department leads — not asking whether employees use AI tools, but asking which AI tools they’ve found most useful. The question framing matters.

Step 4 — Expense report review. AI tool subscriptions purchased on personal credit cards and expensed are a reliable signal. Review expense report categories for software subscriptions, productivity tools, and SaaS — particularly line items under $100/month that suggest individual subscriptions.
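The expense-report filter in Step 4 reduces to a category-plus-threshold query. A minimal sketch, assuming expense rows are already exported as dictionaries with `category` and `amount` fields — the category names here are placeholders to match against your own expense system's taxonomy.

```python
# Hypothetical expense categories that commonly hide individual purchases.
SOFTWARE_CATEGORIES = {"software", "saas", "productivity tools"}

def likely_ai_subscriptions(expense_rows, max_monthly=100.0):
    """Flag software-category line items under the monthly threshold --
    the pattern the step above associates with personal AI subscriptions."""
    return [row for row in expense_rows
            if row["category"].lower() in SOFTWARE_CATEGORIES
            and row["amount"] < max_monthly]
```

The output is a candidate list for follow-up, not proof: a $29/month SaaS line item might be anything, but a cluster of them across one department usually is not.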

Step 5 — API traffic analysis. For organizations with network monitoring, review API call logs for traffic to known AI endpoints: api.openai.com, api.anthropic.com, generativelanguage.googleapis.com, and equivalent endpoints for other major AI providers. Unexpected API traffic reveals tools that bypass the application layer entirely.
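The endpoint check in Step 5 can be sketched as a scan of proxy or flow logs against the known AI hostnames. The log format assumed here (space-delimited `source_ip dest_host ...`) is a placeholder — adapt the parsing to whatever your monitoring stack emits.

```python
# Known AI API hostnames from the step above; extend as providers emerge.
AI_ENDPOINTS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def find_ai_traffic(log_lines):
    """Return (source, host) pairs for requests hitting known AI endpoints.

    Assumes a simple space-delimited log: '<source_ip> <dest_host> ...'
    -- a placeholder format, not any specific vendor's log schema.
    """
    hits = []
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 2 and parts[1] in AI_ENDPOINTS:
            hits.append((parts[0], parts[1]))
    return hits
```

Grouping the hits by source lets you trace unexpected traffic back to a device or service account that never appeared in the application-layer inventory.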

Step 6 — Employee survey. A direct, anonymous survey asking employees which AI tools they use for work produces more complete results than any technical audit. Frame it as understanding AI adoption to improve tool access, not an audit of prohibited behavior.

Classification Using the 4-Tier Framework

Once discovered, every AI tool needs to be classified before it can be governed or insured. The AI risk classification framework provides a four-tier structure based on decision autonomy and external exposure.

Tier 1 — Internal information processing. AI summarizing internal documents, drafting internal communications, analyzing internal data. Human reviews all outputs before any external action. Examples: internal knowledge base AI, meeting transcription tools, internal report generation. Governance requirement: acceptable use policy; periodic output review.

Tier 2 — External-facing interaction. AI communicating directly with customers, prospects, or external partners. Examples: customer service chatbots, AI-drafted external emails sent without individual human review, AI-generated proposals. Governance requirement: output approval workflow; accuracy monitoring; clear disclosure policies.

Tier 3 — Supervised business execution. AI initiating transactions, processing operational decisions, or taking actions with financial consequences — with defined human approval checkpoints. Examples: automated invoice processing, AI-assisted contract review with approval gates, AI-driven expense classification. Governance requirement: defined approval thresholds; human-in-the-loop for high-value transactions; error monitoring and reporting.

Tier 4 — Autonomous business execution. AI making and executing decisions with minimal human oversight. Examples: autonomous procurement agents, dynamic pricing systems, AI that can modify production systems or initiate communications at scale without per-action approval. Governance requirement: formal risk assessment before deployment; strict scope limitations; continuous monitoring; executive ownership.

For each tool discovered in the process above, assign a tier classification based on what the tool is actually doing in your environment — not what it’s marketed as. A code assistant that generates internal drafts reviewed by engineers is Tier 1. The same tool configured to automatically push code changes to production is Tier 4.
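The tier logic above can be approximated as a decision rule over two axes: whether the tool executes actions and whether a human approves each one, with external exposure separating Tiers 1 and 2. This is a simplified sketch of the framework, not a substitute for it — real classifications need judgment about what the tool actually does in your environment, not three booleans.

```python
def classify_tier(external_facing, executes_actions, approval_per_action):
    """Assign a 4-tier classification from three yes/no attributes.

    A simplified decision rule sketching the framework above: execution
    with per-action approval is supervised (Tier 3), execution without
    it is autonomous (Tier 4); information-only tools split on whether
    outputs reach external parties unreviewed.
    """
    if executes_actions:
        return 3 if approval_per_action else 4
    return 2 if external_facing else 1
```

Worked through the code-assistant example: internal drafts reviewed by engineers classify as `classify_tier(False, False, True)` → Tier 1; the same tool pushing changes to production unreviewed is `classify_tier(False, True, False)` → Tier 4.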

What To Do With the Inventory

Discovery produces a list. Three further steps transform that list into a risk management asset.

Map it against your policies. For each tool in your inventory, particularly Tier 2, 3, and 4 tools, determine whether a claim arising from that tool would be covered by your existing CGL, E&O, or cyber policies — or excluded by AI endorsement language. The AI insurance exclusions hub provides plain-language analysis of the most common exclusion forms. The mapping will reveal coverage gaps that can be addressed before renewal, not after a claim.
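The policy-mapping step reduces to a join between the inventory and your policy analysis. A minimal sketch, assuming each inventory record carries its tier and the list of policies expected to respond — the record fields and example tools here are hypothetical, and the real determination of whether a policy responds is the legal analysis the paragraph above describes, not a lookup.

```python
def coverage_gaps(inventory, min_tier=2):
    """List Tier >= min_tier tools with no policy expected to respond."""
    return [rec["tool"] for rec in inventory
            if rec["tier"] >= min_tier and not rec["covered_by"]]

# Hypothetical inventory records for illustration only.
inventory = [
    {"tool": "Support chatbot",       "tier": 2, "covered_by": []},
    {"tool": "Invoice automation",    "tier": 3, "covered_by": ["E&O"]},
    {"tool": "Meeting transcription", "tier": 1, "covered_by": ["cyber"]},
]
```

Here `coverage_gaps(inventory)` surfaces the support chatbot: external-facing, and no policy mapped to it — exactly the kind of gap worth raising before renewal rather than after a claim.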

Get a carrier-ready assessment. A discovery inventory is the input to an insurance risk assessment, not the output. A carrier-ready assessment takes the inventory, applies risk tier classifications, maps against exclusion language, documents governance controls, and quantifies financial exposure in a format that underwriters can actually use. Start with an AI risk assessment →

Work with your broker. Shadow AI discovery changes the renewal conversation. Brokers who know the full scope of a client’s AI footprint can negotiate more precisely — seeking narrow exclusions for Tier 1 use cases, exploring endorsement buybacks for governed Tier 2 and 3 workflows, and addressing Tier 4 exposures through specialty coverage or governance redesign. Find a broker with AI expertise →

The discovery process is not a one-time event. AI adoption continues to accelerate, and the shadow AI problem is structural — employees will continue to adopt tools faster than IT can evaluate them. Building discovery into annual renewal preparation is the practical solution.


Discovery is just the first step. Once you know what’s running, you need to map it against your insurance coverage and build carrier-ready documentation. Start with an AI risk assessment →