Gridex advises businesses on AI governance. It would be hypocritical not to apply those same principles to our own work. Here's exactly how we use AI and where we draw the lines.
Research and synthesis. We use large language models to accelerate research — scanning regulatory filings, summarizing carrier bulletins, identifying patterns across state legislation. The models find information faster. Humans decide what it means.
Content drafting. Articles, guides, and analysis on this site involve AI in the drafting process. Every piece is reviewed, fact-checked, and edited by a human before publication. AI is never the final author.
Client deliverables. When we build AI systems for clients, we use AI tools in our own development workflow. But client-facing outputs — recommendations, audit findings, compliance assessments — are human-authored and human-reviewed.
Compliance data hub. The regulatory data in our compliance hub is human-entered, not AI-generated. Every data point passes through our trust-gated review system: draft, reviewed, verified. AI doesn't get to mark its own homework.
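To make the "draft, reviewed, verified" gate concrete, here is a minimal illustrative sketch, not Gridex's actual system: all class and field names are assumptions. It shows the two properties the paragraph describes, records advance one state at a time, and the person who entered a data point can never be the one who promotes it.

```python
# Illustrative sketch only (hypothetical names, not Gridex's real code):
# a trust-gated review pipeline where a record moves draft -> reviewed
# -> verified, and its author can never review or verify it.

STATES = ["draft", "reviewed", "verified"]

class DataPoint:
    def __init__(self, value, author):
        self.value = value    # the regulatory data being recorded
        self.author = author  # who entered it (a human, per our policy)
        self.state = "draft"  # every entry starts as a draft

    def advance(self, reviewer):
        # Records advance exactly one state per review pass.
        if reviewer == self.author:
            raise PermissionError("authors cannot review their own entries")
        idx = STATES.index(self.state)
        if idx == len(STATES) - 1:
            raise ValueError("record is already verified")
        self.state = STATES[idx + 1]

# Example: two different humans must sign off before verification.
dp = DataPoint("example regulatory entry", author="alice")
dp.advance(reviewer="bob")    # draft -> reviewed
dp.advance(reviewer="carol")  # reviewed -> verified
```

The design choice mirrored here is separation of duties: "marking your own homework" is blocked structurally, not by policy alone.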
EU AI Act. Our client-facing AI deployments follow risk classification principles from the EU AI Act. We apply proportionate governance based on the risk level of each system.
Colorado SB 24-205. As a company that advises on Colorado's AI governance requirements, we apply the same impact assessment and disclosure obligations to our own high-risk AI use.
NIST AI RMF. Our internal AI governance practices align with the NIST AI Risk Management Framework's core functions: Govern, Map, Measure, Manage.