We hold ourselves to the same standard we set for clients.

Gridex advises businesses on AI governance. It would be hypocritical not to apply those same principles to our own work. Here's exactly how we use AI and where we draw lines.

Where AI enters the work.

Research and synthesis. We use large language models to accelerate research — scanning regulatory filings, summarizing carrier bulletins, identifying patterns across state legislation. The models find information faster. Humans decide what it means.

Content drafting. AI assists in drafting the articles, guides, and analysis on this site. Every piece is reviewed, fact-checked, and edited by a human before publication. AI is never the final author.

Client deliverables. When we build AI systems for clients, we use AI tools in our own development workflow. But client-facing outputs — recommendations, audit findings, compliance assessments — are human-authored and human-reviewed.

Compliance data hub. The regulatory data in our compliance hub is human-entered, not AI-generated. Every data point passes through our trust-gated review pipeline: drafted, then reviewed, then verified, each step by a person. AI doesn't get to mark its own homework.
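To make the trust gate concrete, here is a minimal sketch of that draft → reviewed → verified pipeline as a small state machine. The names (`DataPoint`, `ReviewState`, `advance`) are illustrative only, not our production tooling; the point is that a record can only move forward one stage at a time, and never on its author's say-so.

```python
from enum import Enum

class ReviewState(Enum):
    DRAFT = "draft"
    REVIEWED = "reviewed"
    VERIFIED = "verified"

# Allowed forward transitions: one stage at a time, no skipping.
_NEXT = {
    ReviewState.DRAFT: ReviewState.REVIEWED,
    ReviewState.REVIEWED: ReviewState.VERIFIED,
}

class DataPoint:
    """A human-entered regulatory data point with a review trail."""

    def __init__(self, value: str, author: str):
        self.value = value
        self.author = author               # the human who entered it
        self.state = ReviewState.DRAFT
        self.reviewers: list[str] = []     # audit trail of who advanced it

    def advance(self, reviewer: str) -> None:
        """Promote to the next state; the reviewer must not be the author."""
        if reviewer == self.author:
            raise PermissionError("author cannot review their own entry")
        nxt = _NEXT.get(self.state)
        if nxt is None:
            raise ValueError("already verified; nothing to advance")
        self.reviewers.append(reviewer)
        self.state = nxt
```

The reviewer log doubles as the auditability record described below: for any verified data point, we can show who drafted it and who signed off at each gate.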

Four commitments we don't compromise on.

  • 01 Human accountability. A person is responsible for every output. AI assists; humans decide. This applies to our published content, client work, and internal processes.
  • 02 Auditability. We can explain how any piece of content or recommendation was produced. If AI was involved, we can show where and how.
  • 03 No model training on client data. Client information is never used to train, fine-tune, or improve AI models. Full stop. We use API-based services with data processing agreements that prohibit training.
  • 04 Disclosure. If you ask us whether AI was involved in something we produced, we'll tell you honestly. We don't disguise AI-assisted work as purely human output.

How our practices map to emerging standards.

EU AI Act. Our client-facing AI deployments follow risk classification principles from the EU AI Act. We apply proportionate governance based on the risk level of each system.

Colorado SB 24-205. As a company that advises on Colorado's AI governance requirements, we apply the same impact assessment and disclosure obligations to our own high-risk AI use.

NIST AI RMF. Our internal AI governance practices align with the NIST AI Risk Management Framework's core functions: Govern, Map, Measure, Manage.

Want to discuss AI governance? Let's talk.

ryan@gridex.dev