What should an AI governance framework include?

Last verified: March 24, 2026
An AI governance framework should include an AI use policy, risk classification system, impact assessment process, documentation requirements, incident response procedures, and regular audit mechanisms aligned with state regulatory requirements.
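The inventory and risk-classification pieces of such a framework are often easiest to see as a data structure. Below is a minimal sketch, assuming a simple three-tier classification; all names (`RiskTier`, `AISystemRecord`, `required_artifacts`) are illustrative, not terms from SB 24-205, which itself only singles out high-risk systems.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List

class RiskTier(Enum):
    """Illustrative tiers; SB 24-205 only distinguishes high-risk systems."""
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"

@dataclass
class AISystemRecord:
    """One inventory entry in a governance framework (hypothetical schema)."""
    name: str
    purpose: str
    tier: RiskTier
    makes_consequential_decisions: bool

def required_artifacts(record: AISystemRecord) -> List[str]:
    """Map a system's risk tier to the documentation the framework requires."""
    artifacts = ["use-policy acknowledgment"]
    if record.tier is RiskTier.HIGH or record.makes_consequential_decisions:
        artifacts += ["impact assessment", "consumer notice", "incident response plan"]
    return artifacts
```

A deployer might run `required_artifacts` over the whole inventory during an audit to flag systems missing documentation.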

Applicable Regulations

SB 24-205: Colorado AI Act (Consumer Protections for Artificial Intelligence)

Status: Enacted

First comprehensive US state law governing high-risk AI systems. Signed May 17, 2024; compliance deadline extended to June 30, 2026 by SB 25B-004. Imposes obligations on both developers and deployers of AI systems that make or substantially influence consequential decisions affecting consumers.

Key Requirements

  • Impact Assessment: Complete documented impact assessments annually and within 90 days of any substantial modification, covering discrimination risks, data inputs and outputs, and mitigation measures.
  • Consumer Notice: Notify consumers when a high-risk AI system makes or substantially influences a consequential decision about them.
  • Correction & Appeal Rights: Allow consumers to correct inaccurate personal data and appeal adverse decisions through human review where technically feasible.
  • Developer Disclosure: Developers must publish statements describing their high-risk systems and discrimination risk management, and supply deployers with the documentation needed for impact assessments.
Effective: June 30, 2026. Enforcement: Colorado Attorney General; violations are treated as deceptive trade practices under the Colorado Consumer Protection Act.
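The assessment cadence above (annual, plus within 90 days of a substantial modification) reduces to a simple date computation. A minimal sketch, with an illustrative function name and the 365-day and 90-day windows taken from the requirement as stated:

```python
from datetime import date, timedelta
from typing import Optional

def next_assessment_due(last_assessment: date,
                        last_substantial_mod: Optional[date] = None) -> date:
    """Earliest upcoming impact-assessment deadline: one year after the
    last assessment, or 90 days after a substantial modification made
    since that assessment, whichever comes first."""
    due = last_assessment + timedelta(days=365)
    if last_substantial_mod is not None and last_substantial_mod > last_assessment:
        due = min(due, last_substantial_mod + timedelta(days=90))
    return due
```

For example, a system assessed on July 1, 2026 and substantially modified on September 1, 2026 would be due again by November 30, 2026, not the following July.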


Related Questions

  • What is an AI impact assessment? An AI impact assessment is a documented evaluation of an AI system's potential risks, including bias, privacy, and safety impacts, required by Colorado SB 24-205 and Connecticut SB-1103 before deploying high-risk AI systems.
  • What AI documentation do insurers require? Insurers increasingly want documented AI governance programs, risk assessments, and usage inventories when underwriting technology-related policies. Hamilton's sublimit endorsement explicitly rewards governance documentation with higher coverage limits.