Colorado SB 24-205 explicitly classifies AI systems that make or substantially influence lending, credit underwriting, or insurance decisions as high-risk systems. Financial services firms deploying such AI in Colorado must: conduct algorithmic impact assessments before deployment and annually; notify consumers when AI influences a consequential financial decision; provide a written explanation of the decision factors; and maintain an accessible appeal process. The Colorado Division of Insurance has also issued separate AI guidance for insurance carriers specifically addressing algorithmic underwriting and claims systems.
Applicable Regulations
First comprehensive US state law governing high-risk AI systems. Signed May 17, 2024; compliance deadline extended to June 30, 2026 by SB 25B-004. Imposes obligations on both developers and deployers of AI systems that make or substantially influence consequential decisions affecting consumers.
Key Requirements
Impact Assessment: Complete documented impact assessments annually and within 90 days of substantial modifications, covering discrimination risks, data inputs/outputs, and mitigation measures
Consumer Notice: Notify consumers when a high-risk AI system makes or substantially influences a consequential decision about them
Correction & Appeal Rights: Allow consumers to correct inaccurate personal data and to appeal adverse decisions through human review where technically feasible
Developer Disclosure: Developers must publish statements describing their high-risk systems and discrimination risk management, and must supply deployers with the documentation needed to complete impact assessments
Effective: 2026-06-30. Penalties: enforcement by the Colorado Attorney General; violations are treated as deceptive trade practices under the Colorado Consumer Protection Act.
Industry Context
Financial Services & Fintech
Banks, credit unions, investment firms, fintech companies, and financial advisors that deploy AI for credit decisioning, underwriting, portfolio management, fraud detection, and customer engagement. These firms face overlapping state AI obligations and federal financial regulations (ECOA, FCRA, Dodd-Frank), creating a layered compliance environment where state AI laws add requirements on top of — not in place of — existing federal frameworks.
Typical Compliance Gaps
- No disparate impact testing of AI credit or underwriting models beyond federal minimums
- No state-level AI disclosure to consumers about automated financial decisions
- Lack of documentation mapping AI model outputs to specific adverse actions
- Assumption that federal banking compliance satisfies state AI law obligations
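On the first gap above, one common screening heuristic for disparate impact is the four-fifths (80%) rule drawn from EEOC employment-selection guidance; it is not defined in SB 24-205 itself, and passing it does not establish legal compliance. A minimal sketch, assuming approval decisions are already labeled with a protected-class group (group names and the 0.8 threshold here are illustrative, not statutory values):

```python
# Illustrative four-fifths (80%) rule screen for a credit model's approvals.
# This is a simplified heuristic, not a substitute for a legal fair-lending
# analysis or the impact assessment SB 24-205 requires.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved: bool) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def adverse_impact_ratios(decisions):
    """Each group's approval rate divided by the highest group's rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical data: group A approved 80/100, group B approved 50/100.
decisions = ([("A", True)] * 80 + [("A", False)] * 20
             + [("B", True)] * 50 + [("B", False)] * 50)
ratios = adverse_impact_ratios(decisions)
flagged = {g: r for g, r in ratios.items() if r < 0.8}  # common 80% screen
```

In this hypothetical, group B's ratio is 0.5 / 0.8 = 0.625, below the 0.8 screen, which would prompt further review of the model rather than prove discrimination.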
Related Questions
- Does Colorado require AI impact assessments? Yes. Colorado SB 24-205 requires deployers of high-risk AI systems to complete impact assessments before deployment and annually thereafter.
- What is a high-risk AI system under Colorado law? Colorado SB 24-205 defines a high-risk AI system as one that makes, or is a substantial factor in making, consequential decisions affecting Colorado residents in domains such as employment, education, financial services, housing, and insurance. Being a substantial factor means the system produces an output, such as a recommendation, score, or classification, that is used in making the consequential decision, even when a human deployer makes the final call.
Disclaimer: This content is provided for informational purposes only and does not constitute legal advice. AI regulations and insurance policy terms change frequently. Consult with a qualified attorney or insurance professional for advice specific to your situation. Gridex makes no warranties regarding the accuracy or completeness of this information.