Colorado SB 24-205 and Connecticut SB-1103 both provide consumers the right to appeal adverse decisions made by high-risk AI systems and request human review.
Applicable Regulations
Colorado AI Act (SB 24-205): the first comprehensive US state law governing high-risk AI systems. Signed May 17, 2024; the compliance deadline was extended to June 30, 2026 by SB 25B-004. Imposes obligations on both developers and deployers of AI systems that make, or substantially influence, consequential decisions affecting consumers.
Key Requirements
Impact Assessment: Complete documented impact assessments annually and within 90 days of any substantial modification, covering discrimination risks, data inputs/outputs, and mitigation measures
Consumer Notice: Notify consumers when a high-risk AI system makes or substantially influences a consequential decision about them
Correction & Appeal Rights: Allow consumers to correct inaccurate personal data and to appeal adverse decisions through human review where technically feasible
Developer Disclosure: Developers must publish statements describing their high-risk systems and discrimination risk management, and must supply deployers with the documentation needed to complete impact assessments
Effective: 2026-06-30. Penalties: Enforcement by the Colorado Attorney General; violations are treated as deceptive trade practices under the Colorado Consumer Protection Act.
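As a purely illustrative sketch (not compliance tooling or legal advice), the assessment cadence above, at least annually and within 90 days of any substantial modification, can be expressed as a small date calculation. The function and variable names here are hypothetical:

```python
from datetime import date, timedelta

def next_assessment_due(last_assessment: date, modifications: list[date]) -> date:
    """Hypothetical helper: earliest date a new impact assessment is due,
    assuming an annual cadence plus a 90-day clock started by each
    substantial modification made after the last assessment."""
    annual_due = last_assessment + timedelta(days=365)
    mod_deadlines = [m + timedelta(days=90) for m in modifications if m > last_assessment]
    return min([annual_due, *mod_deadlines])

# Example: assessed Jan 15, 2026; substantially modified Aug 1, 2026.
# The 90-day modification clock (Oct 30, 2026) comes before the annual date.
due = next_assessment_due(date(2026, 1, 15), [date(2026, 8, 1)])
```

Whether a given change counts as a "substantial modification" is a legal judgment the sketch does not attempt to model.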
Connecticut SB-1103: proposes comprehensive AI governance requirements, including risk assessments, disclosure obligations, and oversight mechanisms, for businesses deploying AI systems in Connecticut.
Key Requirements
Risk Assessment: Conduct and document AI risk assessments before deployment
Governance Framework: Establish internal AI governance policies and procedures
Incident Reporting: Report AI-related incidents to the state within 72 hours
Effective: 2027-01-01. Penalties: Proposed enforcement by the Connecticut Attorney General, with graduated fines based on revenue and severity of the violation.
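The proposed 72-hour incident-reporting window lends itself to a simple deadline check. This is an illustrative sketch only; the helper names are hypothetical, and the bill's actual trigger for when the clock starts (e.g., discovery vs. occurrence) should be confirmed against the text:

```python
from datetime import datetime, timedelta

# Assumed reporting window from the proposed rule: 72 hours.
REPORTING_WINDOW = timedelta(hours=72)

def reporting_deadline(incident_discovered: datetime) -> datetime:
    """Hypothetical helper: deadline for reporting an AI-related incident,
    assuming the 72-hour clock starts at discovery."""
    return incident_discovered + REPORTING_WINDOW

def is_report_timely(incident_discovered: datetime, filed: datetime) -> bool:
    """True if the report was filed within the 72-hour window."""
    return filed <= reporting_deadline(incident_discovered)

# Example: incident discovered March 1, 2027 at 09:00.
discovered = datetime(2027, 3, 1, 9, 0)
deadline = reporting_deadline(discovered)  # March 4, 2027 at 09:00
```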
Related Questions
- Does the Colorado AI Act give consumers appeal rights? Yes. Colorado SB 24-205 grants consumers the right to appeal any adverse consequential decision made with the substantial involvement of a high-risk AI system. Consumers may request a human review of the decision and receive a written explanation of the factors that led to the outcome. Deployers must establish and publicize an accessible appeal process.
- What are Connecticut's high-risk AI system requirements? Connecticut SB-1103 requires deployers of high-risk AI systems to: (1) conduct and document algorithmic impact assessments before deployment and at least annually thereafter; (2) notify consumers before an AI system makes or substantially influences a consequential decision affecting them; (3) provide a clear explanation of the decision factors; (4) establish an accessible appeal process allowing consumers to request human review; and (5) submit annual compliance reports to the Connecticut Attorney General.
Disclaimer: This content is provided for informational purposes only and does not constitute legal advice. AI regulations and insurance policy terms change frequently. Consult with a qualified attorney or insurance professional for advice specific to your situation. Gridex makes no warranties regarding the accuracy or completeness of this information.