Does Connecticut regulate AI?

Last verified: March 24, 2026
Not yet, but legislation is pending. Connecticut SB-1103, currently a proposed bill, would establish a comprehensive framework for high-risk AI systems deployed in Connecticut. If enacted, the law would require deployers to conduct algorithmic impact assessments, notify consumers when AI substantially influences consequential decisions, give consumers the right to appeal adverse decisions, and submit annual compliance reports. The proposal closely mirrors Colorado SB 24-205 in structure and scope.

Applicable Regulations

SB-1103

Connecticut AI Governance Act

Status: Proposed

Proposes comprehensive AI governance requirements including risk assessments, disclosure obligations, and oversight mechanisms for businesses deploying AI systems in Connecticut.

Key Requirements

Risk Assessment: Conduct and document AI risk assessments before deployment
Governance Framework: Establish internal AI governance policies and procedures
Incident Reporting: Report AI-related incidents to the state within 72 hours
Effective: 2027-01-01 (proposed)
Penalties: Proposed enforcement by the Connecticut Attorney General, with graduated fines based on revenue and severity of violation.


Related Questions

  • What are Connecticut's high-risk AI system requirements? As proposed, Connecticut SB-1103 would require deployers of high-risk AI systems to: (1) conduct and document algorithmic impact assessments before deployment and at least annually thereafter; (2) notify consumers before an AI system makes or substantially influences a consequential decision affecting them; (3) provide a clear explanation of the decision factors; (4) establish an accessible appeal process allowing consumers to request human review; and (5) submit annual compliance reports to the Connecticut Attorney General.