Yes. Connecticut SB-1103 proposes a comprehensive framework for high-risk AI systems deployed in Connecticut. The bill would require deployers to conduct algorithmic impact assessments, notify consumers when AI substantially influences consequential decisions, give consumers the right to appeal adverse decisions, and submit annual compliance reports. Connecticut's proposal closely mirrors Colorado SB 24-205 in structure and scope.
Applicable Regulations
Proposes comprehensive AI governance requirements including risk assessments, disclosure obligations, and oversight mechanisms for businesses deploying AI systems in Connecticut.
Key Requirements
Risk Assessment: Conduct and document AI risk assessments before deployment
Governance Framework: Establish internal AI governance policies and procedures
Incident Reporting: Report AI-related incidents to the state within 72 hours
Effective: 2027-01-01
Penalties: Proposed enforcement by the Connecticut Attorney General, with graduated fines based on revenue and severity of violation.
Related Questions
- What are Connecticut's high-risk AI system requirements? Connecticut SB-1103 would require deployers of high-risk AI systems to: (1) conduct and document algorithmic impact assessments before deployment and at least annually thereafter; (2) notify consumers before an AI system makes or substantially influences a consequential decision affecting them; (3) provide a clear explanation of the decision factors; (4) establish an accessible appeal process allowing consumers to request human review; and (5) submit annual compliance reports to the Connecticut Attorney General.
Disclaimer: This content is provided for informational purposes only and does not constitute legal advice. AI regulations and insurance policy terms change frequently. Consult with a qualified attorney or insurance professional for advice specific to your situation. Gridex makes no warranties regarding the accuracy or completeness of this information.