Colorado, Connecticut (via a proposed bill), and Illinois require disclosure when AI is used in consequential decisions; California uniquely requires watermarking of AI-generated content.
Applicable Regulations
Colorado AI Act (SB 24-205)
First comprehensive US state law governing high-risk AI systems. Signed May 17, 2024; compliance deadline extended to June 30, 2026 by SB 25B-004. Imposes obligations on both developers and deployers of AI systems that make or substantially influence consequential decisions affecting consumers.
Key Requirements
- Impact Assessments: Complete documented impact assessments annually and within 90 days of any substantial modification, covering discrimination risks, data inputs and outputs, and mitigation measures
- Consumer Notice: Notify consumers when a high-risk AI system makes or substantially influences a consequential decision about them
- Correction and Appeal Rights: Allow consumers to correct inaccurate personal data and to appeal adverse decisions through human review where technically feasible
- Developer Disclosure: Publish statements describing high-risk systems and how discrimination risks are managed, and supply deployers with the documentation needed to complete impact assessments
Effective: 2026-06-30. Penalties: Enforcement by Colorado Attorney General; violations are treated as deceptive trade practices under the Colorado Consumer Protection Act.
Connecticut AI Governance Bill (proposed)
Would impose comprehensive AI governance requirements, including risk assessments, disclosure obligations, and oversight mechanisms, on businesses deploying AI systems in Connecticut.
Key Requirements
- Risk Assessment: Conduct and document AI risk assessments before deployment
- Governance Framework: Establish internal AI governance policies and procedures
- Incident Reporting: Report AI-related incidents to the state within 72 hours
Effective: 2027-01-01. Penalties: Proposed enforcement by Connecticut Attorney General; fines are graduated based on revenue and severity of the violation.
California AI Transparency Act (SB 942)
Requires covered providers of large-scale generative AI systems (more than 1 million monthly visitors or users) to make AI-generated content detectable through a free public detection tool and embedded technical watermarks in image, video, and audio output. Signed September 19, 2024.
Key Requirements
- Free AI Detection Tool: Offer a free, publicly accessible tool allowing anyone to assess whether image, video, or audio content was created or altered by the provider's generative AI system
- Manifest Disclosure: Give users the option to attach a clear, conspicuous, human-readable disclosure to AI-generated content
- Latent Technical Disclosure: Embed technical metadata (provider name, system version, creation date, unique identifier) in AI-generated content, detectable by the provider's tool (see the sketch after this section)
- Third-Party Licensee Enforcement: Revoke a licensee's license within 96 hours if the licensee disables the system's disclosure capabilities
Effective: 2026-01-01. Penalties: Civil penalties of $5,000 per violation, with each day constituting a separate violation.
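The statute specifies what the latent disclosure must convey (provider name, system version, creation date, unique identifier) but not a particular embedding format; in practice, providers are expected to adopt a signed provenance standard such as C2PA. As a minimal illustration only, the sketch below writes those fields into a PNG text chunk with Pillow and reads them back. The chunk name, field names, and provider values are all hypothetical, and a plain text chunk is trivially stripped, so this is not a compliant watermark.

```python
# Minimal illustration of an SB 942-style latent disclosure, NOT a compliant
# implementation: a plain PNG text chunk is trivially stripped, so a real
# provider would use a signed provenance standard such as C2PA instead.
import json
import uuid
from datetime import datetime, timezone

from PIL import Image
from PIL.PngImagePlugin import PngInfo

DISCLOSURE_KEY = "ai_latent_disclosure"  # hypothetical chunk name


def embed_disclosure(src_path: str, dst_path: str,
                     provider: str, system_version: str) -> str:
    """Write provider name, version, timestamp, and a unique ID into the image."""
    disclosure = {
        "provider": provider,
        "system_version": system_version,
        "created": datetime.now(timezone.utc).isoformat(),
        "content_id": str(uuid.uuid4()),
    }
    meta = PngInfo()
    meta.add_text(DISCLOSURE_KEY, json.dumps(disclosure))
    with Image.open(src_path) as img:
        img.save(dst_path, pnginfo=meta)
    return disclosure["content_id"]


def read_disclosure(path: str) -> dict | None:
    """Return the embedded disclosure dict, or None if the image has none."""
    with Image.open(path) as img:
        raw = img.text.get(DISCLOSURE_KEY)  # .text collects PNG text chunks
    return json.loads(raw) if raw else None


if __name__ == "__main__":
    # Stand-in for model output so the demo is self-contained.
    Image.new("RGB", (64, 64), "gray").save("generated.png")
    embed_disclosure("generated.png", "disclosed.png",
                     provider="ExampleAI", system_version="2.1")
    print(read_disclosure("disclosed.png"))
```

A production system would additionally sign the manifest and bind it to the pixel data, as C2PA does, so that stripping or re-encoding the disclosure is detectable.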
Illinois AI Video Interview Act Expansion
Expands the existing AI Video Interview Act to cover broader AI-driven employment decisions. Requires consent and disclosure for AI analysis of candidates.
Key Requirements
- Consent Requirement: Obtain explicit consent before AI analysis of candidates
- Data Retention: Follow data retention limits for AI-processed data
- Annual Reporting: Report AI usage in employment decisions annually
Effective: 2025-01-01. Penalties: Enforcement by Illinois Attorney General. Civil penalties of up to $500 per negligent violation and $2,500 per intentional violation.
Related Questions
- What are the Colorado AI Act consumer notice requirements? Under Colorado SB 24-205, deployers of high-risk AI systems must notify consumers when an AI system makes, or is a substantial factor in making, a consequential decision affecting them. The notice must be delivered before or at the time of the decision, explain that AI was used, describe the type of data processed, and inform the consumer of the right to appeal an adverse decision or request human review.
- Who must comply with the California AI Transparency Act? California SB 942 applies to covered providers of generative AI systems that are publicly accessible in California and have more than 1 million monthly visitors or users. Covered providers must embed machine-readable latent disclosures in AI-generated image, video, and audio content (provenance standards such as C2PA are one way to do this), provide a free, publicly accessible tool for detecting content generated by their systems, and give users the option to attach manifest disclosures.
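The law requires the detection tool to be free and publicly accessible but does not dictate its interface. Continuing the hypothetical scheme from the embedding sketch above, a minimal command-line check might look like the following; DISCLOSURE_KEY and the JSON fields are assumptions carried over from that sketch, not anything the statute names.

```python
# Hypothetical public "detection tool" check: reports whether a file carries
# this provider's latent disclosure. Pairs with the embedding sketch above;
# DISCLOSURE_KEY and the JSON fields are assumed, not statutory.
import json
import sys

from PIL import Image, UnidentifiedImageError

DISCLOSURE_KEY = "ai_latent_disclosure"  # must match the embedding side


def assess(path: str) -> str:
    try:
        with Image.open(path) as img:
            # JPEGs and other formats lack .text, so default to an empty dict.
            raw = getattr(img, "text", {}).get(DISCLOSURE_KEY)
    except UnidentifiedImageError:
        return "unsupported file type"
    if raw is None:
        # Absence of a mark does not prove the content is human-made; the
        # disclosure may have been stripped, so report only what was found.
        return "no disclosure found"
    d = json.loads(raw)
    return (f"generated by {d['provider']} "
            f"(system {d['system_version']}, id {d['content_id']})")


if __name__ == "__main__":
    print(assess(sys.argv[1]))
```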
Disclaimer: This content is provided for informational purposes only and does not constitute legal advice. AI regulations and insurance policy terms change frequently. Consult with a qualified attorney or insurance professional for advice specific to your situation. Gridex makes no warranties regarding the accuracy or completeness of this information.