AI Compliance for Financial Services & Fintech in Texas

Last verified: March 24, 2026

Regulatory Status

HB-149

Texas Responsible Artificial Intelligence Governance Act (TRAIGA)

Status: Enacted

Signed June 22, 2025; effective January 1, 2026. TRAIGA is Texas's primary comprehensive AI governance law from the 89th Legislature. Its prohibited-practices provisions apply to all entities that promote, advertise, or conduct business in Texas, produce products or services used by Texas residents, or develop or deploy AI systems in the state.

Key prohibitions cover behavioral manipulation (inciting self-harm, violence, or criminal activity), government social scoring, unlawful discrimination, biometric capture without consent, and infringement of constitutional rights via AI. Governmental agencies and healthcare providers must disclose to consumers when they are interacting with an AI system, using clear and conspicuous language free of dark patterns.

Enforcement rests exclusively with the Texas Attorney General; there is no private right of action. A 36-month regulatory sandbox program allows companies to test AI systems with certain requirements waived. The law also establishes the seven-member Texas Artificial Intelligence Council to advise on ethical, privacy, and public safety implications, though the Council cannot adopt binding rules.

Effective: 2026-01-01

Key Requirements

Prohibition on Behavioral Manipulation: Cannot develop or deploy AI systems intentionally designed to incite or encourage a person to commit physical self-harm (including suicide), harm another person, or engage in criminal activity
Government Social Scoring Ban: Government entities cannot use AI to assign detrimental categorical scores to individuals based on their behavior or personal characteristics
Biometric Capture Prohibition: Cannot use AI with publicly available images or data to uniquely identify individuals via biometric identifiers without consent, subject to law enforcement and fraud prevention exceptions
Unlawful Discrimination Prohibition: Cannot intentionally deploy AI to discriminate against protected classes under state and federal law; disparate impact alone is insufficient to prove intent
Constitutional Rights Protection: Cannot develop or deploy AI systems designed to infringe constitutional rights or target individuals based on constitutionally protected characteristics
AI Interaction Disclosure: Governmental agencies and healthcare providers must disclose to consumers, before or at the time of interaction, that they are interacting with an AI system; disclosures must be clear and conspicuous, with no dark patterns
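For teams building consumer-facing AI interfaces subject to the disclosure requirement above, the "before or at the time of interaction" timing can be enforced mechanically. A minimal sketch, assuming a chat-style session object; the disclosure wording and class structure here are illustrative assumptions, not statutory text or a prescribed implementation.

```python
# Minimal sketch: guarantee an AI-interaction disclosure is delivered
# at the start of a consumer-facing AI session, exactly once.
# The wording below is illustrative, not statutory language.

DISCLOSURE = (
    "You are interacting with an artificial intelligence system, "
    "not a human."
)

class AISession:
    """Hypothetical session wrapper that front-loads the disclosure."""

    def __init__(self):
        self.disclosed = False
        self.transcript = []  # list of (role, text) tuples

    def exchange(self, user_message, ai_reply):
        # Prepend the disclosure before the consumer's first exchange.
        if not self.disclosed:
            self.transcript.append(("system", DISCLOSURE))
            self.disclosed = True
        self.transcript.append(("user", user_message))
        self.transcript.append(("ai", ai_reply))
        return self.transcript
```

Keeping the disclosure inside the session object, rather than in UI code, makes it harder for a redesigned front end to accidentally drop it, which supports the "clear and conspicuous, no dark patterns" standard.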

Insurance Implications

Relevant policy types: directors and officers (D&O), errors and omissions (E&O), cyber liability, and fiduciary liability

Compliance Gaps to Address

No disparate impact testing of AI credit or underwriting models beyond federal minimums
No state-level AI disclosure to consumers about automated financial decisions
Lack of documentation mapping AI model outputs to specific adverse actions
Assumption that federal banking compliance satisfies state AI law obligations
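The first gap above, disparate impact testing of credit or underwriting models, can be illustrated with a minimal sketch based on the four-fifths (80%) rule from federal selection guidance, a common starting heuristic. The group labels, counts, and 0.8 threshold are illustrative assumptions; this is not legal advice or a complete fair-lending test.

```python
# Minimal sketch of an adverse impact ratio (four-fifths rule) check
# for a credit model's approval decisions. Group names, counts, and
# the 0.8 threshold are illustrative assumptions.

def adverse_impact_ratios(approvals_by_group):
    """approvals_by_group maps group -> (approved_count, total_applicants).

    Returns each group's approval rate divided by the approval rate
    of the most-favored group.
    """
    rates = {g: approved / total
             for g, (approved, total) in approvals_by_group.items()}
    highest = max(rates.values())
    return {g: rate / highest for g, rate in rates.items()}

def flag_groups(ratios, threshold=0.8):
    """Groups whose ratio falls below the four-fifths threshold."""
    return sorted(g for g, r in ratios.items() if r < threshold)

if __name__ == "__main__":
    # Hypothetical approval counts for two applicant groups.
    data = {"group_a": (90, 100), "group_b": (60, 100)}
    ratios = adverse_impact_ratios(data)
    print(flag_groups(ratios))  # group_b: 0.60 / 0.90 is below 0.8
```

Note that under TRAIGA disparate impact alone does not establish intent, so a check like this is a monitoring and documentation tool rather than a liability test; it also helps close the third gap by tying model outputs to documented review.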