The Bottom Line
- On May 12, 2026, the Colorado legislature voted to repeal and replace the Colorado Artificial Intelligence Act – the first comprehensive state AI law in the United States – with a significantly scaled-back, more business-friendly framework.
- The move marks a dramatic reversal for a state that just two years ago positioned itself at the forefront of AI regulation in America.
- For businesses navigating the rapidly shifting landscape of AI compliance, this development carries major implications and signals a broader national trend toward regulatory restraint.
The Original Colorado AI Act
In May 2024, Colorado made national headlines by enacting the Colorado AI Act (SB 24-205), which established sweeping obligations for developers and deployers of “high-risk” artificial intelligence systems. The law was originally set to take effect on February 1, 2026, but was later delayed until June 2026. It required AI developers and those deploying high-risk AI systems to take several steps to protect consumers from the risks of algorithmic discrimination. For example, businesses using “high-risk” AI systems were obligated to conduct impact assessments, provide notice to consumers when AI systems were used to make “consequential decisions” affecting them, implement robust governance programs, and disclose processes used to guard against algorithmic discrimination.
Very soon after its enactment, AI industry participants and even Colorado Governor Jared Polis raised concerns that the law went too far. As highlighted in our prior alert, it was widely viewed as the most aggressive state-level AI regulatory framework in the country and drew comparisons to the European Union’s AI Act. As we noted in our earlier analysis, the Colorado AI Act also represented a watershed moment in U.S. AI governance, signaling that states would not wait for Congress to act before imposing significant compliance obligations on AI developers and deployers.
Key Changes in the New Law
The replacement legislation strips away many of the most burdensome requirements of the original Colorado AI Act while retaining a more modest set of obligations. Here is what businesses should understand about the new framework.
Narrow Focus on Automated Decision-Making Technology Making Consequential Decisions
The new Colorado AI Act eliminates the “high-risk” classification framework that formed the backbone of the original law. Instead, the new law focuses on automated decision-making technology (ADMT) used to “materially influence a consequential decision.” The new statute defines ADMT narrowly as “a technology that processes personal data and uses computation to generate output” used to “make, guide, or assist a decision, judgment, or determination concerning an individual.” It also defines a “consequential decision” as one relating to a consumer’s ability to access opportunities, services, and benefits in many of the same areas covered by the original Act: education; employment; real estate; financial services; insurance; health care; and government services and public benefits. The approach to ADMT echoes the approach taken by the California Privacy Protection Agency (CalPrivacy) on similar issues under the California Consumer Privacy Act (CCPA).
The new law thus remains focused on ensuring that algorithms do not automatically make recommendations or decisions about an individual based solely on their personal data, such as their race, ethnicity, age, income, or other demographic information.
However, the new Colorado AI Act expressly excludes many AI use cases from its purview, which will lend clarity to those developing and deploying AI technologies for purposes beyond those covered by the law’s narrow definitions. Specifically:
- “Automated Decision-Making Technology” does not include:
- Malware protection and antivirus software
- Calculators and spell-checking tools
- Networking and data storage services
- AI technologies that summarize, organize, translate, draft, or otherwise present information to human users for review or administrative processing
  - AI chat tools that provide information, make recommendations, answer questions, or generate content (so long as they are not used to make a consequential decision about an individual)
- The following AI use cases do not make “Consequential Decisions”:
- Low-stakes or routine uses
- Advertising and content tools
- Basic spreadsheets
- Summarizing or organizing information
- Data-processing tasks
- Cybersecurity, spam filtering, and similar security services
- Fraud prevention
- Routine academic administration and student support services
Minimized Obligations of Developers and Deployers
The new legislation focuses on a narrower set of transparency and accountability measures required of developers and businesses deploying AI systems. They are no longer obligated to conduct impact assessments, implement robust governance programs, or disclose processes used to guard against algorithmic discrimination.
Developers of ADMT that makes consequential decisions about consumers must simply inform users of the technology’s intended use and of any known harmful or inappropriate uses for which it should not be employed, along with its known limitations and risks. They must also disclose, to the extent known, the types of data, including personal data, used to train the technology, and they must provide users with instructions for appropriate use, monitoring, and human review.
Deployers have two primary disclosure obligations: (1) to provide clear and conspicuous prior notice to consumers if ADMT will be used to make a consequential decision about them, along with instructions for a consumer to obtain additional information about this; and (2) to notify a consumer if the ADMT’s decision results in an adverse outcome, explaining the nature of the decision and the process for the consumer to request additional information about the ADMT, the inputs it used, and the data that it relied upon to make the consequential decision. Deployers must also provide an opportunity for meaningful human review and reconsideration if requested.
Effective Date
Once signed by Governor Jared Polis, the new law is expected to take effect on January 1, 2027. Notably, because the original Colorado AI Act’s February 1, 2026, effective date had already been postponed to June 1, 2026, businesses that had been preparing for compliance with the original framework should now recalibrate their efforts.
What Drove This Change
The repeal of the Colorado AI Act did not happen in a vacuum. Several converging forces — political, economic, and legal — drove the legislature to abandon its original approach.
Perhaps the single most significant catalyst was the Executive Order issued by the White House on December 11, 2025, discussed in our prior alert. That Executive Order:
- established as official U.S. policy that AI regulation should impose minimal burdens on businesses, so that the country can maintain dominance in the global AI race;
- criticized the patchwork of state AI laws and proposed regulating AI at the federal level; and
- directed federal agencies to identify state laws imposing undue burdens on AI innovation, and to take steps to challenge such laws and to penalize the states enacting them.
Building on this Executive Order, the administration issued a comprehensive AI Policy Framework in March 2026 that further articulated the federal government’s preference for a light-touch, innovation-first, national regulatory approach. The framework explicitly called on states to harmonize their AI laws with federal policy and warned that state regulations inconsistent with the framework’s principles could face preemption challenges. This dynamic, should it continue, effectively creates a ceiling on how far states can go in regulating AI without risking a collision with federal authority. For businesses, this means that the most burdensome state AI laws may have a limited shelf life – either because states voluntarily scale them back (as Colorado has done) or because federal action forces the issue.
This federal policy shift is mirrored in the broader political environment. As AI has become an increasingly central pillar of the U.S. economy and national security strategy, the bipartisan appetite for aggressive regulation appears to be diminishing. In Colorado, legislators who initially supported the original AI Act faced constituent pressure from both the business community and workers who feared that regulatory costs could slow AI-driven economic growth in the state. Indeed, the business community mounted a sustained and effective campaign against the original Colorado AI Act. Technology companies, trade associations, and business groups argued that the law’s compliance costs were disproportionate, that its broad definitions created unacceptable legal uncertainty, and that Colorado risked driving AI investment and talent to other states.
What This Signals for the Future
The repeal and replacement of the Colorado AI Act is not a mere amendment or technical adjustment. It represents a fundamental shift in the state’s approach to AI governance. Colorado has moved from prioritizing assertive regulation of artificial intelligence to favoring fewer regulations in the interest of innovation, economic competitiveness, and alignment with emerging federal policy. While the new AI Act will still be one of the more robust state AI laws in the country, the comprehensive regulatory architecture the state had initially established has now been replaced with a narrower, less prescriptive set of requirements.
This pivot is particularly notable because Colorado’s original law had served as a model and reference point for several other states considering their own AI legislation. The state’s new approach sends a powerful signal to state legislatures across the country that the political and policy winds have shifted decisively in favor of a lighter regulatory touch.
The Colorado legislature’s decision is not an isolated event. It may be a harbinger of a broader national trend that will unfold over the coming months and into 2027. Some likely outcomes of these developments include:
- Early-Adopter States Will Amend or Replace Their AI Laws. States that passed comprehensive AI legislation in 2024 and early 2025, inspired in part by Colorado’s original AI Act, may decide to revisit those frameworks. Legislators in these states are watching Colorado closely and will face similar pressure from business constituencies and federal policy signals to adopt less onerous approaches. In recent months, most newer state AI laws have focused on specific issues (such as synthetic performers) rather than comprehensive frameworks like Colorado’s original AI Act.
- Laws Not Yet in Effect May Be Delayed. Several states have AI laws on the books that have not yet taken effect. Some of these laws may be delayed, as the Colorado AI Act was, or their implementation paused, as legislatures reassess whether the original frameworks remain appropriate in light of the changed federal landscape.
- Laws Currently in Effect May Not Be Actively Enforced. Even where AI laws remain technically in force, state attorneys general and regulatory agencies may exercise discretion to deprioritize enforcement, particularly where doing so would put the state at odds with clearly articulated federal policy. The mere existence of a state AI law on the books may not mean aggressive enforcement is forthcoming. Businesses should pay close attention to enforcement actions if and when they arise, to better understand the risks around compliance.
Takeaways
Reassess Your AI Compliance Strategy
Businesses that invested significant resources in preparing for compliance with the original Colorado AI Act, or similar laws in other states, should immediately reassess their compliance roadmap. Compliance programs designed for the original framework may be over-engineered relative to what the new law actually requires. This is an opportunity to right-size AI governance efforts to match the current regulatory reality.
Monitor Federal-State Dynamics Closely
The interplay between federal AI policy and state AI legislation is the most important variable in the country’s AI regulatory landscape right now. Businesses should track not only what individual states are doing, but also how the federal government responds to state-level initiatives. The administration has called for federal AI legislation, and if Congress enacts it, the preemption risk is real. But with national elections on the horizon, further shifts may occur.
Do Not Abandon AI Governance Entirely
While the trend is clearly toward less prescriptive regulation, businesses should not interpret Colorado’s pivot as a signal that AI governance no longer matters. Responsible AI practices, including transparency and meaningful human oversight, remain important for managing legal risk, maintaining consumer trust, and positioning your organization favorably as the regulatory landscape continues to evolve.