Colorado is pumping the brakes on first-of-its-kind AI regulation to find a practical path forward

Colorado was first to pass comprehensive AI legislation in the U.S. wildpixel/Getty Images

Stefani Langehennig, University of Denver

When the Colorado Artificial Intelligence Act passed in May 2024, it made national headlines. The law was the first of its kind in the U.S. It was a comprehensive attempt to govern “high-risk” artificial intelligence systems across various industries before they could cause real-world harm.

Gov. Jared Polis signed it reluctantly – but now, less than a year later, the governor is supporting a federal pause on state-level AI laws. Colorado lawmakers have delayed the law’s enactment to June 2026 and are seeking to repeal and replace portions of it.

Lawmakers face pressure from the tech industry and lobbyists, as well as practical concerns about the cost of implementation.

What Colorado does next will shape whether its early move becomes a model for other states or a lesson in the challenges of regulating emerging technologies.

I study how AI and data science are reshaping policymaking and democratic accountability. I’m interested in what Colorado’s pioneering efforts to regulate AI can teach other state and federal legislators.

The first state to act

In 2024, Colorado legislators decided not to wait for the U.S. Congress to act on nationwide AI policy. As Congress passes fewer laws due to polarization stalling the legislative process, states have increasingly taken the lead on shaping AI governance.

The Colorado AI Act defined “high-risk” AI systems as those influencing consequential decisions in employment, housing, health care and other areas of daily life. The law’s goal was straightforward but ambitious: Create preventive protections for consumers from algorithmic discrimination while encouraging innovation.

Colorado’s leadership on this is not surprising. The state has a climate that embraces technological innovation and a rapidly growing AI sector. The state positioned itself at the frontier of AI governance, drawing from international models such as the EU AI Act and from privacy frameworks such as the 2018 California Consumer Privacy Act. With an initial effective date of Feb. 1, 2026, lawmakers gave themselves ample time to refine definitions, establish oversight mechanisms and build capacity for compliance.

When the law passed in May 2024, policy analysts and advocacy groups hailed it as a breakthrough. Other states, including Georgia and Illinois, introduced bills closely modeled after Colorado’s AI bill, though those proposals did not advance to final enactment. The law was described by the Future of Privacy Forum as the “first comprehensive and risk-based approach” to AI accountability. The forum is a nonprofit research and advocacy organization that develops guidance and policy analysis on data privacy and emerging technologies.

Legal commentators, including attorneys general across the nation, noted that Colorado created robust AI legislation that other states could emulate in the absence of federal legislation.

Politics meets process, stalling progress

Praise aside, passing a bill is one thing; putting it into action is another.

Immediately after the bill was signed, tech companies and trade associations warned that the act could create heavy administrative burdens for startups and deter innovation. Polis, in his signing statement, cautioned that “a complex compliance regime” might slow economic growth. He urged legislators to revisit portions of the bill.

Polis convened a special legislative session to reconsider portions of the law. Multiple bills were introduced to amend or delay its implementation. Industry advocates pressed for narrower definitions and longer timelines. All the while, consumer groups fought to preserve the act’s protections.

Meanwhile, other states watched closely and changed course on sweeping AI policy. Gov. Gavin Newsom slowed California’s own ambitious AI bill after facing similar concerns, and Connecticut failed to pass its AI legislation amid a veto threat from Gov. Ned Lamont.

Colorado’s early lead turned precarious. The same boldness that made it first also made the law vulnerable – particularly because, as seen in other states, governors can veto, delay or narrow AI legislation as political dynamics shift.

From big swing to small ball

In my opinion, Colorado can remain a leader in AI policy by pivoting toward “small ball,” or incremental, policymaking, characterized by gradual improvements, monitoring and iteration.

This means focusing not just on lofty goals but on the practical architecture of implementation. That would include defining which applications count as high-risk and clarifying compliance duties. It could also include launching pilot programs to test regulatory mechanisms before full enforcement and building impact assessments to measure the effects on innovation and equity. Finally, it could engage developers and community stakeholders in shaping norms and standards.

This incrementalism is not a retreat from the initial goal but rather realism. Most durable policy emerges from gradual refinement, not sweeping reform. For example, the EU’s AI Act is being implemented in stages rather than all at once, according to legal scholar Nita Farahany.

Effective governance of complex technologies requires iteration and adjustment. The same was true for data privacy, environmental regulation and social media oversight.

In the early 2010s, social media platforms grew unchecked, generating public benefits but also new harms. Only after extensive research and public pressure did governments begin regulating content and data practices.

Colorado’s AI law may represent the start of a similar trajectory: an early, imperfect step that prompts learning, revision and eventual standardization across states.

The core challenge is striking a workable balance. Regulations need to protect people from unfair or unclear AI decisions without creating such heavy burdens that businesses hesitate to build or deploy new tools. With its thriving tech sector and pragmatic policy culture, Colorado is well positioned to model that balance by embracing incremental, accountable policymaking. In doing so, the state can turn a stalled start into a blueprint for how states nationwide might govern AI responsibly.

Stefani Langehennig, Assistant Professor of Practice, Daniels College of Business, University of Denver

This article is republished from The Conversation under a Creative Commons license. Read the original article.
