California shuns AI safety bill, but Colorado may hold the answer
In recent months, the dialogue around AI regulation has intensified as states grapple with balancing innovation and safety. Colorado and California have emerged as leaders in this conversation, with both states taking steps toward implementing AI regulations. Governor Gavin Newsom’s veto on Sunday of a sweeping AI safety bill in California (SB 1047) highlights the tensions between regulation and innovation, tensions with which Colorado has also been grappling.
California’s Veto and Industry Pushback
California’s AI safety bill would have imposed liability on developers for the harms caused by their software, as well as required developers to include “kill switches” for certain AI systems, among other provisions. California State Senator Scott Wiener, a co-author of the bill, stated that it was intended to serve as a “light touch, commonsense measure [to codify] commitments that the largest AI companies have already voluntarily made.”
However, industry players disagreed. Their pushback focused on concerns about the bill’s potential to stifle innovation and drive companies out of California, in addition to ambiguity within the bill’s text, which they feared would lead to expensive litigation. Consequently, Newsom vetoed the bill but vowed to address AI regulations during the next legislative session.
Such pushback, including from heavyweights like Google, OpenAI, Meta, and Andreessen Horowitz, underscores a broader debate: How can states protect the public from potential risks without strangling the emerging AI industry? I certainly don’t have any foolproof answers, but I do believe we should take it slow and ensure AI regulations are narrowly tailored to specific use cases. We simply cannot treat all uses of AI the same (some could be gigantic steps forward for society, while others could prove extremely harmful).
The Right Balance
A perfect balance between advancing innovation and safeguarding society may never be achieved. But I do generally like the approach of my home state, Colorado.
Colorado Governor Jared Polis signed SB 205 (Consumer Protections for Artificial Intelligence) in May of this year. While I don’t necessarily agree with all the law’s provisions, I do agree with delaying its effective date until February 2026. By delaying the implementation of the law until that time (still 16 months away), Colorado has given itself the breathing room to work closely with AI developers, ensuring that regulation evolves alongside technological advancements rather than becoming a barrier.
Moreover, the law does not regulate AI as a tool; rather, it regulates specific, “high-risk” applications of AI, meaning AI use cases that can impact “consequential decisions,” such as those pertaining to education, employment, finance, health, and legal services.
This strategy has won over many in the AI industry, which tends to view Colorado as a more favorable environment for growth. Companies can continue developing their models while collaborating with regulators to ensure their systems are safe and transparent. The state’s law also incorporates public input, which may help to maintain public trust without the adversarial tone seen in California.
Similarly, Newsom has signed other bills into law aimed at curtailing the use of AI to distribute fake information in election campaigns and at protecting celebrity voices and likenesses. These narrowly tailored laws are a smart approach to regulating use cases rather than AI generally.
Looking Ahead
The contrast between California’s and Colorado’s AI regulatory paths offers a glimpse into the future of AI governance. As Governor Newsom works with experts to craft future legislation, it will be important for other states to watch how Colorado’s more collaborative approach fares. Will it serve as a model for other states looking to balance AI innovation with public safety, or will California’s more aggressive tactics ultimately prove necessary in the face of the dangers posed by rapid technological advancement?
Ultimately, I would greatly prefer we address AI safety at the national level rather than the state level to avoid a patchwork of AI laws in different states. However, I believe we will be left with state regulations for the foreseeable future due to gridlock in our nation’s capital. As a result, all of us will be watching and playing our own roles in the development of AI guardrails in the states that are leading the way.