The rapid rise of AI agents has sparked urgent legal and policy debates distinct from broader AI regulation. At the latest Silicon Flatirons event at the University of Colorado Law School, the Artificial Intelligence, Autonomous Systems, and Law Conference on March 7, experts gathered to tackle one of the hottest AI questions of the day: What legal issues does the advent of AI agents raise, and how should we address them?
The second panel of the day-long event, moderated by Professor Harry Surden, specifically explored whether we should regulate these autonomous AI systems differently from AI in general. The three panelists were CU Law professor Calli Schroeder, Elon University’s David Levine, and Returned.com founder and CEO Paul Lin. Their insights provide a useful framework for evaluating Colorado’s approach to AI regulation and whether the state is getting it right.
Why AI Agents Are Different
Unlike traditional AI models, which primarily assist with content generation or data processing, AI agents are designed to operate with greater autonomy: they can interpret requests, make plans, execute decisions, and even act without direct human oversight.
Lin provided a real-world example, explaining how he created an AI-powered digital twin of his voice to handle customer service calls on his behalf, thus illustrating how AI agents are already being used in ways current laws may not have anticipated.
These fundamental differences raise significant regulatory questions: Should AI agents be subject to stricter rules, given their ability to act independently? Do existing legal frameworks account for the ways in which AI agents can make autonomous choices that impact individuals and businesses? How should we frame the debate around liability, oversight, and transparency?
Three Approaches to AI Agent Regulation
One takeaway from the discussion was that there may be three distinct methods of regulating AI agents:
- A tool-based approach, which focuses on regulating the AI agent itself, including how it is trained, what data it uses, and whether the operator fully understands its risks.
- A risk-based approach, which is preventive in nature, categorizing AI systems by risk level (e.g., low, medium, high) and applying stricter rules to high-risk applications (such as those involving health care, employment, housing, financial, and legal services).
- A use-based approach, which is more reactive and focuses on what the AI does and its real-world outcomes – but the challenge, as Surden pointed out, is that this approach may require waiting until harm has already occurred before regulation can step in.
Colorado’s Consumer Protections for Artificial Intelligence Act, which goes into effect in February 2026, primarily focuses on risk-based regulation – placing obligations on AI developers and deployers to ensure transparency, avoid discrimination, and exercise reasonable care. However, the law does not explicitly distinguish AI agents from other AI systems, raising questions about whether it will sufficiently address the unique challenges posed by AI-driven autonomous decision-making.
Do We Need New Laws, or Are Existing Laws Enough?
Panelists debated whether AI agents create new risks or simply expand existing risks. Schroeder argued that many of these risks could be handled with current laws, but she indicated that those laws are not consistently enforced, making it difficult to rely on them alone.
Levine pointed out that, even if AI laws become outdated quickly, they still serve as a signal to the market, helping companies anticipate future compliance needs.
The panel also touched on whether AI systems should be required to disclose themselves. Levine noted that people can feel offended or misled when they unknowingly interact with AI. While this may seem like a minor issue, disclosure requirements are often seen as "low-hanging fruit" in AI regulation, among the easiest rules to enact.
Is Colorado’s Law Enough, or Does It Need Amendments?
One of the key challenges in regulating AI is how quickly the technology evolves. Surden pointed out that earlier AI regulations worked well when AI models were designed for specific industries, but today’s general-purpose AI models create a new level of complexity. A risk-based structure may need adjustments to account for this shift.
When Colorado passed its AI law, it intentionally set a delayed effective date to allow ample time for amendments after gathering feedback. This built-in flexibility suggests that lawmakers recognized that adjustments might be necessary. Given the distinction between AI in general and AI agents, Colorado lawmakers may need to consider amendments that specifically account for autonomous decision-making capabilities.
Schroeder and Levine noted that the tension between "signaling" and being outdated on Day One is a major challenge for AI regulation. Colorado may need to strike a balance between these approaches, ensuring that its law sends a clear message to companies, while remaining adaptable enough to handle AI’s rapid evolution.
Where Should AI Regulation Go from Here?
My past writings on AI regulation have generally supported a slow, cautious approach, favoring careful, incremental steps over broad, sweeping laws. But after this Silicon Flatirons discussion, I see a new argument emerging: Even if regulation is imperfect, it can help shape AI development by signaling to companies what’s expected.
The biggest open question is whether Colorado’s AI Act is too narrow: it focuses only on risk levels, when tool-based and use-based regulation may also be needed. Will lawmakers adjust the law before it takes effect in just over 10 months, or will they take a wait-and-see approach?
As Surden mentioned at the event, one thing is certain: "It’s no longer the future. This stuff is here." AI agents are already transforming industries, and the legal system needs to catch up – carefully, and with clarity.