Exploring AI at a Mile High

Brown's Law: Take it slow with AI regulation

Chris Brown

Boulder, Colorado

Last updated on Aug 21, 2024

Posted on Aug 1, 2024

In May of this year, Colorado became one of the first states in the nation to pass comprehensive legislation aimed at “high-risk artificial intelligence systems.” Since then, there has been a flurry of opinions on the law, which has a delayed effective date of February 1, 2026, to allow time for revisions in the meantime.

Background

In summary, the law requires developers and deployers of “high-risk” AI systems to use reasonable care to protect consumers from algorithmic discrimination. As Phil Nugent described it, “when an AI tool is used in making a major decision, such as one affecting someone’s career, finances, health, or life, the business that has developed that tool, as well as the business using (or deploying) it, needs to comply with certain provisions in the Act to ensure that the tool does not rely on biased data to make its decisions.” The bill is known as the Consumer Protections for Interactions with Artificial Intelligence Act, which you can read in full here.

Inherent in the law is a tension between fostering advancements in technology and protecting consumers and others in the marketplace. The resulting tug of war is far from over, and personally, I have yet to decide whether the law strikes the right balance.

Advantages of the law

On the one hand, I believe strongly that consumers (and especially the least sophisticated consumers, as well as individuals who fall into a protected class based on age, race, or other characteristics) deserve protections to prevent widening disparities in our society.

In furtherance of these goals, the law requires developers and deployers to disclose the benefits and risks of their systems and to implement risk management policies. These disclosures must be transparent enough to give the public a clear understanding of how a system works and how it might go wrong.

In theory, this level of disclosure should enable consumers to be informed about the benefits and risks, and to be able to choose for themselves whether to engage with the system.

Disadvantages of the law

On the other hand, I believe strongly that technology has improved daily life, and will continue to do so, for nearly all humans: not equally, and not at the same time, but the general movement should continue to be in a positive direction for most people.

And I believe a heavy-handed response can stifle the development of new technologies and set us back in reaching milestones that could help millions or even billions of people. For example, see my recent opinion piece on how AI can revolutionize the practice of law and access to legal services.

Colorado’s new AI law is both broad and vague. It may cause the state’s businesses to take a slower approach to AI development than the circumstances warrant, and the law’s documentation requirements, annual reviews, impact assessments, and other obligations may extend those delays further. These burdens fall especially hard on new, small companies that lack the financial and time resources to execute those processes.

A patchwork of AI laws can’t be the solution

Looking at this from a higher level, the U.S. is headed toward a patchwork of AI-focused laws created by individual states (not to mention the many international laws). While there are some advantages to having many approaches (for example, some states have income taxes, while others don’t), the U.S. must address AI and privacy regulation at the federal level to avoid the headaches of complying with 50 different laws pursuing 50 different sets of policy goals.

Adding to the complexity, as Grant Gross discusses in his article in CIO, states are taking three different approaches to AI regulation: (1) bills focused on transparency, typically of both the development of the AI and its use, and sometimes specifically related to political campaigns; (2) bills focused on high-risk uses of AI, which generally means an AI tool’s influence on major life issues and decisions; and (3) broad-based bills, such as Colorado’s, which can include an emphasis on transparency, preventing bias, requiring impact assessment, and providing for consumer opt-outs.

Needless to say, these approaches do not always align, and maintaining compliance across them can be difficult and expensive. This patchwork of laws will be especially harmful for early-stage startups and small businesses that do not have the same resources as larger, established companies.

My position: Take it slow, and let’s wait and see

I mentioned above that my position is to take a light approach to regulating AI developments. We should create reasonable restrictions around serious discrimination concerns (health care, education, finance, etc.), but those restrictions should be narrowly tailored and time-limited. Additionally, I would like to see the relevant parties come back to the table every six months to renegotiate updates to the law based on what we’ve learned in the interim.

But apart from these high-risk AI systems, we should take a much lighter approach that would allow for the fast-paced development of AI systems, while carefully monitoring their impact on consumers and the marketplace. At the same time, we need quick and efficient means to stop or curtail their development if the disadvantages begin to outweigh the advantages of particular systems. This hybrid approach will provide our marketplace with a reliable means to push technology forward, without abandoning the protections that certain individuals and groups deserve in a society where equity should be a focus for all of us.

At all times during this development and evaluation process, we must accept that there will be some risk; we cannot eliminate it all. The question needs to be whether the new AI approach is better than the current status-quo approach. For example, if using ChatGPT to learn about the law provides information that is equal to or better than searching Google or Reddit, then we should not limit or prohibit the use of ChatGPT for that purpose. Sure, it will make errors, but searching Google and Reddit is rife with errors when it comes to information about the law, and yet those sources are referenced all the time.

What’s next: This law is not final

When Colorado Governor Jared Polis signed the law on May 17, he noted concerns about the negative impact the law could have on technology companies in Colorado. He also emphasized that the delayed effective date would give the interested parties (consumers, developers, deployers, policymakers, and others) sufficient time to fine-tune the law to align with their varied needs and policy goals. I strongly agree with that approach. We’ve set a baseline, and now we need to determine what needs to be improved and how to make those improvements.

Check back here for updates on changes to the law (and maybe some additional opinion from me). We are sure to see many new developments in the months and years to come.

For the time being, I recommend that you keep exploring what AI can do for you, while remaining cautious and aware of its limitations and potential for discriminatory results.
