Silicon Flatirons: The Ethics of AI and Privacy: Where to, now?
On April 1, Silicon Flatirons held the final event of its 2024-25 Ethics Series, with the evening's discussion addressing some of the issues swirling around the ethics of artificial intelligence and privacy.
CU Law School professor and director of the Samuelson-Glushko Technology Law and Policy Clinic (TLPC) Vivek Krishnamurthy served as moderator for the evening, which featured five presenters: Stevie DeGroff, First Assistant Attorney General, Technology & Privacy Protection, Colorado Attorney General's Office; Calli Schroeder, Senior Counsel and AI/Human Rights Program Lead, Electronic Privacy Information Center (EPIC); Kathleen McInerney, founder of Informed Growth Strategies; Margot Kaminski, professor at CU Law School and director of the Privacy Initiative at Silicon Flatirons; and Shelby Dolen, senior associate at Husch Blackwell.
The discussion centered on the ethical implications of AI, with Schroeder emphasizing the need to balance innovation with human rights and privacy concerns. She also called for more attention to AI's environmental harms and its potential for significant job displacement.
Kaminski related how she recently had only an hour to summarize the EU AI Act - an impossible task - and that here at Silicon Flatirons she had just six minutes. Speaking quickly, she discussed the Act, its background, and the incredibly fast and chaotic timeline - at least by EU standards - in which it was hatched. One of Kaminski's especially interesting points was that the EU AI Act was developed around the idea that AI tools are built for specific purposes, unlike the general-purpose tools that have been dominant since ChatGPT made its public debut in late 2022.
DeGroff's focus was the Colorado AI Act and the likelihood that at least a few amendments to it are coming. She also addressed the work of Colorado's AI Impact Task Force, and how AI neutrality is a difficult concept, given the historical biases that have been built into generative artificial intelligence.
The engaging group discussion also addressed the many challenges of standardizing AI regulation across states; the importance of data governance, transparency, and accountability; and the need for international cooperation. Two intriguing topics touched on only briefly were the potential for AI recalls (akin to automobile recalls) and the role of military applications in AI development - much of which, the panel noted, remains unknown.
McInerney drew on her experience in Brazil, pointing out that governments look at AI through the lens of harms vs. opportunities, with context determining everything. That is, how does one define a harm? Might something that poses no harm to an individual nonetheless be a harm to an authoritarian regime? (Authoritarian regimes want to know.)
Dolen finished up the panel's prepared remarks with a comprehensive review of the various states' attempts to enact AI-related legislation - and the results of all that activity. Strikingly, a total of 479 bills (at last count) have been introduced in 45 states, with only 10 of them enacted. Perhaps just as surprising, seven of those 10 were enacted only the week before last. Dolen concluded her remarks with an important question that drew a collective "Hmmm..." from the audience: "Do users have a duty to use AI if it saves their clients money?"
It's a good guess that you've heard (or said) more than once, "This meeting could have been an email." Well, this two-hour Silicon Flatirons discussion could have been a two-day (or even two-week) event. When it comes to AI, ethics, and privacy concerns, there are so many issues to address - and from so many perspectives - that in a mere two hours, it's hard to do more than ask some important questions.
The speakers at this event asked a lot of good questions, and I know that those of us in the audience are already looking forward to continuing to drill down on the issues - and maybe even discovering possible solutions - in the upcoming 2025-26 season of the Silicon Flatirons Ethics Series.