Exploring AI at a Mile High

The Existential Angle: What, me worry?

Dave Jilk

Boulder, Colorado

Posted on Mar 15, 2025

This is the first in a series of columns about artificial intelligence and human destiny. It will cover both the existential threats to our civilization and the tremendous opportunities that could emerge.

Let’s start with the elephant in the room. It seems rather silly, doesn’t it? To waste time and energy fretting about AI taking over the world, or killing us all?

To be sure, the impressive recent progress in AI technology suggests that it could be a potent weapon in cyberwarfare, or create problematic labor market disruptions, or lead to other global instabilities. But these concerns are typical of many major technological advances, and even if “this time it’s different” the associated challenges remain within the scope of human experience. We have eighty years of nuclear weapons threatening a factory-reset of human civilization, labor market participation is already in a long, slow decline, and the state of geopolitics is not exactly comforting. AI might make a mess, but we’ll get through it.

The most extreme scenarios just don’t seem that plausible to a casual observer who has not guzzled the Kool-Aid. After all, where is my autonomous car? Yes, Waymo is starting to work, but that took ten years longer than expected. The 70-year history of artificial intelligence is one of hype and unrealized extrapolation. The software and apps I use day-to-day are persistently buggy. You’re telling me that somehow an AI system is going to avoid crashing long enough to vanquish all humanity? At best this seems like the handwringing of Chicken Little, at worst the prophecies of a doomsday cult – and certainly not a sober analysis of the technology arc.

"...it does seem inevitable that someone will eventually crack the code of intelligence and build autonomous minds that rely on silicon circuits instead of neurons."

And even if we suppose that researchers really are on a path to developing superintelligent software agents or robots in our lifetime, what can we even do about it? Boycott Claude? Call our congressional representatives? As immaterial as those efforts would be, they would have zero effect on what other countries might do, or for that matter, on the pasty teenager down the street building AI in the basement. And again, because the daily experience of AI seems less like an existential threat and more like just another groundbreaking but immature technology, efforts to stop the hypothesized apocalypse would seem like standing on a street corner with a sign that says, “The End is Near,” more off-balance than sincere.

With all that said, it does seem inevitable that someone will eventually crack the code of intelligence and build autonomous minds that rely on silicon circuits instead of neurons. While some disagree, the computational theory of mind (CTM) is relatively widely accepted. This conjecture says that the mind is fundamentally computational, that the underlying processes that give rise to a mind can be performed – or at least satisfactorily simulated – on a computer.

Rather than jump into counterarguments to CTM, let’s just roll with the theory and see where it leads. Questions of intelligence and mind are frontiers of research, and tremendous progress in cognitive neuroscience in recent decades has led to a rough consensus architecture of memory, perception, attention, and executive control. The “reference implementation” of intelligence, as I like to call the brain, is progressively yielding to scrutiny.

Coming up the other side of the pass, approaches to artificial intelligence that incorporate various bits and pieces of what has been learned about the brain show great promise. Artificial systems that perform visual perception, learn and play games, and, most recently, translate texts and engage in interactive linguistic discourse have all reached human or superhuman levels in their respective problem domains. This was accomplished by (among other things) poaching findings from neuroscience and brilliantly adapting them to a computational environment.

"Short of severe setbacks to civilization, the work and progress will continue. Assuming that CTM is correct, a system equal or superior to humans in all aspects of cognition and agency will eventually be built. What then?"

These efforts aren’t going to end. Even if there isn’t money or power in it (and it sure seems like there is), scientists want to know how minds and brains work. Richard Feynman wrote, “What I cannot create, I do not understand,” and that sentiment underlies continuing efforts to build artificial intelligence that matches the capabilities of the human brain and mind. Short of severe setbacks to civilization, the work and progress will continue. Assuming that CTM is correct, a system equal or superior to humans in all aspects of cognition and agency will eventually be built. What then?

Note that an artificial intelligence based entirely on computation, software, and data can be easily copied. This has vast implications. It can be backed up, and therefore never needs to “die.” It can reproduce with perfect fidelity. It can be stored indefinitely without loss. It can be transmitted (really a form of copying) at the speed of light. Even if such an AI has merely “human-level” capabilities, these are overwhelming advantages relative to humans. Consequently, humans are unlikely to be able to manage or control such a system – or agent – for long.

This situation does not require an “intelligence explosion” or the “singularity” you may have heard about, where the AI rapidly improves itself until its abilities are incomprehensible to us. Human-level is probably enough, and in any case the narrower technologies already in existence exceed human capabilities in their domains. It is difficult to see how we could stop it from doing whatever it wants, at least in the digital world, without permanently unplugging all the computers. Remember, your phone, your car, your watch, even possibly your thermostat and light bulbs all contain general-purpose computers.

I won’t elaborate on the path from that point through various apocalyptic or utopian scenarios. You can imagine them, or there is plenty of science fiction on the topic. But it’s clear that we are potentially in a heap of trouble if all this comes to pass. More optimistically, it could also mean that great improvements in our lives and society are in store. It depends on what the AI wants, and what it decides to do.

"How far away is this 'loss of control' scenario? Among those who have no fundamental objection to the possibility of fully human-level AI, estimates range from just a few years to a few decades."

What will it want to do? A new field called AI Alignment endeavors to figure out how to create, train, or influence AI so that it wants the same things humans want, or at least the most important things. You may have encountered this idea in reference to pedestrian issues like ensuring that autonomous vehicles do not run over, well, pedestrians, or that chatbots do not make offensive statements. It is the same idea, although when the stakes go way up the methods may also have to change. We will explore alignment in future columns.

How far away is this “loss of control” scenario? Among those who have no fundamental objection to the possibility of fully human-level AI, estimates range from just a few years to a few decades. The capabilities of large language models caught a lot of people by surprise, including me, and many have moved their estimates closer. Still, recent projections in the news that “AGI” is only twelve to twenty-four months away mostly refer to a different meaning of “AGI” than the term implied a few years ago. As part of its hype cycle, the AI field has a history of changing the meaning of terms, especially those that originally meant “fully human-level.” In any case, future columns will discuss the state of the art and the technical issues that remain to be solved. We will also look at CTM and some of the reasoning for and against.

At this point you may think I have been arguing that, contrary to my introduction, you should in fact be worried about the existential threats of artificial intelligence, and maybe even advocate for action. Not so. I don’t think there is much people can do, aside from engaging in alignment efforts, or creating marginal delays in progress – and delay may actually make things worse for our prospects. Instead, I hope to convey that you may want to follow these great issues, and take them seriously, simply because you may be alive when the single most consequential series of events in human history occurs. 

It’s worth understanding what the doomers and accelerationists are claiming, and especially what assumptions they are making. It’s worth tracking what we understand about alignment and what we don’t. It’s worth having a sense of when it all might happen. And if you don’t buy any of it, it’s worth knowing exactly where you think the chain of reasoning fails instead of simply scoffing. Because even if you don’t find large language models disconcerting, I promise there are capabilities coming down the pike that will give you pause.
