The Big Picture: ChatGPT is two years old. Where do we stand, what's up with AGI, and where should our focus be right now?
Love it or hate it, ChatGPT has been out there, living – or at least, working – amongst us for two years now.
Yes, it was back on November 30, 2022 that OpenAI launched ChatGPT into the world, and I think at this point we’d all agree that the world will never be the same. Over the last two years, ChatGPT and its competitors have been making steady – and, at times, remarkable – progress.
It’s been a busy two years. To refresh everyone’s memories as to what that GenAI progress has included, here’s a high-level recap of just some of what ChatGPT and the other foundation models have achieved:
· Significant advancements in natural language understanding and accuracy
· Game-changing multimodal capabilities, including images, audio, and video
· Widespread integration into a diversity of applications, ranging from enterprise tools such as Microsoft Office to thousands of home-grown apps via APIs
· Continuous fine-tuning of the models for personal and organizational use
No surprise to those who’ve been paying attention: all this dramatic progress in the strength and abilities of GenAI tools has led to equally dramatic adoption by organizations, with “regular use” doubling earlier this year to 65% from its level just 10 months prior, according to McKinsey.
Or, as Azeem Azhar recently wrote on his Substack, Exponential View, “Foundation AI models are progressing on a smooth exponential, getting better and better at benchmarks and surpassing human capability in many areas, including coding and maths. We’re learning how to make frontier models smaller and more efficient after we’ve developed them, leading to a proliferation.” (Note that Azhar was quoting from one of his prompts to ChatGPT in a “discussion” they had about the chatbot’s second birthday.)
For another take, see Clint Boulton’s article in Forbes last week. It’s titled, “As ChatGPT Turns Two, AI Innovation Is Thriving,” and this – please forgive the pun – was its meta-message:
The pace of GenAI innovation has been unprecedented. In 2024 alone, OpenAI broke new ground in LLM reasoning, while Meta introduced the first open frontier class model. Google meanwhile conjured a breakthrough in GenAI-fueled podcasting. And Anthropic launched tools that help users create and modify content in a separate window, as well as the ability for computers to use computers.
Finally, Ethan Mollick, Wharton professor and author of the bestselling book, Co-Intelligence, recently weighed in on the issue of where we are – and where we’re headed – in an excellent Substack post, “The Present Future: AI's Impact Long Before Superintelligence.” Mollick points to posts from OpenAI’s Sam Altman and Anthropic’s Dario Amodei, in which each one discusses Artificial General Intelligence (AGI) and how soon it’s likely to make its appearance.
As a side note, neither of these CEOs uses the term AGI, with Altman choosing “superintelligence” and Amodei preferring “powerful AI.” Regardless, it’s clear that both are fixated on this concept, with Altman stating that it could arrive within “a few thousand days,” which suggests that we could see AGI within eight to 10 years.
For his part, Amodei drafted a balanced and thoughtful 13,000-word opus in which he discusses all the positive things that might be possible with “powerful AI.” He has the highest hopes for the realms of biology, health and neuroscience, while being somewhat less optimistic – but still hopeful – about AI’s positive impact on economic development, poverty, democracy, and governance. Amodei addresses the issue of how work and meaning will fit into this new world, but he readily admits that these topics are the ones he feels most uncertain about.
And yet, as interesting as it is to discuss AGI and what dramatic changes it’s likely to bring to the world, there’s a certain amount of jumping the gun going on here. Yes, AGI is what all the Masters of the AI Universe are focused on, and understandably so. They realize that they’re operating in a zero-sum, winner-take-all contest, and that those who cross the AGI finish line first have the best chance of taking all the spoils.
Of course, even if they believe there’s room for more than one at the top – as there typically is in business – there are no guarantees of that, and besides, it’s important to have a rallying cry to keep the troops motivated.
However, for the rest of us, focusing only on what’s around the next corner (and the one after that) makes as much sense as skipping Thanksgiving dinner because we’re solely focused on enjoying next year’s Fourth of July BBQ.
As Mollick points out, “…in many ways, we do not need super-powerful AIs for the transformation of work. We already have more capabilities inherent in today’s Gen2/GPT-4 class systems than we have fully absorbed. Even if AI development stopped today, we would have years of change ahead of us integrating these systems into our world [emphasis added].”
The professor followed that up by putting Anthropic’s Claude to work on a variety of creative tasks that went far beyond what the average GenAI user currently does with a chatbot (including having a Zoom call with a HeyGen avatar, as seen in the image above). Clearly impressed with what Claude was able to do, Mollick offered a striking takeaway: “These capabilities demand immediate attention to both policy and practice. Even as imperfect as they are, current AI systems are already reshaping fundamental aspects of work – from how we monitor safety to how we conduct meetings.”
He went on to discuss the tough work that we have in front of us right now: “The choices organizations make today about AI deployment will set precedents that could echo for a long time. Will AI-powered monitoring be used to mentor and protect workers, or to impose algorithmic control? Will AI assistants augment human capability, or gradually replace human judgment?” Mollick ended with this:
“Organizations need to move beyond viewing AI deployment as purely a technical challenge. Instead, they must consider the human impact of these technologies. Long before AIs achieve human-level performance, their impact on work and society will be profound and far-reaching. … The urgent task before us is ensuring these transformations enhance rather than diminish human potential, creating workplaces where technology serves to elevate human capability rather than replace it. The decisions we make now, in these early days of AI integration, will shape not just the future of work, but the future of human agency in an AI-augmented world [emphasis added].”
That’s a tall order. And with all that in mind, while we’re impatiently waiting for a sumptuous feast to take place at some unknown point in the future – maybe sometime in the next few years, but maybe not – we’ve got a whole lot on our plates right now. It deserves our attention.