Stanford's AI Index Annual Report is here - can you handle 502 pages of unbiased AI info?
As you may have noticed, you can’t turn around without bumping into yet another report on the state of AI. What’s up, what’s down, what’s in, and what’s out. All good, of course, except when they contradict each other or seem to have a not-so-secret agenda behind their findings.
But every once in a while, a report comes along that’s really worth paying attention to. Something that's produced by a truly respected organization that’s not attempting to use the report to sell you its services or products, or to convince you of anything that it hasn’t carefully researched and documented as fact.
The latest edition of one such report was just published by HAI, the Stanford Institute for Human-Centered Artificial Intelligence. No Johnny-come-lately, the AI Index Annual Report actually predates HAI and has been published by AI researchers at Stanford University since 2017. In fact, one way to measure the explosion of AI over the last seven years is to look at how the length of the report itself has grown: from 101 pages in 2017 to 502 pages this year.
The report’s introduction describes its undertaking in this way: “Our mission is to provide unbiased, rigorously vetted, broadly sourced data in order for policymakers, researchers, executives, journalists, and the general public to develop a more thorough and nuanced understanding of the complex field of AI.” That sounds useful, doesn’t it?
I’m here to tell you that 502 pages is a lot. But there are many pearls of wisdom, invaluable insights, and credible survey results strewn throughout those pages, so the report is truly worth a deep dive. Not everything is a surprise, of course, but even in those cases of “Well, yeah, I already knew that,” it can be useful to have our own impressions of what’s been happening across the AI landscape backed up with real data.
HAI does offer its own list of the report’s top 10 takeaways, and that’s where we’ll focus today.
With that, let’s jump into the 30,000-foot view of the AI Index Annual Report. Although the report runs nine chapters, HAI distills it into 10 top takeaways. I’ll repeat each takeaway here and then provide a brief thought or two of my own.
1. AI beats humans on some tasks, but not on all.
“AI has surpassed human performance on several benchmarks, including some in image classification, visual reasoning, and English understanding. Yet it trails behind on more complex tasks like competition-level mathematics, visual commonsense reasoning and planning.”
This takeaway is no surprise to readers of Colorado AI News. “Some, not all” pretty much describes how AI today stacks up against humans across various performance benchmarks. AI has been hitting a few home runs, to be sure, in addition to a larger number of base hits. Of course, there’s also been a good number of strikeouts. (“Abstract reasoning – what’s that?”)
With this takeaway, we are reminded of the difference between narrow AI (focused expertise) and general AI (broad, human-like capabilities). And yet, it’s still early in the game. Are we even in the second inning yet?
Most experts agree that AGI (artificial general intelligence) is coming. Is it two years, five years, or ten years out? It depends on whom you ask. Apparently, Sam Altman thinks it could arrive in 2025.
2. Industry continues to dominate frontier AI research.
“In 2023, industry produced 51 notable machine learning models, while academia contributed only 15. There were also 21 notable models resulting from industry-academia collaborations in 2023, a new high.”
The high-level takeaway here isn’t a shock, but I am a little surprised that academia had as many as 15 notable ML models compared to industry’s 51. For academia to have produced 29% as many models as industry is impressive when one looks at how much more funding, compute power, and talent the industry giants have had at their fingertips.
Similarly, it’s exciting to see that there was a record number of industry-academia ML collaborations in 2023, as I believe those partnerships are good for the goose and the gander alike.
3. Frontier models get way more expensive.
"According to AI Index estimates, the training costs of state-of-the-art AI models have reached unprecedented levels. For example, OpenAI’s GPT-4 used an estimated $78 million worth of compute to train, while Google’s Gemini Ultra cost $191 million for compute.”
We’ve been hearing a lot about this trend, so no surprises here. And for a company such as OpenAI, which is currently valued at $157 billion, what’s $78 million? That said, this highlights the very real concern that training foundation models will be increasingly limited to a smaller and smaller coterie of companies. This isn’t good for competition, and it’s a guaranteed way of deepening global inequalities in technology.
4. The United States leads China, the EU, and the U.K. as the leading source of top AI models.
“In 2023, 61 notable AI models originated from U.S.-based institutions, far outpacing the European Union’s 21 and China’s 15.”
This is exciting news for those of us here in the U.S. who believe that continued AI innovation in this country is essential, not just for our economy, but also for our national security in the years ahead. Additionally, for all that we've heard about how China is steaming ahead and the EU is lagging, it feels like something of a surprise that the EU came in second place, not only outpacing China, but doing so by a solid margin: 21 notable models to China's 15, or 40 percent more.
5. Robust and standardized evaluations for LLM responsibility are seriously lacking.
“New research from the AI Index reveals a significant lack of standardization in responsible AI reporting. Leading developers, including OpenAI, Google, and Anthropic, primarily test their models against different responsible AI benchmarks. This practice complicates efforts to systematically compare the risks and limitations of top AI models.”
This finding brings attention to an ongoing and potentially troubling concern. How valuable is so-called “responsible AI reporting” without any standardization among the leading developers? Maybe not that valuable at all. In fact, it reminds me of unmonitored drug tests for athletes. (“Guess what? Everyone passed with flying colors!”)
Ultimately, I think most of us would like to see some balance between rapid, unchecked innovation and ethical responsibility. But without any true regulation from governments, it’s unreasonable to think that companies would be willing to hit the brakes unilaterally and risk losing market share to their competitors.
6. Generative AI investment skyrockets.
“Despite a decline in overall AI private investment last year, funding for generative AI surged, nearly octupling from 2022 to reach $25.2 billion. Major players in the generative AI space, including OpenAI, Anthropic, Hugging Face, and Inflection, reported substantial fundraising rounds.”
Octupling…that’s not a word I run into frequently, but maybe that's simply an indication that I haven’t been hanging out with the right crowd. Typically, I would have just rounded up and said, “10x,” but I'm going to start using “octupling”...whenever appropriate, of course.
So, is this skyrocketing investment in GenAI a problem? Only if one thinks that it's utterly foolish and solely driven by hype. Are we running headlong into the AI version of the dot-com bust of 2000? Is it 1996 all over again? Maybe we need former Fed Chairman Alan Greenspan to come back and warn us of our “irrational exuberance.” That’s all quite possible, of course.
But here at Colorado AI News, we're playing the long game, and I’m reminded of Roy Amara’s famous line: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”
7. The data is in: AI makes workers more productive and leads to higher quality work.
“In 2023, several studies assessed AI’s impact on labor, suggesting that AI enables workers to complete tasks more quickly and to improve the quality of their output. These studies also demonstrated AI’s potential to bridge the skill gap between low- and high-skilled workers. Still other studies caution that using AI without proper oversight can lead to diminished performance.”
Overall, this is good to hear. We’ve experienced – and heard – the first two points a lot, and it feels as if they are well documented. Who doesn’t like improved quality and more efficiency, especially at the same time? But we also hear about a potential de-skilling of the workforce and a gutting of lower-level jobs. That's not a good thing.
Finally, there's the matter of "oversight," or the lack thereof. We keep hearing that organizations are paralyzed by their inability to take top-down action regarding AI, even as their employees, students, and others are pushing ahead on their own – all without any guidance, training, or standards. We need to work on that.
8. Scientific progress accelerates even further, thanks to AI.
“In 2022, AI began to advance scientific discovery. 2023, however, saw the launch of even more significant science-related AI applications—from AlphaDev, which makes algorithmic sorting more efficient, to GNoME, which facilitates the process of materials discovery.”
This is some of the most exciting and inspiring news to come out of the AI Index, especially as the big jump seen in 2023 is likely to continue in 2024 and beyond. In fact, it's difficult to imagine what could happen to slow down such continued progress, as scientific discovery would appear to be one of the fields that AI was made for.
9. The number of AI regulations in the United States sharply increases.
“The number of AI-related regulations in the U.S. has risen significantly in the past year and over the last five years. In 2023, there were 25 AI-related regulations, up from just one in 2016. Last year alone, the total number of AI-related regulations grew by 56.3%.”
Should a 56% increase in AI-related regulations be considered a “sharp increase”? In most fields, it certainly would be. But for AI regulation in 2023, after ChatGPT exploded onto the scene in November of 2022, a jump from roughly 16 regulations to 25 almost seems paltry.
On the other hand, it has taken – and it’s still taking – government entities some time to know how to react, and it seems likely that the 2024 numbers will be an even larger leap forward.
10. People across the globe are more cognizant of AI’s potential impact—and more nervous.
“A survey from Ipsos shows that, over the last year, the proportion of those who think AI will dramatically affect their lives in the next three to five years has increased from 60% to 66%. Moreover, 52% express nervousness toward AI products and services, marking a 13 percentage point rise from 2022. In America, Pew data suggests that 52% of Americans report feeling more concerned than excited about AI, rising from 38% in 2022.”
Well, at least many of us seem to be paying attention, and that’s a good thing. We can all agree that unwarranted anxiety is not good. However, given the almost total lack of regulation over an industry that promises to have at least as much of an impact on our daily lives as the auto and airline industries combined, maybe some anxiety over the current free-for-all is, in fact, warranted. Which would suggest that some regulation might not be a terrible thing. After all, reasonable regulation hasn’t killed off either the auto or the airline industry, and frankly, it’s helped to ensure that neither of these industries is killing off all of us, either.