There’s something possibly very big in the AI atmosphere right now. It wasn’t born here in Colorado, and it’s not from Silicon Valley, either. Interestingly, it has a strong Boston accent and the whiff of salt air about it.
What does “possibly very big” mean? Is this just AI hype? Hard to say for sure, but know this: Liquid AI, located in Cambridge, Massachusetts, offers generative AI that’s not based on a transformer model, and according to some experts, it could eclipse all the foundation models out there today.
Liquid AI certainly won over VentureBeat. Its headline summarized all the excitement found in the company’s announcement: “MIT spinoff Liquid debuts non-transformer AI models and they’re already state-of-the-art.”
And earlier this week, The Boston Globe covered Liquid AI’s launch presentation at the Kresge Auditorium on the MIT campus. The 2,000 attendees heard the company’s CEO, Ramin Hasani, a research affiliate at the school, proclaim, “This is a completely new way to look at AI systems. We want to change the basis of AI. We are building the most capable, the most efficient AI systems in ways you haven’t seen before.”
As the Globe's story makes clear, while AI has become a multitrillion-dollar phenomenon, “it has also created growing alarm about how much energy it requires at a time when climate change is wreaking havoc.” Liquid AI’s technology, on the other hand, “has the potential to offer the same revolutionary applications” as OpenAI and Google “while using a fraction of the electricity.”
Wired also ran a story on Liquid AI this past week in which it quoted Sébastien Bubeck, a researcher at OpenAI, who said, “The benchmark results for their SLMs look very interesting.” Note that Bubeck works at a direct competitor(!), and that he spent a decade at Microsoft Research before joining OpenAI.
All this sounds promising. And quite possibly, revolutionary.
How big a deal is this?
Unlike most other generative AI models, Liquid AI’s models are not built on the now-omnipresent Transformer architecture that’s found in everything from OpenAI’s ChatGPT to Google’s Gemini to Anthropic’s Claude.
The now-standard deep learning Transformer architecture was outlined in the landmark 2017 paper “Attention is All You Need.” It’s not hyperbole to say that this paper, written by eight Google computer scientists and cited over 100,000 times, revolutionized generative AI.
And yet, in light of all that – and just when you thought the transformer-based Large Language Model (LLM) would remain the dominant AI architecture for some time to come – Liquid AI has pioneered what it calls Liquid Foundation Models, or LFMs.
These LFMs are built for “training at every scale,” as the company puts it, and they have been engineered with offline, on-device use in mind. The new models come in three sizes: 1.3 billion, 3 billion, and 40 billion parameters. (For some perspective, the mid-sized, 3-billion-parameter model will fit on a laptop.)
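How does a 3-billion-parameter model fit on a laptop? A quick back-of-envelope calculation makes it plausible. The sketch below is mine, not a published Liquid AI specification; it counts only the model weights at a few illustrative precisions and ignores the extra memory a running model actually needs.

```python
# Rough memory footprint of a 3-billion-parameter model's weights.
# Assumption (mine, for illustration): weights stored at 16-, 8-, or 4-bit
# precision; activation and runtime memory are not counted.

PARAMS = 3_000_000_000  # 3 billion parameters

for bits in (16, 8, 4):
    gib = PARAMS * bits / 8 / (1024 ** 3)
    print(f"{bits:>2}-bit weights: ~{gib:.1f} GiB")

# Prints roughly 5.6 GiB, 2.8 GiB, and 1.4 GiB -- comfortably within the
# RAM of an ordinary laptop, which is the point of the company's claim.
```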
But what really makes LFMs special? In addition to the promises of improved performance, three things jump out:
1. The offline option minimizes the usual privacy and security concerns that come with using a cloud-hosted GPT, and it opens up AI to those who might be off the grid or may not have an internet connection for whatever reason.
2. There appears to be improved transparency with LFMs, in contrast to the famed “black box” of LLMs. Much about how LLMs actually operate is not well understood by anyone; nobody, not even those behind the curtain, can see exactly how the sausage is made. And yet, according to Wired, the liquid neural networks are “open to inspection in a way that existing models are not, because their behavior can essentially be rewound to see how it produced an output.” (A toy sketch of that “rewind” idea follows this list.)
3. As noted above, the energy consumption of LFMs promises to be lower than that of LLMs, perhaps significantly lower. We still need to hear precise numbers, but the company has made clear that its models use minimal system memory while still delivering strong performance.
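For readers who want a more concrete feel for the “rewind” idea in point 2, here is a deliberately tiny sketch. It is my own toy illustration of a liquid time-constant-style cell, the research lineage Liquid AI grew out of, and not the company’s actual LFM code; the weights, sizes, and integration step are invented purely for demonstration.

```python
# Toy "rewindable" recurrent cell, loosely following the liquid
# time-constant (LTC) dynamics published by Hasani and colleagues:
#     dx/dt = -x / tau + f(x, u) * (A - x)
# Because the hidden state evolves as an explicit trajectory over time,
# every intermediate state can be stored and stepped back through to see
# how the final output came about. This is NOT Liquid AI's LFM code.

import numpy as np

rng = np.random.default_rng(0)

HIDDEN, INPUT = 4, 2
tau = 1.0                      # base time constant (illustrative)
A = np.ones(HIDDEN)            # state bias (illustrative)
W_in = rng.normal(size=(HIDDEN, INPUT)) * 0.5
W_rec = rng.normal(size=(HIDDEN, HIDDEN)) * 0.2


def gate(x, u):
    """Input-dependent term that modulates how fast the state moves."""
    return np.tanh(W_rec @ x + W_in @ u)


def run(inputs, dt=0.1):
    """Euler-integrate the cell and keep the whole state trajectory."""
    x = np.zeros(HIDDEN)
    trajectory = [x.copy()]
    for u in inputs:
        dx = -x / tau + gate(x, u) * (A - x)
        x = x + dt * dx
        trajectory.append(x.copy())
    return np.array(trajectory)


inputs = rng.normal(size=(10, INPUT))
traj = run(inputs)

# "Rewind": walk backward through the stored states and see how much each
# input nudged the hidden state toward its final value.
for t in range(len(inputs), 0, -1):
    change = np.abs(traj[t] - traj[t - 1]).sum()
    print(f"step {t:2d}: state change {change:.3f}")
```

The point is simply that a model whose state unfolds along an explicit trajectory leaves a trail that can be stepped back through, which is the kind of inspectability the Wired piece is alluding to.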
What’s the takeaway?
It’s still early in the process, of course. Much remains to be discovered about whether LFMs can do everything their backers say they can.
But in the big picture, here’s why I see this as important news: The hybrid Toyota Prius, initially introduced in the U.S. in 2000, had plenty of critics, but it helped to convince millions of Americans that the traditional internal combustion engine might not be the only way to power a motor vehicle.
In much the same way, Liquid AI’s Liquid Foundation Models could be the first clear signal to many AI enthusiasts that LLMs might not be the only way to get where we want to go with generative AI. LLMs do a lot of things well, but energy efficiency is definitely not one of them. So what if we weren't always "stuck" with LLMs?
SiliconANGLE’s article on Liquid AI quoted Holger Mueller, an analyst at Constellation Research, who said it’s exciting to see the rapid pace of innovation in AI, with Liquid AI’s first models demonstrating that it’s far from clear who the ultimate winners in this industry will be.
“It’s not only the big players who are capable of fighting for the crown of the best AI model,” said Mueller. “Liquid is coming forward with some impressive models that, by all measures, are better than anything we’ve seen so far. But the AI industry will be pushing on, so they probably won’t remain the best for long. In any case, it’s good to see some diversity in model architectures leveling the playing field.”
Ultimately, in the world of AI, it would be a mistake to assume that the weaknesses of today’s technology will continue to be the weaknesses of what we’re working with tomorrow.