The end of the year is generally a time that prompts reflection about the past. But as a forward-thinking organization, we’re going to use this period instead to think about the future.
Specifically, the future of artificial intelligence (AI). We’ve written a great deal over the past few months about all the ways in which AI is changing contact centers, customer service, and more. But the pioneers of this field do not stand still, and there will no doubt be even larger changes ahead.
This piece presents our research into the seven main AI advancements for 2024, and how we think they’ll matter for you.
Let’s dive in!
What are the 2024 Technology Trends in AI?
In the next seven sections, we’ll discuss what we believe are the major AI trends to look out for in 2024.
Bigger (and Better) Generative AI Models
Probably the simplest trend is that generative models will continue getting bigger. With billions of internal parameters, large language models are already pretty big (it’s in the name, after all). But there’s no reason to believe that the research groups training these models won’t be able to keep scaling them up.
If you’re not familiar with the development of AI, it would be easy to dismiss this out of hand. We don’t get excited when Microsoft releases some new OS with more lines of code than we’ve ever seen before, so why should we care about bigger language models?
For reasons that remain poorly understood, bigger language models tend to mean better performance, in a way that doesn’t hold for traditional programming. Writing 10 times more Python doesn’t guarantee that an application will be better – it’s more likely to be the opposite, in fact – but training a model that’s 10 times bigger probably will get you better performance.
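To make that intuition concrete, here’s a minimal sketch of the kind of power-law scaling curve researchers have reported, in which predicted loss falls smoothly as parameter count grows. The constants are only loosely based on published estimates, so treat the numbers as illustrative rather than authoritative:

```python
# Toy illustration of a power-law scaling curve; the constants are
# loosely based on published estimates and are NOT measured values.
def loss(num_params: float, alpha: float = 0.076, n_c: float = 8.8e13) -> float:
    """Predicted test loss as a pure power-law function of parameter count."""
    return (n_c / num_params) ** alpha

for n in [1e9, 1e10, 1e11, 1e12]:  # 1B -> 1T parameters
    print(f"{n:.0e} params -> predicted loss {loss(n):.3f}")
```

Each tenfold jump in size shaves the predicted loss down by a roughly constant factor, which is exactly the "bigger is better" pattern that has held so far.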
This is more profound than it might seem at first. If you’d shown me ChatGPT 15 years ago, I would’ve assumed that we’d made foundational progress in epistemology, natural language processing, and cognitive psychology. But, it turns out that you can just build gargantuan models and feed them inconceivable amounts of textual data, and out pops an artifact that’s able to translate between languages, answer questions, write excellent code, and do all the other things that have stunned the world since OpenAI released ChatGPT.
As things stand, we have no reason to think that this trend will stop next year. To be sure, we’ll eventually start running into the limits of the “just make it bigger” approach, but it seems to be working quite well so far.
This will impact the way people search for information, build software, run contact centers, handle customer service issues, and so much more.
More Kinds of Generative Models
The basic approach to building a generative model fits well with producing text, but it is not limited to that domain.
DALL-E, Midjourney, and Stable Diffusion are three well-known examples of image-generation models. Though these models sometimes still struggle with details like perspective, faces, and the number of fingers on a human hand, they’re nevertheless capable of jaw-dropping output.
Here’s an example created in ~5 minutes of tinkering with DALL-E 3:

[Image: sample output generated with DALL-E 3]
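If you’d like to try this yourself, here’s a minimal sketch using OpenAI’s Python SDK as it stood in late 2023. The prompt is just an example, and the interface may change, so check the current documentation:

```python
from openai import OpenAI

client = OpenAI()  # expects an OPENAI_API_KEY in the environment

# Request a single 1024x1024 image from DALL-E 3.
response = client.images.generate(
    model="dall-e-3",
    prompt="A watercolor painting of a lighthouse on a cliff at sunset",
    size="1024x1024",
    n=1,
)

print(response.data[0].url)  # temporary URL where the image can be downloaded
```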
As these image-generation models improve, we expect they’ll come to be used everywhere images are used – which, as you probably know, is a lot of places. YouTube thumbnails, murals in office buildings, dynamically created images in games or music videos, illustrations in scientific papers or books, initial design drafts for cars, consumer products, etc., are all fair game.
Now, text and images are the two major generative AI use cases with which everyone is familiar. But what about music? What about novel protein structures? What about computer chips? We may soon have models that design the chips used to train their successors, with different models synthesizing the music that plays in the chip fabrication plant.
Open Source vs. Closed Source Models
Concerns around automation and AI-specific existential risk aren’t new, but one major new front in that debate is whether models should be closed source or open source.
“Closed source” refers to a paradigm in which a code base (or the weights of a generative model) is kept under lock and key, available only to the small teams of engineers working on it. “Open source”, by contrast, is the opposing philosophy, which holds that the best way to create safe, high-quality software is to disseminate the code far and wide, giving legions of people the opportunity to find and fix flaws in its design.
There are many ways in which this interfaces with the broader debate around generative AI. If emerging AI technologies truly present an existential threat, as the “doomers” claim, then releasing model weights is spectacularly dangerous. If you’ve built a model that can output the correct steps for synthesizing weaponized smallpox, for example, open-sourcing it would mean that any terrorist anywhere in the world could download and use it for that purpose.
The “accelerationists”, on the other hand, retort by saying that the basic dynamics of open-source systems hold for AI as they do for every other kind of software. Yes, making AI widely available means that some people will use it to harm others, but it also means that you’ll have far more brains working to create safeguards, guardrails, and sentinel systems able to thwart the aims of the wicked.
It’s still far too early to tell whether AI researchers will favor the open or the closed-source approach, but we predict that this will continue to be a hotly contested issue. Though it seems unlikely that OpenAI will soon release the weights for its best models, there will be competitor systems that are almost as good, which anyone can download, modify, and deploy. We also think there’ll be more leaks of weights, such as what happened with Meta’s LLaMA model in early 2023.
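To see how low the barrier already is, here’s a minimal sketch of downloading and running an open-weight model with Hugging Face’s transformers library. Mistral-7B is used purely as an example of an openly downloadable model; substitute any open-weight model you have access to:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Mistral-7B is used here purely as an example of an openly downloadable
# model; some "open" models still require accepting a license first.
model_name = "mistralai/Mistral-7B-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("Open-weight models let anyone", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Once the weights are on your machine, nothing stops you from fine-tuning, modifying, or redeploying them, which is precisely what makes this debate so contentious.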
AI Regulation
For decades, debates around AI safety occurred in academic papers and obscure forums. But with the rise of LLMs, all that changed. It was immediately clear that they would be incredibly powerful, amoral tools, suitable for doing enormous good and enormous harm.
A consequence of this has been that regulators in the United States and abroad are taking notice of AI, and thinking about the kind of legal frameworks that should be established in response.
One manifestation of this trend was the parade of Congressional hearings that took place throughout 2023, with luminaries like Gary Marcus, Sam Altman, and others appearing before the federal government to weigh in on this technology’s future and likely impact.
On October 30th, 2023, the Biden White House issued an executive order meant to set the stage for new policies concerning dual-use foundation models. It gives the executive branch roughly a year to produce a sweeping series of reports, with the ultimate aim of creating guidelines for industry as it continues developing powerful AI models.
The gears of government turn slowly, and we expect it will be some time before anything concrete comes out of this effort. Even when it does, questions about its long-term efficacy remain. How will it help to stop dangerous research in the U.S., for example, if China charges ahead without restraint? And what are we to do if some renegade group creates a huge compute cluster in international waters, usable by anyone, anywhere in the world wanting to train a model bigger than GPT-4?
These and other questions will have to be answered by lawmakers and could impact the way AI unfolds for the next century.
The Rise of AI Agents
We’ve written elsewhere about the many ongoing attempts to build AI systems – agents – capable of pursuing long-range goals in complex environments. For all that it can do, ChatGPT is unable to take a high-level instruction like “run this e-commerce store for me” and get it done successfully.
But that may change soon. Systems like Auto-GPT, AssistGPT, and SuperAGI are all attempts to augment existing generative AI models to make them better able to accomplish larger goals. As things stand, agents have a notable tendency to get stuck in unproductive loops or to otherwise arrive at a state they can’t get out of on their own. But we may only be a few breakthroughs away from having much more robust agents, at which point they’ll begin radically changing the economy and the way we live our lives.
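To make that failure mode concrete, here’s a deliberately simplified sketch of an agent loop with a naive guard against repeated actions. The call_llm and run_tool functions are hypothetical stand-ins, not any particular framework’s API:

```python
import random

def call_llm(goal: str, history: list[str]) -> str:
    """Hypothetical stand-in for asking a model to choose the next action."""
    return random.choice(["search_web", "write_copy", "check_inventory", "finish"])

def run_tool(action: str) -> str:
    """Hypothetical stand-in for actually executing the chosen tool."""
    return "DONE" if action == "finish" else f"result of {action}"

def run_agent(goal: str, max_steps: int = 20) -> str:
    history: list[str] = []
    recent: list[str] = []
    for _ in range(max_steps):
        action = call_llm(goal, history)
        # Crude guard against the unproductive loops agents often fall into:
        if recent[-3:].count(action) >= 2:
            history.append(f"NOTE: '{action}' keeps repeating; try something else.")
            continue
        recent.append(action)
        observation = run_tool(action)
        history.append(f"{action} -> {observation}")
        if observation == "DONE":
            return "goal achieved"
    return f"gave up after {max_steps} steps"

print(run_agent("run this e-commerce store for me"))
```

Real systems use far more sophisticated planning and memory, but the core challenge is the same: deciding what to do next, noticing when progress has stalled, and recovering gracefully.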
New Approaches to AI
When people think about “AI”, they’re usually thinking of a machine learning or deep learning system. But these approaches, though they’ve been very successful, are but a small sample of the many ways in which intelligent machines could be built.
Neurosymbolic AI is another. It typically combines a neural network (like the ones that power LLMs) with a symbolic reasoning system able to make arguments, weigh evidence, and do many of the other things we associate with thinking. Given the notable tendency of LLMs to hallucinate plausible-sounding but false information, neurosymbolic scaffolding could make them far more reliable and useful.
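As a toy illustration of the idea, here’s a sketch that pairs a hypothetical LLM call with an exact symbolic checker, so simple arithmetic claims are verified rather than taken on faith. The ask_llm function is a placeholder, and real neurosymbolic systems are far more sophisticated:

```python
import re

def symbolic_check(answer: str) -> bool:
    """Verify any 'a + b = c' claims exactly instead of trusting the model."""
    for a, b, c in re.findall(r"(\d+)\s*\+\s*(\d+)\s*=\s*(\d+)", answer):
        if int(a) + int(b) != int(c):
            return False
    return True

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM call."""
    return "123 + 456 = 579"

draft = ask_llm("What is 123 + 456?")
if symbolic_check(draft):
    print(draft)
else:
    print(ask_llm("Your arithmetic was wrong; please redo it."))
```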
Causal AI is yet another. These systems are built to learn causal relationships in the world, such as the fact that dropping a glass on a hard surface will cause it to break. This, too, is a crucial capability missing from current AI systems.
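That glass-dropping example can be made concrete with a toy structural causal model, where an intervention (forcing the glass to be dropped) changes the distribution of outcomes in a way mere observation would not. This is only an illustrative sketch, not how production causal AI systems are built:

```python
import random

def simulate(drop: bool | None = None) -> dict:
    """One sample from a toy structural causal model of glass-dropping."""
    dropped = (random.random() < 0.1) if drop is None else drop  # do(drop) overrides nature
    broken = dropped and (random.random() < 0.9)                 # breakage depends on dropping
    return {"dropped": dropped, "broken": broken}

# Compare the observational world with the intervention do(dropped = True):
observational = [simulate() for _ in range(10_000)]
interventional = [simulate(drop=True) for _ in range(10_000)]
print(sum(s["broken"] for s in observational) / 10_000)   # ~0.09
print(sum(s["broken"] for s in interventional) / 10_000)  # ~0.9
```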
Quantum Computing and AI
Quantum computing represents the emergence of the next great computational substrate. Whereas today’s “classical” computers rely on lightning-fast transistor operations, quantum computers exploit quantum phenomena, such as entanglement and superposition, to tackle certain problems that would take even the grandest classical supercomputers millions of years.
Naturally, researchers started thinking about applying quantum computing to artificial intelligence very early on, but it remains to be seen how useful it will be. Quantum computers excel at certain kinds of tasks, especially those involving combinatorics, optimization, and linear algebra. The last of these undergirds a huge amount of AI work, so it stands to reason that quantum computers will speed up at least some of it.
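Entanglement and superposition are themselves just linear algebra on state vectors, which is part of why the overlap with AI workloads seems plausible. Here’s a tiny classical NumPy simulation of a two-qubit Bell state, offered purely as an illustration of that connection:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])               # controlled-NOT gate

zero = np.array([1.0, 0.0])                   # the |0> state
state = np.kron(H @ zero, zero)               # superpose qubit 1, attach qubit 2
state = CNOT @ state                          # entangle the pair into a Bell state

print(np.round(state**2, 3))  # [0.5 0. 0. 0.5]: only |00> and |11> are ever measured
```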
AI and the Future
It would appear as though the Pandora’s box of AI has been opened for good. Large language models are already changing many fields, from copywriting and marketing to customer service and hospitality – and they’ll likely change many more in the years ahead.
This piece has discussed a number of the most important trends in the AI industry to look out for in 2024, and should help anyone interfacing with these technologies prepare for what may come.