In late 2022, large language models (LLMs) exploded into public awareness almost overnight. But like most overnight sensations, they were a long time in the making, and their history is fascinating and informative.
In this piece, we’ll trace the deep evolution of language models and use this as a lens into how they can change your contact center, today and in the future.
Let’s get started!
A Brief History of Artificial Intelligence Development
The human fascination with building artificial beings capable of thought and action goes back a long way. Writing in roughly the 8th century BCE, Homer recounted tales of the Greek god Hephaestus outsourcing repetitive manual tasks to automated bellows and working alongside robot-like “attendants” that were “…golden, and in appearance like living young women.”
Some 500 years later, mathematicians in Alexandria would produce treatises on creating mechanical servants and various kinds of automata. Heron wrote a technical manual for producing a mechanical shrine and an automated theater whose figurines could stage a full tragic play.
Nor is it only ancient Greece that tells similar tales. Jewish legends speak of the Golem, a being made of clay and imbued with life and agency through language. The word “abracadabra”, in fact, comes from the Aramaic phrase “avra k’davra,” which translates to “I create as I speak.”
Through the ages, these old ideas have found new expression in stories such as “The Sorcerer’s Apprentice,” Mary Shelley’s “Frankenstein,” and Karel Čapek’s “R.U.R.,” a science fiction play that features the first recorded use of the word “robot.”
From Science Fiction to Science Fact
But these ideas remained purely fictional until the mid-20th century, a pivotal moment in the history of LLMs, when advances in the theory of computation and the development of primitive computers began to offer a path to building intelligent systems.
Arguably, the effort began in earnest with the 1950 publication of Alan Turing’s “Computing Machinery and Intelligence,” in which he proposed the famous “Turing test,” and with the 1956 Dartmouth conference on AI, organized by luminaries John McCarthy and Marvin Minsky.
People began taking AI seriously. Over the next ~50 years in the evolution of large language models, there were numerous periods of hype and exuberance in which major advances were made, as well as long “AI winters” in which funding dried up and little was accomplished.
Three advances ultimately brought LLMs into their own: the development of neural networks, the deep learning revolution, and the rise of big data. These are important for understanding the history of large language models, so it’s to these that we now turn.
Neural Networks and the Deep Learning Revolution
Walter Pitts and Warren McCulloch laid the groundwork for the eventual evolution of language models in the early 1940s. Inspired by the burgeoning study of the human brain, they wondered if it would be possible to build an artificial neuron with some of the same basic properties as a biological one.
They were successful, though several other breakthroughs would be required before artificial neurons could be arranged into systems capable of doing useful work. One such breakthrough was backpropagation, first described around 1960 and still the basic algorithm used to train deep learning systems.
It wasn’t until the mid-1980s, however, that David Rumelhart, Geoffrey Hinton, and Ronald Williams popularized backpropagation for training neural networks; in 1989, this allowed Yann LeCun to train such a network to recognize handwritten digits.
Ultimately, it would be these deep neural networks (DNNs) that would emerge from the history of LLMs as the dominant paradigm, but for completeness, we should briefly mention some of the methods they replaced.
One was the family of “rule-based approaches,” which were exactly what they sound like. Early AI assistants would be programmed directly with grammatical rules, which were used to parse text and craft responses. This was just as limiting as you’d imagine, and the approach is rarely seen today except in the most straightforward of cases.
Then, there were statistical language models, which bear at least a passing resemblance to the behemoth LLMs that came later. An n-gram model, for example, tries to predict the probability of the next word given the n-1 words that came before it. If you read our deep dive on LLMs, this will sound familiar, though these models were nowhere near as powerful and flexible as what’s available today.
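To make the idea concrete, here’s a minimal sketch of a bigram model (n = 2), estimated by simply counting word pairs in a toy corpus. Real statistical language models used far larger corpora plus smoothing techniques, but the principle is the same:

```python
from collections import Counter, defaultdict

# Toy corpus; a real model would be estimated from millions of sentences.
corpus = "the cat sat on the mat and the cat slept".split()

# Count how often each word follows each other word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word_probs(prev):
    """Estimated distribution over the next word, given the previous one."""
    total = sum(counts[prev].values())
    return {word: c / total for word, c in counts[prev].items()}

print(next_word_probs("the"))  # {'cat': 0.67, 'mat': 0.33} (approximately)
```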
There were others that are beyond the scope of this treatment, but the key takeaway is that gargantuan neural networks ended up winning the day.
To close this section out, we’ll mention a handful of architectural improvements that came out of this period and would play a crucial role in the evolution of language models. We’ll focus on two in particular: transformers and word vector embeddings.
If you’ve investigated how LLMs work, you’ve probably heard both terms. Transformers are famously intricate, but the basic idea is that they replaced the strictly sequential processing of predecessor architectures, such as recurrent neural networks, with self-attention. Self-attention lets a model weigh every token in a passage against every other token and selectively attend to the key pieces of information, allowing it to render higher-fidelity translations and higher-quality text generations.
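Here is a stripped-down sketch of the scaled dot-product self-attention at the heart of the transformer; real models add learned parameters across multiple attention “heads,” masking, and many stacked layers:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of token vectors X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])         # pairwise relevance scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: rows sum to 1
    return weights @ V  # each output is a weighted mix of all value vectors

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                         # 4 tokens, 8-dim vectors
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)          # (4, 8)
```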
Word vector embeddings are numerical representations of words that capture underlying semantic information. When interacting with ChatGPT, it can be easy to forget that computers don’t actually understand language; they understand numbers. A word vector embedding is an array of numbers generated with one of several different algorithms, with similar words having similar embeddings. LLMs can process these embeddings to learn the enormous statistical patterns in unstructured linguistic data, then use those patterns to generate their own outputs.
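The vectors below are made-up, four-dimensional toys; learned embeddings run to hundreds or thousands of dimensions, but the principle, that similar words get nearby vectors, is the same:

```python
import numpy as np

# Hypothetical toy embeddings; real ones are learned from data.
embeddings = {
    "king":  np.array([0.9, 0.8, 0.1, 0.2]),
    "queen": np.array([0.9, 0.7, 0.2, 0.3]),
    "apple": np.array([0.1, 0.2, 0.9, 0.8]),
}

def cosine_similarity(a, b):
    """Similarity of two vectors: near 1.0 for aligned, near 0 for unrelated."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high (~0.99)
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # low  (~0.33)
```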
All of this research went into making the productive neural networks that are currently changing the nature of work in places like contact centers. The last missing piece was data, which we’ll cover in the next section.
The Big Data Era
Neural networks and deep-learning applications tend to be extremely data-hungry, and access to quality training data has always been a major bottleneck. In 2009, Stanford’s Fei-Fei Li sought to change this by releasing ImageNet, a database of over 14 million labeled images that could be used for free by researchers. The increase in available data, together with substantial improvements in computer hardware like graphics processing units (GPUs), meant that at long last the promise of deep learning could begin to be fulfilled.
And it was. In 2011, IBM’s Watson system beat several Jeopardy! all-stars in a real game and Apple launched Siri; in 2012, a convolutional neural network called “AlexNet” decisively won the ImageNet image-recognition competition. Amazon’s Alexa followed in 2014, and from 2015 to 2017, DeepMind’s AlphaGo shocked the world by utterly dominating the best human Go players.
All of this set the stage for the rise of LLMs just a few short years later.
Where Are We Now in the Evolution of Large Language Models?
Now that we’ve discussed this history, we’re well-placed to understand why LLMs and generative AI have ignited so much controversy. People have been mulling over the promise (and peril) of thinking machines for literally thousands of years, and it looks like they might finally be here.
But what, exactly, has people so excited? What is it that advanced AI tools are doing that has captured the popular imagination? In the following sections, we’ll talk about the astonishing (and astonishingly rapid) improvements seen in language models in recent memory.
Getting to Human-Level
One of the more surprising things about LLMs such as ChatGPT is just how good they are at so many different things. LLMs are trained by feeding them samples of text data and having them predict which words come next given the words that came before.
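As a rough illustration of that objective, here is a minimal PyTorch-style sketch of a single training step. Note that `model` is a hypothetical stand-in for any GPT-style network that maps token ids to next-token logits:

```python
import torch.nn.functional as F

def training_step(model, token_ids, optimizer):
    """One step of the next-word-prediction objective described above.

    `model` is hypothetical: any network mapping a (sequence_length,) tensor
    of token ids to (sequence_length, vocab_size) next-token logits.
    """
    inputs, targets = token_ids[:-1], token_ids[1:]  # predict each next token
    logits = model(inputs)
    loss = F.cross_entropy(logits, targets)          # penalize wrong guesses
    optimizer.zero_grad()
    loss.backward()                                  # backpropagation
    optimizer.step()
    return loss.item()
```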
Modern LLMs can do this incredibly well, but what is remarkable is just how far this gets you. People are using generative AI to help them write poems, business plans, and code, create recipes based on the ingredients in their fridges, and answer customer questions.
What is Emergence in Language Models?
Perhaps even more interesting, however, is the phenomenon of emergence in language models. When researchers tested LLMs on a wide variety of tasks meant to be especially challenging to these models – things like identifying a movie given a string of emojis or finding legal chess moves – they found that for roughly 5% of tasks, performance jumps suddenly and sharply once a model reaches a certain size.
At present, it’s not really clear how we should think about emergence. One hypothesis for emergence is that a big enough model is able to learn some general piece of knowledge not attainable by a smaller sibling, while another, more prosaic one is that it’s a relatively straightforward consequence of the model’s internal statistical machinery.
What’s more, it’s difficult to pin down the conditions required for emergence in language models. Though it generally appears to be a function of model size, there are cases in which the same abilities can be achieved with smaller models, or with models trained on very high-quality data, and emergence shows up at different scales for different models and tasks.
Whatever ends up being the case, it’s clear that this is a promising direction for future research. Much more work needs to be done to understand precisely how LLMs accomplish what they accomplish. This will not only bear on the question of emergence, it will also inform the ongoing efforts to make language models safer and less biased.
LLM Agents
One of the bigger frontiers in LLM research is the creation of agents. ChatGPT and similar platforms can generate API calls and functioning code, but humans still need to copy and paste the code to actually do anything with it.
Agents are meant to get around this limitation. Auto-GPT, for example, pairs an underlying LLM with a “bot” that takes a high-level goal, breaks it down into subtasks an LLM can solve, and stitches together those solutions, as in the sketch below.
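This is a hypothetical sketch of that plan-solve-stitch loop, not Auto-GPT’s actual implementation; `llm` stands in for any function that sends a prompt to a model and returns its completion:

```python
from typing import Callable

def run_agent(goal: str, llm: Callable[[str], str], max_steps: int = 5) -> str:
    """Decompose a goal into subtasks, solve each, then combine the results.

    `llm` is a hypothetical stand-in: any prompt-in, completion-out function,
    e.g. a thin wrapper around your provider's chat API.
    """
    plan = llm(f"Break this goal into short, concrete subtasks, one per line: {goal}")
    results = []
    for subtask in plan.splitlines()[:max_steps]:
        # Solve each subtask, feeding in what has been accomplished so far.
        results.append(llm(
            f"Goal: {goal}\nCompleted so far: {results}\nNow do: {subtask}"
        ))
    return llm(f"Combine these partial results into one final answer: {results}")
```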
This work is still in its infancy, but it continues to be very promising.
Multimodal Models
Another development worth mentioning is the rise of multimodality. A model is “multimodal” when it can process more than one kind of information, such as images and text.
LLMs are staggeringly good at producing coherent language, and image models can do the same with images; a great deal of time and effort is now being spent on combining these two kinds of functionality.
The result has been models able to find specific sections of lengthy videos, generate images to accompany textual explanations, and create their own incredible videos from short, simple prompts.
It’s too early to tell what this will mean, but it’s already impacting branding, marketing, and related domains.
What’s Next for Large Language Models?
As with so many things, the meteoric rise of LLMs was presaged by decades of technical work and thousands of years of thought and speculation. In just a few short years, these models have become a strategic centerpiece for contact centers the world over.
If you want to get in on the action, you could start by learning more about how Quiq builds customer-facing AI assistants using LLMs. This will provide the context you need to make the wisest decision about deploying this remarkable technology.