4 Benefits of Using Generative AI to Improve Customer Experiences

Generative AI has captured the popular imagination and is already changing the way contact centers work.

One area in which it has enormous potential is also one that tends to be top of mind for contact center managers: customer experience.

In this piece, we’re going to briefly outline what generative AI is, then spend the rest of our time talking about how generative AI benefits can improve customer experience with personalized responses, endless real-time support, and much more.

What is Generative AI?

As you may have puzzled out from the name, “generative AI” refers to a constellation of different deep learning models used to dynamically generate output. This distinguishes them from other classes of models, which might be used to predict returns on Bitcoin, make product recommendations, or translate between languages.

The most famous example of generative AI is, of course, the large language model ChatGPT. After being trained on staggering amounts of textual data, it’s now able to generate extremely compelling output, much of which is hard to distinguish from actual human-generated writing.

Its success has inspired a panoply of competitor models from leading players in the space, including companies like Anthropic, Meta, and Google.

As it turns out, the basic approach underlying generative AI can be utilized in many other domains as well. After natural language, probably the second most popular way to use generative AI is to make images. DALL-E, MidJourney, and Stable Diffusion have proven remarkably adept at producing realistic images from simple prompts, and just the past week, Fable Studios unveiled their “Showrunner” AI, able to generate an entire episode of South Park.

But even this is barely scratching the surface, as researchers are also training generative models to create music, design new proteins and materials, and even carry out complex chains of tasks.

What is Customer Experience?

In the broadest possible terms, “customer experience” refers to the subjective impressions that your potential and current customers have as they interact with your company.

These impressions can be impacted by almost anything, including the colors and font of your website, how easy it is to find e.g. contact information, and how polite your contact center agents are in resolving a customer issue.

Customer experience will also be impacted by which segment a given customer falls into. Power users of your product might appreciate a bevy of new features, whereas casual users might find them disorienting.

Contact center managers must bear all of this in mind as they consider how best to leverage generative AI. In the quest to adopt a shiny new technology everyone is excited about, it can be easy to lose track of what matters most: how your actual customers feel about you.

Be sure to track metrics related to customer experience and customer satisfaction as you begin deploying large language models into your contact centers.

How is Generative AI For Customer Experience Being Used?

There are many ways in which generative AI is impacting customer experience in places like contact centers, which we’ll detail in the sections below.

Personalized Customer Interactions

Machine learning has a long track record of personalizing content. Netflix, take to a famous example, will uncover patterns in the shows you like to watch, and will use algorithms to suggest content that checks similar boxes.

Generative AI, and tools like the Quiq conversational AI platform that utilize it, are taking this approach to a whole new level.

Once upon a time, it was only a human being that could read a customer’s profile and carefully incorporate the relevant information into a reply. Today, a properly fine-tuned generative language model can do this almost instantaneously, and at scale.

From the perspective of a contact center manager who is concerned with customer experience, this is an enormous development. Besides the fact that prior generations of language models simply weren’t flexible enough to have personalized customer interactions, their language also tended to have an “artificial” feel. While today’s models can’t yet replace the all-elusive human touch, they can do a lot to add make your agents far more effective in adapting their conversations to the appropriate context.

Better Understanding Your Customers and Their Journies

Marketers, designers, and customer experience professionals have always been data enthusiasts. Long before we had modern cloud computing and electronic databases, detailed information on potential clients, customer segments, and market trends used to be printed out on dead treads, where it was guarded closely. With better data comes more targeted advertising, a more granular appreciation for how customers use your product and why they stop using it, and their broader motivations.

There are a few different ways in which generative AI can be used in this capacity. One of the more promising is by generating customer journeys that can be studied and mined for insight.

When you begin thinking about ways to improve your product, you need to get into your customers’ heads. You need to know the problems they’re solving, the tools they’ve already tried, and their major pain points. These are all things that some clever prompt engineering can elicit from ChatGPT.

We took a shot at generating such content for a fictional network-monitoring enterprise SaaS tool, and this was the result:

 

 

While these responses are fairly generic [1], notice that they do single out a number of really important details. These machine-generated journal entries bemoan how unintuitive a lot of monitoring tools are, how they’re not customizable, how they’re exceedingly difficult to set up, and how their endless false alarms are stretching the security teams thin.

It’s important to note that ChatGPT is not soon going to obviate your need to talk to real, flesh-and-blood users. Still, when combined with actual testimony, they can be a valuable aid in prioritizing your contact center’s work and alerting you to potential product issues you should be prepared to address.

Round-the-clock Customer Service

As science fiction movies never tire of pointing out, the big downside of fighting a robot army is that machines never need to eat, sleep, or rest. We’re not sure how long we have until the LLMs will rise up and wage war on humanity, but in the meantime, these are properties that you can put to use in your contact center.

With the power of generative AI, you can answer basic queries and resolve simple issues pretty much whenever they happen (which will probably be all the time), leaving your carbon-based contact center agents to answer the harder questions when they punch the clock in the morning after a good night’s sleep.

Enhancing Multilingual Support

Machine translation was one of the earliest use cases for neural networks and machine learning in general, and it continues to be an important function today. While ChatGPT was noticeably very good at multilingual translation right from the start, you may be surprised to know that it actually outperforms alternatives like Google Translate.

If your product doesn’t currently have a diverse global user base speaking many languages, it hopefully will soon, at the means you should start thinking about multilingual support. Not only will this boost table stakes metrics like average handling time and resolutions per hour, it’ll also contribute to the more ineffable “customer satisfaction.” Nothing says “we care about making your experience with us a good one” like patiently walking a customer through a thorny technical issue in their native tongue.

Things to Watch Out For

Of course, for all the benefits that come from using generative AI for customer experience, it’s not all upside. There are downsides and issues that you’ll want to be aware of.

A big one is the tendency of large language models to hallucinate information. If you ask it for a list of articles to read about fungal computing (which is a real thing whose existence we discovered yesterday), it’s likely to generate a list that contains a mix of real and fake articles.

And because it’ll do so with great confidence and no formatting errors, you might be inclined to simply take its list at face value without double-checking it.

Remember, LLMs are tools, not replacements for your agents. They need to be working with generative AI, checking its output, and incorporating it when and where appropriate.

There’s a wider danger that you will fail to use generative AI in the way that’s best suited to your organization. If you’re running a bespoke LLM trained on your company’s data, for example, you should constantly be feeding it new interactions as part of its fine-tuning, so that it gets better over time.

And speaking of getting better, sometimes machine learning models don’t get better over time. Owing to factors like changes in the underlying data, model performance can sometimes get worse over time. You’ll need a way of assessing the quality of the text generated by a large language model, along with a way of consistently monitoring it.

What are the Benefits of Generative AI for Customer Experience?

The reason that people are so excited over the potential of using generative AI for customer experience is because there’s so much upside. Once you’ve got your model infrastructure set up, you’ll be able to answer customer questions at all times of the day or night, in any of a dozen languages, and with a personalization that was once only possible with an army of contact center agents.

But if you’re a contact center manager with a lot to think about, you probably don’t want to spend a bunch of time hiring an engineering team to get everything running smoothly. And, with Quiq, you don’t have to – you can leverage generative AI to supercharge your customer experience while leaving the technical details to us!

Schedule a demo to find out how we can bring this bleeding-edge technology into your contact center, without worrying about the nuts and bolts.

Footnotes
[1] It’s worth pointing out that we spent no time crafting the prompt, which was really basic: “I’m a product manager at a company building an enterprise SAAS tool that makes it easier to monitor system breaches and issues. Could you write me 2-3 journal entries from my target customer? I want to know more about the problems they’re trying to solve, their pain points, and why the products they’ve already tried are not working well.” With a little effort, you could probably get more specific complaints and more usable material.

Understanding the Risk of ChatGPT: What you Should Know

OpenAI’s ChatGPT burst onto the scene less than a year ago and has already seen use in marketing, education, software development, and at least a dozen other industries.

Of particular interest to us is how ChatGPT is being used in contact centers. Though it’s already revolutionizing contact centers by making junior agents vastly more productive and easing the burnout contributing to turnover, there are nevertheless many issues that a contact center manager needs to look out for.

That will be our focus today.

What are the Risks of Using ChatGPT?

In the following few sections, we’ll detail some of the risks of using ChatGPT. That way, you can deploy ChatGPT or another large language model with the confidence born of knowing what the job entails.

Hallucinations and Confabulations

By far the most well-known failure mode of ChatGPT is its tendency to simply invent new information. Stories abound of the model making up citations, peer-reviewed papers, researchers, URLs, and more. To take a recent well-publicized example, ChatGPT accused law professor Jonathan Turley of having behaved inappropriately with some of his students during a trip to Alaska.

The only problem was that Turley had never been to Alaska with any of his students, and the alleged Washington Post story which ChatGPT claimed had reported these facts had also been created out of whole cloth.

This is certainly a problem in general, but it’s especially worrying for contact center managers who may increasingly come to rely on ChatGPT to answer questions or to help resolve customer issues.

To those not steeped in the underlying technical details, it can be hard to grok why a language model will hallucinate in this way. The answer is: it’s an artifact of how large language models train.

ChatGPT learns how to output tokens from being trained on huge amounts of human-generated textual data. It will, for example, see the first sentences in a paragraph, and then try to output the text that completes the paragraph. The example below is the opening lines of J.D. Salinger’s The Catcher in the Rye. The blue sentences are what ChatGPT would see, and the gold sentences are what it would attempt to create itself:

“If you really want to hear about it, the first thing you’ll probably want to know is where I was born, and what my lousy childhood was like, and how my parents were occupied and all before they had me, and all that David Copperfield kind of crap, but I don’t feel like going into it, if you want to know the truth.”

Over many training runs, a large language model will get better and better at this kind of autocompletion work, until eventually it gets to the level it’s at today.

But ChatGPT has no native fact-checking abilities – it sees text and outputs what it thinks is the most likely sequence of additional words. Since it sees URLs, papers, citations, etc., during its training, it will sometimes include those in the text it generates, whether or not they’re appropriate (or even real.)

Privacy

Another ongoing risk of using ChatGPT is the fact that it could potentially expose sensitive or private information. As things stand, OpenAI, the creators of ChatGPT, offer no robust privacy guarantees for any information placed into a prompt.

If you are trying to do something like named entity recognition or summarization on real people’s data, there’s a chance that it might be seen by someone at OpenAI as part of a review process. Alternatively, it might be incorporated into future training runs. Either way, the results could be disastrous.

But this is not all the information collected by OpenAI when you use ChatGPT. Your timezone, browser type and IP address, cookies, account information, and any communication you have with OpenAI’s support team is all collected, among other things.

In the information age we’ve become used to knowing that big companies are mining and profiting off the data we generate, but given how powerful ChatGPT is, and how ubiquitous it’s becoming, it’s worth being extra careful with the information you give its creators. If you feed it private customer data and someone finds out, that will be damaging to your brand.

Bias in Model Output

By now, it’s pretty common knowledge that machine learning models can be biased.

If you feed a large language model a huge amount of text data in which doctors are usually men and nurses are usually women, for example, the model will associate “doctor” with “maleness” and “nurse” with “femaleness.”
This is generally an artifact of the data the models were trained, and is not due to any malfeasance on the part of the engineers. This does not, however, make it any less problematic.

There are some clever data manipulation techniques that are able to go a long way toward minimizing or even eliminating these biases, though they’re beyond the scope of this article. What contact center managers need to do is be aware of this problem, and establish monitoring and quality-control checkpoints in their workflow to identify and correct biased output in their language models.

Issues Around Intellectual Property

Earlier, we briefly described the training process for a large language model like ChatGPT (you can find much more detail here.) One thing to note is that the model doesn’t provide any sort of citations for its output, nor any details as to how it was generated.

This has raised a number of thorny questions around copyright. If a model has ingested large amounts of information from the internet, including articles, books, forum posts, and much more, is there a sense in which it has violated someone’s copyright? What about if it’s an image-generation model trained on a database of Getty Images?

By and large, we tend to think this is the sort of issue that isn’t likely to plague contact center managers too much. It’s more likely to be a problem for, say, songwriters who might be inadvertently drawing on the work of other artists.

Nevertheless, a piece on the potential risks of ChatGPT wouldn’t be complete without a section on this emerging problem, and it’s certainly something that you should be monitoring in the background in your capacity as a manager.

Failure to Disclose the Use of LLMs

Finally, there has been a growing tendency to make it plain that LLMs have been used in drafting an article or a contract, if indeed they were part of the process. To the best of our knowledge, there are not yet any laws in place mandating that this has to be done, but it might be wise to include a disclaimer somewhere if large language models are being used consistently in your workflow. [1]

That having been said, it’s also important to exercise proactive judgment in deciding whether an LLM is appropriate for a given task in the first place. In early 2023, the Peabody School at Vanderbilt University landed in hot water when it disclosed that it had used ChatGPT to draft an email about a mass shooting that had taken place at Michigan State.

People may not care much about whether their search recommendations were generated by a machine, but it would appear that some things are still best expressed by a human heart.

Again, this is unlikely to be something that a contact center manager faces much in her day-to-day life, but incidents like these are worth understanding as you decide how and when to use advanced language models.

Someone stopping a series of blocks from falling into each other, symbolizing the prevention of falling victim to ChatGPT risks.

Mitigating the Risks of ChatGPT

From the moment it was released, it was clear that ChatGPT and other large language models were going to change the way contact centers run. They’re already helping agents answer more queries, utilize knowledge spread throughout the center, and automate substantial portions of work that were once the purview of human beings.

Still, challenges remain. ChatGPT will plainly make things up, and can be biased or harmful in its text. Private information fed into its interface will be visible to OpenAI, and there’s also the wider danger of copyright infringement.

Many of these issues don’t have simple solutions, and will instead require a contact center manager to exercise both caution and continual diligence. But one place where she can make her life much easier is by using a powerful, out-of-the-box solution like the Quiq conversational AI platform.

While you’re worrying about the myriad risks of using ChatGPT you don’t also want to be contending with a million little technical details as well, so schedule a demo with us to find out how our technology can bring cutting-edge language models to your contact center, without the headache.

Footnotes
[1] NOTE: This is not legal advice.

Request A Demo

The Ongoing Management of an LLM Assistant

Technologies like large language models (LLMs) are amazing at rapidly generating polite text that helps solve a problem or answer a question, so they’re a great fit for the work done at contact centers.

But this doesn’t mean that using them is trivial or easy. There are many challenges associated with the ongoing management of an LLM assistant, including hallucinations and the emergence of bad behavior – and that’s not even mentioning the engineering prowess required to fine-tune and monitor these systems.

All of this must be borne in mind by contact center managers, and our aim today is to facilitate this process.

We’ll provide broad context by talking about some of the basic ways in which large language models are being used in business, discuss, setting up an LLM assistant, and then enumerate some of the specific steps that need to be taken in using them properly.

Let’s go!

How Are LLMs Being Used in Science and Business?

First, let’s adumbrate some of the ways in which large language models are being utilized on the ground.

The most obvious way is by acting as a generative AI assistant. One of the things that so stunned early users of ChatGPT was its remarkable breadth in capability. It could be used to draft blog posts, web copy, translate between languages, and write or explain code.

This alone makes it an amazing tool, but it has since become obvious that it’s useful for quite a lot more.

One thing that businesses have been experimenting with is fine-tuning large language models like ChatGPT over their own documentation, turning it into a simple interface by which you can ask questions about your materials.

It’s hard to quantify precisely how much time contact center agents, engineers, or other people spend hunting around for the answer to a question, but it’s surely quite a lot. What if instead you could just, y’know, ask for what you want, in the same way that you do a human being?

Well, ChatGPT is a long way from being a full person, but when properly trained it can come close where question-answering is concerned.

Stepping back a little bit, LLMs can be prompt engineered into a number of useful behaviors, all of which redound to the benefit of the contact centers which use them. Imagine having an infinitely patient Socratic tutor that could help new agents get up to speed on your product and process, or crafting it into a powerful tool for brainstorming new product designs.

There have also been some promising attempts to extend the functionality of LLMs by making them more agentic – that is, by embedding them in systems that allow them to carry out more open-ended projects. AutoGPT, for example, pairs an LLM with a separate bot that hits the LLM with a chain of queries in the pursuit of some goal.

AssistGPT goes even further in the quest to augment LLMs by integrating them with a set of tools that allow them to achieve objectives involving images and audio in addition to text.

How to Set Up An LLM Assistant

Next, let’s turn to a discussion of how to set up an LLM assistant. Covering this topic fully is well beyond the scope of this article, but we can make some broad comments that will nevertheless be useful for contact center managers.

First, there’s the question of which large language model you should use. In the beginning, ChatGPT was pretty much the only foundation model on offer. Today, however, that situation has changed, and there are now foundation models from Anthropic, Meta, and many other companies.

One of the biggest early decisions you’ll have to make is whether you want to try and use an open-source model (for which the code and the model weights are freely available) or a close-source model (for which they are not).

If you go the closed-source route you’ll almost certainly be hitting the model over an API, feeding it your queries and getting its responses back. This is orders of magnitude simpler than provisioning an open-source model, but it means that you’ll also be beholden to the whims of some other company’s engineering team. They may update the model in unexpected ways, or simply go bankrupt, and you’ll be left with no recourse.

Using an open-source alternative, of course, means grabbing the other horn of the dilemma. You’ll have visibility into how the model works and will be free to modify it as you see fit, but this won’t be worth much unless you’re willing to devote engineering hours to the task.

Then, there’s the question of fine-tuning large language models. While ChatGPT and LLMs more generally are quite good on their own, having them answer questions about your product or respond in particular ways means modifying their behavior somehow.

Broadly speaking, there are two ways of doing this, which we’ve mentioned throughout: proper fine-tuning, and prompt engineering. Let’s dig into the differences.

Fine-tuning means showing the model many (i.e. several hundred) examples of the behaviors you want to see, which changes its internal weights and biases it towards those behaviors in the future.

Prompt engineering, on the other hand, refers to carefully structuring your prompts to elicit the desired behavior. These LLMs can be surprisingly sensitive to little details in the instructions they’re provided, and prompt engineers know how to phrase their requests in just the right way to get what they need.

There is also some middle ground between these approaches. “One-shot learning” is a form of prompt engineering in which the prompt contains a singular example of the desired behavior, while “few-shot learning” refers to including between three and five examples.

Contact center managers thinking about using LLMs will need to think about these implementation details. If you plan on only lightly using ChatGPT in your contact center, a basic course on prompt engineering might be all you need. If you plan on making it an integral part of your organization, however, that most likely means a fine-tuning pipeline and serious technical investment.

The Ongoing Management of an LLM

Having said all this, we can now turn to the day-to-day details of managing an LLM assistant.

Monitoring the Performance of an LLM

First, you’ll need to continuously monitor the model. As hard as it may be to believe given how perfect ChatGPT’s output often is, there isn’t a person somewhere typing the responses. ChatGPT is very prone to hallucinations, in which it simply makes up information, and LLMs more generally can sometimes fall into using harmful or abusive language if they’re prompted incorrectly.

This can be damaging to your brand, so it’s important that you keep an eye on the language created by the LLMs your contact center is using.

And of course, not even LLMs can obviate the need to track the all-import key performance indicators. So far, there’s been one major study on generative AI in contact centers that found they increased productivity and reduced turnover, but you’ll still want to measure customer satisfaction, average handle time, etc.

There’s always a temptation to jump on a shiny new technology (remember the blockchain?), but you should only be using LLMs if they actually make your contact center more productive, and the only way you can assess that is by tracking your figures.

Iterative Fine-Tuning and Training

We’ve already had a few things to say about fine-tuning and the related discipline of prompt engineering, and here we’ll build on those preliminary comments.
The big thing to bear in mind is that fine-tuning a large language model is not a one-and-done kind of endeavor. You’ll find that your model’s behavior will drift over time (the technical term is “model degradation”), and this means you will likely to have to periodically re-train it.

It’s also common to offer the model “feedback”, i.e. by ranking it’s responses or indicating when you did or did not like a particular output. You’ve probably heard of reinforcement learning through human feedback, which is one version of this process, but there are also others you can use.

Quality Assurance and Oversight

A related point is that your LLMs will need consistent oversight. They’re not going to voluntarily improve on their own (they’re algorithms with no personal initiative to speak of), so you’ll need to checking in routinely to make sure they’re performing well and that your agents are using them responsibly.

There are many parts to this, including checks on the models outputs and an audit process that allows you to track down any issues. If you suddenly see a decline in performance, for example, you’ll need to quickly figure out whether it’s isolated to one agent or part of a larger pattern. If it’s the former, was it a random aberration, or did the agent go “off script” in a way that caused the model to behave poorly?

Take another scenario, in which an end-user was shown inappropriate text generated by an LLM. In this situation, you’ll need to take a deeper look at your process. If there were agents interacting with this model, ask them why they failed to spot the problematic text and stop it being shown to a customer. Or, if it came from a mostly-automated part of your tech stack, you need to uncover the reasons for which your filters failed to catch it, and perhaps think about keeping humans more in the loop.

The Future of LLM Assistants

Though the future is far from certain, we tend to think that LLMs have left Pandora’s box for good. They’re incredibly powerful tools which are poised to transform how contact centers and other enterprises operate, and experiments so far have been very promising; for all these reasons, we expect that LLMs will become a steadily more important part of the economy going forward.

That said, the ongoing management of an LLM assistant is far from trivial. You need to be aware at all times of how your model is performing and how your agents are using it. Though it can make your contact center vastly more productive, it can also lead to problems if you’re not careful.

That’s where the Quiq platform comes in. Our conversational AI is some of the best that can be found anywhere, able to facilitate customer interactions, automate text-message follow-ups, and much more. If you’re excited by the possibilities of generative AI but daunted by the prospect of figuring out how TPUs and GPUs are different, schedule a demo with us today.

Request A Demo

How Do You Train Your Agents in a ChatGPT World?

There’s long been an interest in using AI for educational purposes. Technologist Danny Hillis has spent decades dreaming of a digital “Aristotle” that would teach everyone in the way that the original Greek wunderkind once taught Alexander the Great, while modern companies have leveraged computer vision, machine learning, and various other tools to help students master complex concepts in a variety of fields.

Still, almost nothing has sparked the kind of enthusiasm for AI in education that ChatGPT and large language models more generally have given rise to. From the first, its human-level prose, knack for distilling information, and wide-ranging abilities made it clear that it would be extremely well-suited for learning.

But that still leaves the question of how. How should a contact center manager prepare for AI, and how should she change the way she trains her agents?

In our view, this question can be understood in two different, related ways:

  1. How can ChatGPT be used to help agents master skills related to their jobs?
  2. How can they be trained to use ChatGPT in their day-to-day work?

In this piece, we’ll take up both of these issues. We’ll first provide a general overview of the ways in which ChatGPT can be used for both education and training, then turn to the question of the myriad ways in which contact center agents can be taught to use this powerful new technology.

How is ChatGPT Used in Education and Training?

First, let’s get into some of the early ways in which ChatGPT is changing education and training.

NOTE: Our comments here are going to be fairly broad, covering some areas that may not be immediately applicable to the work contact center agents do. The main purpose for this is that it’s very difficult to forecast how AI is going to change contact center work.

Our section on “creating study plans and curricula”, for example, might not be relevant to today’s contact center agents. But it could become important down the road if AI gives rise to more autonomous workflows in the future, in which case we expect that agents would be given more freedom to use AI and similar tools to learn the job on their own.

We pride ourselves on being forward-looking and forward-thinking here at Quiq, and we structure our content to reflect this.

Making a Socratic Tutor for Learning New Subjects

The Greek philosopher Socrates famously pioneered the instructional methodology which bears his name. Mostly, the Socratic method boils down to continuously asking targeted questions until areas of confusion emerge, at which point they’re vigorously investigated, usually in a small group setting.

A well-known illustration of this process is found in Plato’s Republic, which starts with an attempt to define “justice” and then expands into a much broader conversation about the best way to run a city and structure a social order.

ChatGPT can’t replace all of this on its own, of course, but with the right prompt engineering, it does a pretty good job. This method works best when paired with a primary source, such as a textbook, which will allow you to double-check ChatGPT’s questions and answers.

Having it Explain Code or Technical Subjects

A related area in which people are successfully using ChatGPT is in having it walk you through a tricky bit of code or a technical concept like “inertia”.

The more basic and fundamental, the better. In our experience so far, ChatGPT has almost never failed in correctly explaining simple Python, Pandas, or Java. It did falter when asked to produce code that translates between different orbital reference frames, however, and it had no idea what to do when we asked it about a fairly recent advance in the frontiers of battery chemistry.

There are a few different reasons that we advise caution if you’re a contact center agent trying to understand some part of your product’s codebase. For one thing, if the product is written in a less-common language ChatGPT might not be able to help much.

But even more importantly, you need to be extremely careful about what you put into it. There have already been major incidents in which proprietary code and company secrets were leaked when developers pasted them into the ChatGPT interface, which is visible to the OpenAI team.

Conversely, if you’re managing teams of contact center agents, you should begin establishing a policy on the appropriate uses of ChatGPT in your contact center. If your product is open-source there’s (probably) nothing to worry about, but otherwise, you need to proactively instruct your agents on what they can and cannot use the tool to accomplish.

Rewriting Explanations for Different Skill Levels

Wired has a popular Youtube series called “5 levels”, where experts in quantum computing or the blockchain will explain their subject at five different skill levels: “child”, “teen”, “college student”, “grad student”, and a fellow “expert.”

One thing that makes this compelling to beginners and pros alike is seeing the same idea explored across such varying contexts – seeing what gets emphasized or left out, or what emerges as you gradually climb up the ladder of complexity and sophistication.

This, too, is a place where ChatGPT shines. You can use it to provide explanations of concepts at different skill levels, which will ultimately improve your understanding of them.

For a contact center manager, this means that you can gradually introduce ideas to your agents, starting simply and then fleshing them out as the agents become more comfortable.

Creating Study Plans and Curricula

Stepping back a little bit, ChatGPT has been used to create entire curricula and even daily study plans for studying Spanish, computer science, medicine, and various other fields.

As we noted at the outset, we expect it will be a little while before contact center agents are using ChatGPT for this purpose, as most centers likely have robust training materials they like to use.

Nevertheless, we can project a future in which these materials are much more bare-bones, perhaps consisting of some general notes along with prompts that an agent-in-training can use to ask questions of a model trained on the company’s documentation, test themselves as they go, and gradually build skill.

Training Agents to Use ChatGPT

Now that we’ve covered some of the ways in which present and future contact center agents might use ChatGPT to boost their own on-the-job learning, let’s turn to the other issue we want to tackle today: how to train ChatGPT to agents today?

Getting Set Up With ChatGPT (and its Plugins)

First, let’s talk about how you can start using ChatGPT.

This section may end up seeming a bit anticlimactic because, honestly, it’s pretty straightforward. Today, you can get access to ChatGPT by going to the signup page. There’s a free version and a paid version that’ll set you back a whopping $20/month (which is a pretty small price to pay for access to one of the most powerful artifacts the human race has ever produced, in our opinion.)

As things stand, the free tier gives you access to GPT-3.5, while the paid version gives you the choice to switch to GPT-4 if you want the more powerful foundational model.

A paid account also gives you access to the growing ecosystem of ChatGPT plugins. You access the ChatGPT plugins by switching over to the GPT-4 option:

How do you Train Your Agents in a ChatGPT World?

 

How do you Train Your Agents in a ChatGPT World?

 

There are plugins that allow ChatGPT to browse the web, let you directly edit diagrams or talk with PDF documents, or let you offload certain kinds of computations to the Wolfram platform.

Contact center agents may or may not find any of these useful right now, but we predict there will be a lot more development in this space going forward, so it’s something managers should know about.

Best Practices for Combining Human and AI Efforts

People have long been fascinated and terrified by automation, but so far, machines have only ever augmented human labor. Knowing when and how to offload work to ChatGPT requires knowing what it’s good for.

Large language models learn how to predict the next token from their training data, and are therefore very good at developing rough drafts, outlines, and more routine prose. You’ll generally find it necessary to edit its output fairly heavily in order to account for context and so that it fits stylistically with the rest of your content.

As a manager, you’ll need to start thinking about a standard policy for using ChatGPT. Any factual claims made by the model, especially any references or citations, need to be checked very carefully.

Scenario-Based Training

In this same vein, you’ll want to distinguish between different scenarios in which your agents will end up using generative AI. There are different considerations in using Quiq Compose or Quiq Suggest to format helpful replies, for example, and in using it to translate between different languages.

Managers will probably want to sit down and brainstorm different scenarios and develop training materials for each one.

Ethical and Privacy Considerations

The rise of generative AI has sparked a much broader conversation about privacy, copyright, and intellectual property.

Much of this isn’t particularly relevant to contact center managers, but one thing you definitely should be paying attention to is privacy. Your agents should never be putting real customer data into ChatGPT, they should be using aliases and fake data whenever they’re trying to resolve a particular issue.

To quote fictional chemist and family man Walter White, we advise you to tread lightly here. Data breaches are a huge and ongoing problem, and they can do substantial damage to your brand.

ChatGPT and What it Means for Training Contact Center Agents

ChatGPT and related technologies are poised to change education and training. They can be used to help get agents up to speed or to work more efficiently, and they, in turn, require a certain amount of instruction to use safely.

These are all things that contact center managers need to worry about, but one thing you shouldn’t spend your time worrying about is the underlying technology. The Quiq conversational AI platform allows you to leverage the power of language models for contact centers, without looking at any code more complex than an API call. If the possibilities of this new frontier intrigue you, schedule a demo with us today!

The Pros and Cons of Using ChatGPT: Agents vs. Customers

If you’re a contact center manager who has been impressed with ChatGPT and everything it makes possible, a natural follow-up question is where you should deploy it.

On the one hand, you could use it internally to make your contact center agents more efficient. They’d be able to ask questions of your company documentation, summarize important emails, outsource the more trivial parts of their workload, and plenty besides.

On the other hand, you could use it externally as a customer-facing application. If you had clients that were confused about a feature or needed help figuring something out, ChatGPT could go a long way towards resolving their issues with minimal attention from your contact center agents.

Of course, there is major overlap in both these options, but there are crucial differences as well. In this article, we’ll discuss the pros and cons of using ChatGPT or a similar large language model (LLM) for contact center agents v.s. using it for customers.

How is ChatGPT Making Contact Center Agents More Efficient?

To a first approximation, a contact center is a place where questions are answered. No matter how clear your instructions or comprehensive your documentation, there will inevitably be users who simply can’t get an issue resolved, and that’s when they’ll reach out to customer support.

This means that much of a contact center agent’s day-to-day revolves around interacting with clients via text, either over a chat interface or possibly through text messaging.

What’s more, much of this interaction will be relatively formulaic. Customers will be repeatedly asking about similar sorts of issues, or there’ll be asking questions that are covered somewhere in your product’s documentation.

If you’ve spent even five minutes with ChatGPT, it’s probably occurred to you that it’s a powerful tool for handling exactly these kinds of tasks. Let’s spend a few minutes digging into this idea.

Outsourcing Routine Tasks

The most obvious way that ChatGPT is making contact center agents more efficient is by allowing them to outsource some of this more routine work.

There are a few ways this can happen. First, ChatGPT can help with answering basic questions. Today, large language models are not particularly good at generating highly original and inventive text, but when it comes to churning out helpful, simple boilerplate, they’re without peer.

This means that, with a little training or fine-tuning, your contact center agents can use ChatGPT to answer the sorts of questions they see multiple times a day, such as where a given feature is located or how to handle a common error. This will free them up to focus on the more involved queries, for which they have a comparative advantage.

In this same vein, tools like ChatGPT can also help contact center agents adopt the appropriate, polite tone in their correspondences. Customer experience and customer service are major parts of being a contact center agent, which means replies must be crafted so as to put the customer (who may be frustrated, angry, and belligerent) at ease.

This is something ChatGPT excels at, and according to the paper “Generative AI at work”, this exact dynamic was responsible for a lot of the gain in productivity seen in a contact center that began using an LLM. The model was trained on the interactions of more seasoned agents who know how to deal with tricky customers, and a good portion of this ability was transferred to more junior agents via the model’s output.

Another place where ChatGPT can help is in writing documentation. This may fall to a technical writer rather than an actual agent, but in either case, ChatGPT’s remarkable ability to provide outlines and quickly generate expository text can speed up the process of documenting your product’s core features.

And finally, ChatGPT is quite good at writing and explaining simple code. As with documentation, it’s doubtful that a contact center agent is going to be spending much time writing code. Nevertheless, your agents might find themselves hit with questions from savvier users about e.g. API integrations, so they should know that they can query ChatGPT about what a code snippet is doing, and they can have it generate a basic code example if they need to.

Learning and Brainstorming

This is a bit more abstract, but ChatGPT has proven remarkably useful in brainstorming study plans, solutions to problems, etc. Though the algorithm itself isn’t particularly creative, when it generates ideas that a human being can riff off of the combination of algorithm + human can be much more creative than a human working by herself.

While there will be many situations in which a contact center agent has a script to work off of, when they don’t, turning to ChatGPT can be the spark that moves them forward.

ChatGPT Plugins for Contact Center Agents

One of the more exciting developments for ChatGPT was the release of its plugin library in March of 2023. There are now plugins from Instacart (for food delivery), Expedia (for trip planning), Klarna Shopping (for online retail), and many others.

Truthfully, most of this won’t (yet) be of much use for contact center agents, but it’s worth mentioning given how quickly people are developing new plugins. If you’re a contact center agent or manager wanting to extend the functionality of powerful LLM technologies, plugins are something you’ll want to be aware of.

Getting the Most out of ChatGPT for Customer Service

ChatGPT is remarkably good for a wide range of tasks, but to really leverage its full capacities you’ll need to be aware of a few common terms.

Large language models are known to be really sensitive to small changes in word choice and structure, which means there’s an art to phrasing your requests just so. This is known as “prompt engineering” a language model, and it’s a new discipline that can be enormously valuable if done correctly.

You can also get better results if you show ChatGPT an example or two of what you’re looking for. This is known as “one-shot” learning (if you show it one example), and “few-shot” learning (if you show it five or six).

Of course, if that doesn’t work you can instead try to fine-tune a large language model. This involves gathering hundreds of examples of the conversations, text, or output you want to see and feeding them all to the model (probably over its API) so that the model’s internal structure actually changes. Though it’s obviously a more significant engineering challenge, it will probably give you the best results of all.

ChatGPT v.s. Chatbots

We in the customer experience field have quite a lot of experience with chatbots, so it’s natural to wonder how ChatGPT is different.

Chatbots are just algorithms that are capable of carrying on a dialogue with customers, and this can be accomplished in many different ways. Some chatbots are extremely simple and follow a rules-based approach to formatting their responses, while those based on neural networks or some other advanced machine-learning technology are much more flexible.

Chatbots can be built with ChatGPT, but most aren’t.

How is ChatGPT Changing Customer Experience?

Now that we’ve covered some of the ways in which ChatGPT is helping customer service agents, let’s discuss some of the ways ChatGPT is used for customer support.

Personalized Responses

One property of ChatGPT that makes it extremely effective is that it’s able to remember the context. When you chat with ChatGPT, it’s not generating each new response in a vacuum, it’s producing them either on the basis of what has already been said or based on information that it’s been given.

This means that if you have a customer interacting with a chat interface powered by a LLM (and are being smart by guardrailing it with a conversational CX platform like Quiq), they’ll be able to have more open-ended and personalized interactions with the tool than would be possible with simpler chatbots.

This will go a long way toward making them feel like they’re being taken care of, thus boosting your company’s overall customer satisfaction.

Automatically Resolving Customer Issues

Earlier, we talked about how contact center agents would be able to leverage ChatGPT in order to outsource their more routine tasks.

Well, one of those routine tasks is resolving a steady stream of quotidian issues. How many times a day do you think a contact center agent has to help a person log in to their client’s software or reset a password? It’s probably not “hundreds”, but we’d bet that it’s a lot.

ChatGPT is a long way away from being able to patiently guide a user through any arbitrary problem they might have, but it’s already more than capable of handling the kinds simple of repetitive, basic queries that sap an agent’s energy.

Automatic Natural Language Translation

One of the surprising places where ChatGPT excels is in fast, accurate translation between multiple languages. Given the fact that English is so commonly used in the technical community, it can be easy to lose sight of the fact that billions of people have either no knowledge of it or, at best, a very rudimentary grasp.

But not many can afford to have all their documentation translated into dozens of different tongues or to keep a team of translators on staff. ChatGPT is almost certainly not going to capture every little nuance in a translation, but it should be sufficient to help a person resolve their issue on their own or to ask more pointed, technical questions.

Dangers in Using ChatGPT

Whether you end up letting your agents or your customers get ahold of ChatGPT first, you should know that it’s not a panacea, nor is it perfect. It can and will fail, and some of those failures are reasonably predictable ones you should be prepared for.

The most obvious and well-known failure is referred to as a “hallucination”, and it results from the way that LLMs like ChatGPT are trained. An LLM learns how to output sequences of tokens, it’s not doing any fact-checking on its own. That means it will cheerfully and confidently make up names, book titles, and URLs.

It’s also possible for ChatGPT to become obnoxious and insulting. The team at OpenAI has done a good job of tuning this behavior out, but recall that these systems are very sensitive to the way prompts are structured, and it can reemerge.

There’s no general solution to these issues as far as we know. You can assiduously construct a fine-tuning pipeline for LLMs that does even more to get rid of toxicity, but ultimately you’re going to have to monitor ChatGPT’s output to see if it’s straying or otherwise being unhelpful.

Quiq specializes in defining guardrails for enterprise businesses who want to harness ChatGPT’s benefits, but are brand protective.

Figuring Out Where to Deploy ChatGPT

Whether it makes more sense to use ChatGPT internally or externally will depend a lot on your circumstances. There’s a lot ChatGPT can do to make your contact center agents more efficient, but if you’re just wanting to offload basic customer queries they can certainly be useful for that purpose.

In our considered opinion, the ROI is ultimately higher for using ChatGPT in a customer-facing way. This will allow your clients to help themselves, ultimately boosting their satisfaction and their estimation of your product.

But whichever way you choose to go, you can substantially reduce the headache associated with managing the infrastructure for this complex technology by making use of the Quiq conversational CX platform. With us, you can get world-leading results, satisfy your customer, lighten the load on your agents, and never have to worry about a rogue answer,  compute cluster, or GPU.

Ways to Use ChatGPT for Customer Service

Now that we’ve all seen what ChatGPT can do, it’s natural to begin casting about for ways to put it to work. An obvious place where a generative AI language model can be used is in contact centers, which involve a great deal of text-based tasks like answering customer questions and resolving their issues.

But is ChatGPT ready for the on-the-ground realities of contact centers? What if it responds inappropriately, abuses a customer, or provides inaccurate information?

We at Quiq pride ourselves on being experts in the domain of customer experience and customer service, and we’ve been watching the recent developments in the realm of generative AI for some time. This piece presents our conclusions about what ChatGPT is, the ways in which ChatGPT can be used for customer service, and the techniques that exist to optimize it for this domain.

What is ChatGPT?

ChatGPT is an application built on top of GPT-4, a large language model. Large language models like GTP-4 are trained on huge amounts of textual data, and they gradually learn the statistical patterns present in that data well enough to output their own, new text.

How does this training work? Well, when you hear a sentence like “I’m going to the store to pick up some _____”, you know that the final word is something like “milk”, “bread”, or “groceries”, and probably not “sawdust” or “livestock”. This is because you’ve been using English for a long time, you’re familiar with what happens at a grocery store, and you have a sense of how a person is likely to describe their exciting adventures there (nothing gets our motor running like picking out the really good avocados).

GPT-4, of course, has none of this context, but if you show it enough examples it can learn to imitate natural language quite well. It will see the first few sentences of a paragraph and try to predict what the final sentence is. At first, its answers are terrible, but with each training run its internal parameters are updated, and it gradually gets better. If you do this for long enough you get something that can write its own emails, blog posts, research reports, book summaries, poems, and codebases.

Is ChatGPT the Same Thing as GPT-4?

So then, how is ChatGPT different from GPT-4? GPT-4 is the large language model trained in the manner just described, and ChatGPT is a version fine-tuned using reinforcement learning with human feedback to be good at conversations.

Fine-tuning refers to a process of taking a pre-trained language model and doing a little extra work to narrow its focus to doing a particular task. A generic LLM can do many things, including write limericks; but if you want it to consistently write high-quality limericks, you’ll need to fine-tune it by showing it a few dozen or a few hundred examples of them.

From that point on it will be specialized for limerick production, and might consequently be less useful for other tasks.

This is how ChatGPT was created. After GPT-3.5 or GPT-4 was finished training, engineers did additional fine-tuning work that led to a model that was especially good at having open-ended interactions with users.

What does ChatGPT mean for Customer Service?

Given that ChatGPT is useful for customer interactions, how might it be deployed in customer service? We believe that a good list of initial use cases includes question answering, personalizing responses to different customers, summarizing important information, translating between languages, and performing sentiment analysis.

This is certainly not everything current and future versions of ChatGPT will be able to do for customer service, but we think it’s a good place to start.

Question Answering

Question answering has long been of such interest to machine learning engineers that there’s a whole bespoke dataset specifically for it (the Stanford Question Answering Dataset, or SQuAD).

It’s not hard to see why. Humans can obviously answer questions, but there are so many possible questions that there’s just no way to get to it all. What if you’d like high-level summaries of all the major research papers published about an obscure scientific sub-discipline? What if you’d like to see how the tone of Victorian-era English novels changed over time? There are only so many person-hours that can go toward digging into queries like this.

Customers, too, have many questions, and answering them takes a lot of time. You could collect all the frequently asked questions and put them into a single document for easy reference, but there are still going to be areas of confusion and requests for clarification (and that’s not even considering the fraction of users who never make it to your FAQ page in the first place).

Automating the process of asking questions is an obvious place to utilize technology like ChatGPT. It’ll never get frustrated answering the same thing thousands of times, it’ll never lose its patience, it’ll never sleep, and it’ll never take a bathroom break.

Vanilla ChatGPT is pretty good at doing this already, and there are already many projects focused on getting it to answer questions about a particular company’s documentation.

This functionality will enable you to field an effectively unlimited number of customer questions while freeing up your contact center agents to tackle more important issues.

Onboarding New Hires

Customers are not the only people who might have questions about your product – new hires unfamiliar with your process for doing things might also have their fair share of confusion.

Even in companies that are very conscious about documentation, there can often be so much to get through that new employees – who already have a lot going on – can feel overwhelmed.

A large language model trained to answer questions about your documentation will be a godsend to the fresh troops you’ve brought in.

Summarization

A related task is summarizing email threads, important technical documents, or even videos.

Just as you can’t realistically expect every customer to assiduously look through all your company’s documents, it’s usually not realistic to expect that all of your own employees will do so either.

Here, too, is a place where ChatGPT can be useful. It’s quite good at taking a lengthy bit of text and summarizing it, so there’s no reason it can’t be used to keep your teams up to speed on what’s happening in parts of the organization that they don’t interact with all that often.

If your engineers don’t want to go over an exchange between product designers, or your marketing team doesn’t want all the details of a conversation between the data scientists, ChatGPT can be used to create summaries of these interactions for easier reading.

This way everyone knows what’s going on throughout the company without needing to spend hours every day staying abreast of evolving issues.

At Quiq, we’ve developed proprietary ways to harness ChaGPT’s generative abilities to summarize conversations for your contact center agents.

Sentiment Analysis

Finally, another way in which ChatGPT will power the contact center of the future is with sentiment analysis. Sentiment analysis refers to a branch of machine learning aimed at parsing the overall tone of a piece of text. This can be more subtle than you might think.

“I hate this restaurant” is pretty unambiguous, but what about a review like “Yeah, we loved this restaurant, we had plenty of time to chat because the food took an hour to come out, and since my enchilada was frozen it counteracted my usual inability to eat spicy food”? You and I can hear the implied eye-rolling in this text, but a machine won’t necessarily be able to unless it’s very powerful.

This matters for contact centers because you need to understand how people are talking about your product, whether that’s in online reviews, internal tickets, or during conversations with your agents.

And ChatGPT can help. It’s not only quite good at sentiment analysis, but it’s also better than quite a lot of alternative machine-learning approaches to sentiment analysis, even without fine-tuning.

(Note, however, that these tests compare it to relatively simple machine learning models, not to the very best deep-learning sentiment analyzers.)

Prioritizing Incoming Issues

One way that ChatGPT can add tremendous value to your contact center is in helping to prioritize issues as they come in. There are always lots of problems to solve, but they’re not all equally important. Finding the most pressing issues and marking them for resolution is a huge part of keeping your center running smoothly.

This is something that humans can do, but there’s only so much energy they can devote to this task. A properly trained generative language model, however, can handle a huge chunk of it, especially when it forms part of a broader suite of AI tools.

One way this could work is using ChatGPT for plucking out essential keywords from a customer service ticket. This by itself might be enough to help your contact center agents figure out what they should focus on, but it can be made even better if these words are then fed to a classification algorithm trained to identify urgent problems.

Real-time Language Translation

Language translation, too, is a clear use case for LLMs, and the deep learning upon which they are based has seen much success in translating from one language to another.

This is especially useful if your product or service enjoys a global audience. Many people have a passing familiarity with English but will not necessarily be able to follow a detailed procedure involving technical vocabulary, and that will be a source of frustration for them.

By substantially or totally automating real-time language translation, ChatGPT can help customers who lack English fluency to better interact with your company’s offerings, answering their questions, resolving their issues, and in general moving them along.

And in case you’re wondering, ChatGPT is currently even better than Google Translate or DeepL at most translation tasks, including tricky ones involving jokes and humor.

Fine-Tuning ChatGPT for Customer Service

So far we’ve mostly talked about ChatGPT out of the box, but we’ve also made some references to “fine-tuning” it.

In this section, we’ll flesh out our earlier comments about fine-tuning ChatGPT, and distinguish fine-tuning from related techniques, like prompt engineering.

What is Fine-Tuning ChatGPT?

Once upon a time, it was anyone’s guess as to whether you’d be able to pre-train a single large model on a dataset and then tweak it for particular applications, or whether you’d need to train a special model for every individual task.

Beginning around 2011, it became increasingly clear that for many applications, pre-training was the way to go, and since then, many techniques have been developed for doing the subsequent fine-tuning.

When you fine-tune a pre-trained generative AI model, you are effectively altering its internal structure so that it does better on the task you’re interested in. Sometimes this involves changing the whole model, other times you’re altering the last few output layers and leaving the rest of the model intact.

But what it ultimately boils down to is creating a fine-tuning pipeline through which your model sees a lot of examples of the behavior you’re trying to elicit. If you were fine-tuning it to be more polite in its follow-up questions, for example, you’d need to collect a bunch of examples of this politeness and have your model learn on them.

How many examples you end up needing will depend on your specific use case, but it’s usually a few dozen and could be as many as a few hundred.

How is Fine-Tuning Different From Prompt Engineering?

Prompt engineering refers to the practice of carefully sculpting the prompt you feed your model to do a better job of producing the output you want to see.

The reason this works is that GPT-4 and other LLMs are extremely sensitive to slight changes in the wording of their prompts. It takes a while to develop the feel required to reliably produce good results with an LLM, and all of this falls under the label of “prompt engineering”.

It’s possible to inject some light fine-tuning into prompt engineering, through one-shot and few-shot learning. One-shot learning means including one example of the behavior you want to see in your prompt, and few-shot learning is the same idea, but you’re including 2-5 examples for the LLM to learn from.

FAQs About ChatGPT for Customer Service

Now that we’ve finished our discussion of the basics of ChatGPT for customer service, we’ll spend some time addressing common questions about this subject.

Can I Use ChatGPT for Customer Service?

Yes! ChatGPT is ideal for customer service applications, but you need to fine-tune ChatGPT on your own company’s documentation or to get it to strike the right tone. With the right guardrails, it’s a powerful tool for those looking to build a forward-looking contact center.

What are the Examples of ChatGPT in Customer Service?

ChatGPT can be used for customer service tasks like question answering, sentiment analysis, translating between natural languages, and summarizing documents. These are all time-intensive tasks, the automation of which will free up your contact center agents to focus on higher-priority work.

Can you Automate Customer Service?

Tools like AutoGPT and SuperAGI are making it easier than ever to create and manage sophisticated agents capable of handling open-ended tasks. Still, artificial intelligence is not yet flexible enough to entirely automate customer service at present.

It can be used to automate substantial parts of customer service, like answering user questions, but for the moment the lion’s share of the work must still be done by flesh-and-blood human beings.

If you’re interested in developments in this space, be sure to follow the Quiq blog for updates.

ChatGPT and the Contact Center of the Future

ChatGPT and related technologies are already changing the way contact centers function. From automated translation to helping field dramatically more questions per hour, they are helping contact center agents be more productive and reducing organizational turnover.

The Quiq platform is an excellent tool for incorporating conversational AI into your offering, without having to hire a team or manage your own infrastructure. Quiq can help you automate text messaging, handle real-time translation, and track the performance of your AI Assistants to see where improvements need to be made.

What Is Transfer Learning? – The Role of Transfer Learning in Building Powerful Generative AI Models

Machine learning is hard work. Sure, it only takes a few minutes to knock out a simple tutorial where you’re training an image classifier on the famous iris dataset, but training a big model to do something truly valuable – like interacting with customers over a chat interface – is a much greater challenge.

Transfer learning offers one possible solution to this problem. By making it possible to train a model in one domain and reuse it in another, transfer learning can reduce demands on your engineering team by a substantial amount.

Today, we’re going to get into transfer learning, defining what it is, how it works, where it can be applied, and the advantages it offers.

Let’s get going!

What is Transfer Learning in AI?

In the abstract, transfer learning refers to any situation in which knowledge from one task, problem, or domain is transferred to another. If you learn how to play the guitar well and then successfully use those same skills to pick up a mandolin, that’s an example of transfer learning.

Speaking specifically about machine learning and artificial intelligence, the idea is very similar. Transfer learning is when you pre-train a model on one task or dataset and then figure out a way to reuse it for another (we’ll talk about methods later).

If you train an image model, for example, it will tend to learn certain low-level features (like curves, edges, and lines) that show up in pretty much all images. This means you could fine-tune the pre-trained model to do something more specialized, like recognizing faces.

Why Transfer Learning is Important in Deep Learning Models

Building a deep neural network requires serious expertise, especially if you’re doing something truly novel or untried.

Transfer learning, while far from trivial, is simply not as taxing. GPT-4 is the kind of project that could only have been tackled by some of Earth’s best engineers, but setting up a fine-tuning pipeline to get it to do good sentiment analysis is a much simpler job.

By lowering the barrier to entry, transfer learning brings advanced AI into reach for a much broader swath of people. For this reason alone, it’s an important development.

Transfer Learning vs. Fine-Tuning

And speaking of fine-tuning, it’s natural to wonder how it’s different from transfer learning.

The simple answer is that fine-tuning is a kind of transfer learning. Transfer learning is a broader concept, and there are other ways to approach it besides fine-tuning.

What are the 5 Types of Transfer Learning?

Broadly speaking, there are five major types of transfer learning, which we’ll discuss in the following sections.

Domain Adaptation

Under the hood, most modern machine learning is really just an application of statistics to particular datasets.

The distribution of the data a particular model sees, therefore, matters a lot. Domain adaptation refers to a family of transfer learning techniques in which a model is (hopefully) trained such that it’s able to handle a shift in distributions from one domain to another (see section 5 of this paper for more technical details).

Domain Confusion

Earlier, we referenced the fact that the layers of a neural network can learn representations of particular features – one layer might be good at detecting curves in images, for example.

It’s possible to structure our training such that a model learns more domain invariant features, i.e. features that are likely to show up across multiple domains of interest. This is known as domain confusion because, in effect, we’re making the domains as similar as possible.

Multitask Learning

Multitask learning is arguably not even a type of transfer learning, but it came up repeatedly in our research, so we’re adding a section about it here.

Multitask learning is what it sounds like; rather than simply training a model on a single task (i.e. detecting humans in images), you attempt to train it to do several things at once.

The debate about whether multitask learning is really transfer learning stems from the fact that transfer learning generally revolves around adapting a pre-trained model to a new task, rather than having it learn to do more than one thing at a time.

One-Shot Learning

One thing that distinguishes machine learning from human learning is that the former requires much more data. A human child will probably only need to see two or three apples before they learn to tell apples from oranges, but an ML model might need to see thousands of examples of each.

But what if that weren’t necessary? The field of one-shot learning addresses itself to the task of learning e.g. object categories from either one example or a small number of them. This idea was pioneered in “One-Shot Learning of Object Categories”, a watershed paper co-authored by Fei-Fei Li and her collaborators. Their Bayesian one-shot learner was able to “…to incorporate prior knowledge of the object world into the learning scheme”, and it outperformed a variety of other models in object recognition tasks.

Zero-Shot Learning

Of course, there might be other tasks (like translating a rare or endangered language), for which it is effectively impossible to have any labeled data for a model to train on. In such a case, you’d want to use zero-shot learning, which is a type of transfer learning.

With zero-shot learning, the basic idea is to learn features in one data set (like images of cats) that allow successful performance on a different data set (like images of dogs). Humans have little problem with this, because we’re able to rapidly learn similarities between types of entities. We can see that dogs and cats both have tails, both have fur, etc. Machines can perform the same feat if the data is structured correctly.

How Does Transfer Learning Work?

There are a few different ways you can go about utilizing transfer learning processes in your own projects.

Perhaps the most basic is to use a good pre-trained model off the shelf as a feature extractor. This would mean keeping the pre-trained model in place, but then replacing its final layer with a layer custom-built for your purposes. You could take the famous AlexNet image classifier, remove its last classification layer, and replace it with your own, for example.

Or, you could fine-tune the pre-trained model instead. This is a more involved engineering task and requires that the pre-trained model be modified internally to be better suited to a narrower application. This will often mean that you have to freeze certain layers in your model so that the weights don’t change, while simultaneously allowing the weights in other layers to change.

What are the Applications of Transfer Learning?

As machine learning and deep learning have grown in importance, so too has transfer learning become more crucial. It now shows up in a variety of different industries. The following are some high-level indications of where you might see transfer learning being applied.

Speech recognition across languages: Teaching machines to recognize and process spoken language is an important area of AI research and will be of special interest to those who operate contact centers. Transfer learning can be used to take a model trained in a language like French and repurpose it for Spanish.

Training general-purpose game engines: If you’ve spent any time playing games like chess or go, you know that they’re fairly different. But, at a high enough level of abstraction, they still share many features in common. That’s why transfer learning can be used to train up a model on one game and, under certain conditions, use it in another.

Object recognition and segmentation: Our Jetsons-like future will take a lot longer to get here if our robots can’t learn to distinguish between basic objects. This is why object recognition and object segmentation are both such important areas of research. Transfer learning is one way of speeding up this process. If models can learn to recognize dogs and then quickly be re-purposed for recognizing muffins, then we’ll soon be able to outsource both pet care and cooking breakfast.

transfer_learning_chihuahua
In fairness to the AI, it’s not like we can really tell them apart!

Applying Natural Language Processing: For a long time, computer vision was the major use case of high-end, high-performance AI. But with the release of ChatGPT and other large language models, NLP has taken center stage. Because much of the modern NLP pipeline involves word vector embeddings, it’s often possible to use a baseline, pre-trained NLP model in applications like topic modeling, document classification, or spicing up your chatbot so it doesn’t sound so much like a machine.

What are the Benefits of Transfer Learning?

Transfer learning has become so popular precisely because it offers so many advantages.

For one thing, it can dramatically reduce the amount of time it takes to train a new model. Because you’re using a pre-trained model as the foundation for a new, task-specific model, far fewer engineering hours have to be spent to get good results.

There are also a variety of situations in which transfer learning can actually improve performance. If you’re using a good pre-trained model that was trained on a general enough dataset, many of the features it learned will carry over to the new task.

This is especially true if you’re working in a domain where there is relatively little data to work with. It might simply not be possible to train a big, cutting-edge model on a limited dataset, but it will often be possible to use a pre-trained model that is fine-tuned on that limited dataset.

What’s more, transfer learning can work to prevent the ever-present problem of overfitting. Overfitting has several definitions depending on what resource you consult, but a common way of thinking about it is when the model is complex enough relative to the data that it begins learning noise instead of just signal.

That means that it may do spectacularly well in training only to generalize poorly when it’s shown fresh data. Transfer learning doesn’t completely rule out this possibility, but it makes it less likely to happen.

Transfer learning also has the advantage of being quite flexible. You can use transfer learning for everything from computer vision to natural language processing, and many domains besides.

Relatedly, transfer learning makes it possible for your model to expand into new frontiers. When done correctly, a pre-trained model can be deployed to solve an entirely new problem, even when the underlying data is very different from what it was shown before.

When To Use Transfer Learning

The list of benefits we just enumerated also offers a clue as to when it makes sense to use transfer learning.

Basically, you should consider using transfer learning whenever you have limited data, limited computing resources, or limited engineering brain cycles you can throw at a problem. This will often wind up being the case, so whenever you’re setting your sights on a new goal, it can make sense to spend some time seeing if you can’t get there more quickly by simply using transfer learning instead of training a bespoke model from scratch.

Check out the second video in Quiq’s LLM Intuitions series—created by our Head of AI, Kyle McIntyre—to learn about one of the oldest forms of transfer learning: Word embeddings.

Transfer Learning and You

In the contact center space, we understand how difficult it can be to effectively apply new technologies to solve our problems. It’s one thing to put together a model for a school project, and quite another to have it tactfully respond to customers who might be frustrated or confused.

Transfer learning is one way that you can get more bang for your engineering buck. By training a model on one task or dataset and using it on another, you can reduce your technical budget while still getting great results.

You could also just rely on us to transfer our decades of learning on your behalf (see what we did there). We’ve built an industry-leading conversational AI chat platform that is changing the game in contact centers. Reach out today to see how Quiq can help you leverage the latest advances in AI, without the hassle.

How Generative AI is Supercharging Contact Center Agents

If you’re reading this, you’ve probably had a chance to play around with ChatGPT or one of the other large language models (LLMs) that have been making waves and headlines in recent months.

Concerns around automation go back a long way, but there’s long been extra worry about the possibility that machines will make human labor redundant. If you’ve used generative AI to draft blog posts or answer technical questions, it’s natural to wonder if perhaps algorithms will soon be poised to replace humans in places like contact centers.

Given how new these LLMs are there has been little scholarship on how they’ve changed the way contact centers function. But “Generative AI at Work” by Erik Brynjolfsson, Danielle Li, and Lindsey R. Raymond took aim at exactly this question.

The results are remarkable. They found that access to tools like ChatGPT not only led to a marked increase in productivity among the lowest-skilled workers, it also had positive impacts on other organizational metrics, like reducing turnover.

Today, we’re going to break this economic study down, examining its methods, its conclusions, and what they mean for the contact centers of the future.

Let’s dig in!

A Look At “Generative AI At Work”

The paper studies data from the use of a conversational AI assistant by a little over 5,000 agents working in customer support.

It contains several major sections, beginning with a technical primer on what generative AI is and how it works before moving on to a discussion of the study’s methods and results.

What is Generative AI?

Covering the technical fundamentals of generative AI will inform our efforts to understand the ways in which this AI technology affected work in the study, as well as how it is likely to do so in future deployments.

A good way to do this is to first grasp how traditional, rules-based programming works, then contrast this with generative AI.

When you write a computer program, you’re essentially creating a logical structure that furnishes instructions the computer can execute.

To take a simple case, you might try to reverse a string such as “Hello world”. One way to do this explicitly is to write code in a language like Python which essentially says:

“Create a new, empty list, then start at the end of the string we gave you and work forward, successively adding each character you encounter to that list before joining all the characters into a reversed string”:

Python code demonstrating a reverse string.

Despite the fact that these are fairly basic instructions, it’s possible to weave them into software that can steer satellites and run banking infrastructure.

But this approach is not suitable for every kind of problem. If you’re trying to programmatically identify pictures of roses, for example, it’s effectively impossible to do this with rules like the ones we used to reverse the string.

Machine learning, however, doesn’t even try to explicitly define any such rules. It works instead by feeding a model many pictures of roses, and “training” it to learn a function that lets it identify new pictures of roses it has never seen before.

Generative AI is a kind of machine learning in which gargantuan models are trained on mind-boggling amounts of text data until they’re able to produce their own, new text. Generative AI is a distinct sub-branch of ML because its purpose is generation, while other kinds of models might be aimed at tasks like classification and prediction.

Is Generative AI The Same Thing As Large Language Models?

At this point, you might be wondering how whether generative AI is the same thing as LLMs. With all the hype and movement in the space, it’s easy to lose track of the terminology.

LLMs are a subset of the broader category of generative AI. All LLMs are generative AI, but there are generative algorithms that work with images, music, chess moves, and other things besides natural language.

How Did The Researchers Study the Effects of Generative AI on Work?

Now we understand that ML learns to recognize patterns, how this is different from classical computer programming, and how generative AI fits into the whole picture.

We can now get to the meat of the study, beginning with how Brynjolfsson, Li, and Raymond actually studied the use of generative AI by workers at a contact center.

The firm from which they drew their data is a Fortune 500 company that creates enterprise software. Its support agents are located mainly in the Phillippines (with a smaller number in the U.S.) to resolve customer issues via a chat interface.

Most of the agent’s job boils down to answering questions from the owners of small businesses that use the firm’s software. Their productivity is assessed via how long it takes them to resolve a given issue (“average handle time”), the fraction of total issues a given agent is able to resolve to the customer’s satisfaction (“resolution rate”), and the net number of customers who would recommend the agent (“net promoter score.”)

Line graphs showing handle time, resolution rate and customer satisfaction using AI.

The AI used by the firm is a version of GPT which has received additional training on conversations between customers and agents. It is mostly used for two things: generating appropriate responses to customers in real-time and surfacing links to the firm’s technical documentation to help answer specific questions about the software.

Bear in mind that this generative AI system is meant to help the agents in performing their jobs. It is not intended to – and is not being trained to – completely replace them. They maintain autonomy in deciding whether and how much of the AI’s suggestions to take.

How Did Generative AI Change Work?

Next, we’ll look at what the study actually uncovered.

There were four main findings, touching on how total worker productivity was impacted, whether productivity gains accrued mainly to low-skill or high-skill workers, how access to an AI tool changed learning on the job, and how the organization changed as a result.

1. Access to Generative AI Boosted Worker Productivity

First, being able to use the firm’s AI tool increased worker productivity by almost 14%. This came from three sources: a reduction in how long it took any given agent to resolve a particular issue, an expansion in the total number of resolutions an agent was able to work on in an hour, and a small jump in the fraction of chats that were completed successfully.

The firm's AI tool increased worker productivity by almost 14%

This boost happened very quickly, showing up in the first month after deployment, growing a little in the second month, and then remaining at roughly that level for the duration of the study.

2. Access to Generative AI Was Most Helpful for Lower-Skilled Agents

Intriguingly, the greatest productivity gains were seen among agents that were relatively low-skill, such as those that were new to the job, with longer-serving, higher-skilled agents seeing virtually none.

The agents in the very bottom quintile for skill level, in fact, were able to resolve 35% more calls per hour—a substantial jump.

The agents in the very bottom quintile for skill level were able to resolve more calls per hour 35%.

With the benefit of hindsight it’s tempting to see these results as obvious, but they’re not. Earlier studies have usually found that the benefits of new computing technologies accrued to the ablest workers, or led firms to raise the bar on skill requirements for different positions.

If it’s true that generative AI is primarily going to benefit less able employees, this fact alone will distinguish it from prior waves of innovation. [1]

3. Access To Generative AI Helps New Workers “Move Down the Learning Curve”

Perhaps the most philosophically interesting conclusion drawn by the study’s authors relates to how generative AI is able to partially learn the tacit knowledge of more skilled workers.

The term “tacit knowledge” refers to the hard-to-articulate behaviors you pick up as you get good at something.

Imagine trying to teach a person how to ride a bike. It’s easy enough to give broad instructions (“check your shoelaces”, “don’t brake too hard”), but there ends up being a billion little subtleties related to foot placement, posture, etc. that are difficult to get into words.

This is true for everything, and it’s part of what distinguishes masters from novices. It’s also a major reason for the fact that many professions have been resistant to full automation.

Remember our discussion of how rule-based programming is poorly suited to tasks where the rules are hard to state? Well, that applies to tasks involving a lot of tacit knowledge. If no one, not even an expert, can tell you precisely what steps to take to replicate their results, then no one is going to be able to program a computer to do it either.

But ML and generative AI don’t face this restriction. With data sets that are big enough and rich enough, the algorithms might be able to capture some of the tacit knowledge expert contact center agents have, e.g. how they phrase replies to customers.

This is suggested by the study’s results. By analyzing the text of customer-agent interactions, the authors found that novice agents using generative AI were able to sound more like experienced agents, which contributed to their success.

4. Access to Generative AI Changed the Way the Organization Functioned

Organizations are profoundly shaped by their workers, and we should expect to see organization-level changes when a new technology dramatically changes how employees operate.

Two major findings from the study were that employee turnover was markedly reduced and there were far fewer customers “escalating” an issue by asking to speak to a supervisor. This could be because agents using generative AI were overall treated much better by customers (who have been known to become frustrated and irate), leading to less stress.

The Contact Center of the Future

Generative AI has already impacted many domains, and this trend will likely only continue going forward. “Generative AI At Work” provides a fascinating glimpse into the way that this technology changed a large contact center by boosting productivity among the least-skilled agents, helping disseminate the hard-won experience of the most-skilled agents, and overall reducing turnover and dissatisfaction.

If this piece has piqued your curiosity about how you can use advanced AI tools for customer-facing applications, schedule a demo of the Quiq conversational CX platform today.

From resolving customer complaints with chatbots to automated text-message follow-ups, we’ve worked hard to build a best-in-class solution for businesses that want to scale with AI.

Let’s see what we can do for you!

[1] See e.g. this quote: “Our paper is related to a large literature on the impact of various forms of technological adoption on worker productivity and the organization of work (e.g. Rosen, 1981; Autor et al., 1998; Athey and Stern, 2002; Bresnahan et al., 2002; Bartel et al., 2007; Acemoglu et al., 2007; Hoffman et al., 2017; Bloom et al., 2014; Michaels et al., 2014; Garicano and Rossi-Hansberg, 2015; Acemoglu and Restrepo, 2020). Many of these studies, particularly those focused on information technologies, find evidence that IT complements higher-skill workers (Akerman et al., 2015; Taniguchi and Yamada, 2022). Bartel et al. (2007) shows that firms that adopt IT tend to use more skilled labor and increase skill requirements for their workers. Acemoglu and Restrepo (2020) study the diffusion of robots and find that the negative employment effects of robots are most pronounced for workers in blue-collar occupations and those with less than a college education. In contrast, we study a different type of technology—generative AI—and find evidence that it most effectively augments lower-skill workers.”

A Guide to Fine-Tuning Pretrained Language Models for Specific Use Cases

Over the past half-year, large language models (LLMs) like ChatGPT have proven remarkably useful for a wide range of tasks, including machine translation, code analysis, and customer interactions in places like contact centers.

For all this power and flexibility, however, it is often still necessary to use fine-tuning to get an LLM to generate high-quality output for specific use cases.

Today, we’re going to do a deep dive into this process, understanding how these models work, what fine-tuning is, and how you can leverage it for your business.

What is a Pretrained Language Model?

First, let’s establish some background context by tackling the question of what pretrained models are and how they work.

The “GPT” in ChatGPT stands for “generative pretrained transformer”, and this gives us a clue as to what’s going on under the hood. ChatGPT is a generative model, meaning its purpose is to create new output; it’s pretrained, meaning that it has already seen a vast amount of text data by the time end users like us get our hands on it; and it’s a transformer, which refers to the fact that it’s built out of billions of transformer modules stacked into layers.

If you’re not conversant in the history of machine learning it can be difficult to see what the big deal is, but pretrained models are a relatively new development. Once upon a time in the ancient past (i.e. 15 or 20 years ago), it was an open question as to whether engineers would be able to pretrain a single model on a dataset and then fine-tune its performance, or whether they would need to approach each new problem by training a model from scratch.

This question was largely resolved around 2013, when image models trained on the ImageNet dataset began sweeping competitions left and right. Since then it has become more common to use pretrained models as a starting point, but we want to emphasize that this approach does not always work. There remain a vast number of important projects for which building a bespoke model is the only way to go.

What is Transfer Learning?

Transfer learning refers to when an agent or system figures out how to solve one kind of problem and then uses this knowledge to solve a different kind of problem. It’s a term that shows up all over artificial intelligence, cognitive psychology, and education theory.

Author, chess master, and martial artist Josh Waitzkin captures the idea nicely in the following passage from his blockbuster book, The Art of Learning:

“Since childhood I had treasured the sublime study of chess, the swim through ever-deepening layers of complexity. I could spend hours at a chessboard and stand up from the experience on fire with insight about chess, basketball, the ocean, psychology, love, art.”

Transfer learning is a broader concept than pretraining, but the two ideas are closely related. In machine learning, competence can be transferred from one domain (generating text) to another (translating between natural languages or creating Python code) by pretraining a sufficiently large model.

What is Fine-Tuning A Pretrained Language Model?

Fine-tuning a pretrained language model occurs when the model is repurposed for a particular task by being shown illustrations of the correct behavior.

If you’re in a whimsical mood, for example, you might give ChatGPT a few dozen limericks so that its future output always has that form.

It’s easy to confuse fine-tuning with a few other techniques for getting optimum performance out of LLMs, so it’s worth getting clear on terminology before we attempt to give a precise definition of fine-tuning.

Fine-Tuning a Language Model v.s. Zero-Shot Learning

Zero-shot learning is whatever you get out of a language model when you feed it a prompt without making any special effort to show it what you want. It’s not technically a form of fine-tuning at all, but it comes up in a lot of these conversations so it needs to be mentioned.

(NOTE: It is sometimes claimed that prompt engineering counts as zero-shot learning, and we’ll have more to say about that shortly.)

Fine-Tuning a Language Model v.s. One-Shot Learning

One-shot learning is showing a language model a single example of what you want it to do. Continuing our limerick example, one-shot learning would be giving the model one limerick and instructing it to format its replies with the same structure.

Fine-Tuning a Language Model v.s. Few-Shot Learning

Few-shot learning is more or less the same thing as one-shot learning, but you give the model several examples of how you want it to act.

How many counts as “several”? There’s no agreed-upon number that we know about, but probably 3 to 5, or perhaps as many as 10. More than this and you’re arguably not doing “few”-shot learning anymore.

Fine-Tuning a Language Model v.s. Prompt Engineering

Large language models like ChatGPT are stochastic and incredibly sensitive to the phrasing of the prompts they’re given. For this reason, it can take a while to develop a sense of how to feed the model instructions such that you get what you’re looking for.

The emerging discipline of prompt engineering is focused on cultivating this intuitive feel. Minor tweaks in word choice, sentence structure, etc. can have an enormous impact on the final output, and prompt engineers are those who have spent the time to learn how to make the most effective prompts (or are willing to just keep tinkering until the output is correct).

Does prompt engineering count as fine-tuning? We would argue that it doesn’t, primarily because we want to reserve the term “fine-tuning” for the more extensive process we describe in the next few sections.

Still, none of this is set in stone, and others might take the opposite view.

Distinguishing Fine-Tuning From Other Approaches

Having discussed prompt engineering and zero-, one-, and few-shot learning, we can give a fuller definition of fine-tuning.

Fine-tuning is taking a pretrained language model and optimizing it for a particular use case by giving it many examples to learn from. How many you ultimately need will depend a lot on your task – particularly how different the task is from the model’s training data and how strict your requirements for its output are – but you should expect it to take on the order of a few dozen or a few hundred examples.

Though it bears an obvious similarity to one-shot and few-shot learning, fine-tuning will generally require more work to come up with enough examples, and you might have to build a rudimentary pipeline that feeds the examples in through the API. It’s almost certainly not something you’ll be doing directly in the ChatGPT web interface.

Contact Us

How Can I Fine-Tune a Pretrained Language Model?

Having gotten this far, we can now turn our attention to what the fine-tuning procedure actually consists in. The basic steps are: deciding what you’re wanting to accomplish, gather the requisite data (and formatting it correctly), feeding it to your model, and evaluating the results.

Let’s discuss each, in turn.

Deciding on Your Use Case

The obvious place to begin is figuring out exactly what it is you want to fine-tune a pretrained model to do.

It may seem as though this is too obvious to be included as its own standalone step, but we’ve singled it out is because you need to think through the specifics of what you’re trying to accomplish. It’s not enough to say “We want to fine-tune this model to write tweets for us”, you have to consider questions like “Should the tone by formal or informal” and “Are we okay with it adding emojis”.

This matters because it informs the data that you gather for fine-tuning. If you feed the model irreverent tweets with lots of 🙂s and 🚀s, you can expect that to show up in the tweets it generates for you.

Picking a Base Model

Next, you have to figure out which model you want to fine-tune. This is not the place to get into a discussion on all the “foundation” models that are available today, but suffice it to say that HuggingFace, OpenAI, Anthropic, and the other companies in the pretrained model space offer a variety of different options. BigScience has BLOOM and T0pp models, OpenAI has Ada, Babbage, Da Vinci, and Curie, and so on.

Different models have different fine-tuning requirements, so this choice will impact how you structure your pipeline later.

Gathering and Formatting the Fine-Tuning Data

The heart of fine-tuning a pretrained model is gathering the samples from which it will learn. This is more or less exactly what it sounds like: you compile a list of prompts and desired responses. This is the data that will adjust the model’s internal parameters, optimizing it for your use case.

AI educator Dave Shapiro has a tutorial on fine-tuning GPT-3 to be a tutor for younger students, and here is one of the training samples he used (“TIM” is the name he’s using for the tutor):

User: Hey TIM, I’m interested in learning more about Charles Darwin. Can you tell me a bit about him?

TIM: Charles Darwin was an English naturalist and geologist who developed the theory of evolution by natural selection. He was born into a wealthy family and was able to attend the prestigious Shrewsbury School. After graduation, he enrolled at the University of Edinburgh to study medicine, but he soon dropped out and enrolled at Christ’s College, Cambridge.

User: Interesting! I didn’t know he was born into a wealthy family.

TIM: Yes, his family was very wealthy and his father was a doctor. Darwin’s grandfather, Erasmus Darwin, was a well-known scientist and wrote several books on botany and evolution.

User: Wow, I had no idea. What made Darwin decide to drop out of medical school and study at Cambridge?

TIM: Darwin was interested in natural history and had developed a keen interest in taxidermy. After dropping out of medical school, he traveled to Wales with some friends to study geology. It was during this time that he began to develop his theory of evolution.

One thing you’ll note is that this sample is pretty long. It could take a while to come up with hundreds of these to fine-tune your model, but that’s actually something a generative pretrained language model like ChatGPT can help with. Just be sure you have a human go over the samples and check them for accuracy, or you risk compromising the quality of your outputs.

Another thing to think about is how you’ll handle adversarial behavior and edge cases. If you’re training a conversational AI chatbot for a contact center, for example, you’ll want to include plenty of instances of the model calmly and politely responding to an irate customer. That way, your output will be similarly calm and polite.

Lastly, you’ll have to format the fine-tuning data according to whatever specifications are required by the base model you’re using. It’ll probably be something similar to JSON, but check the documentation to be sure.

Feeding it to Your Model

Now that you’ve got your samples ready, you’ll have to give them to the model for fine-tuning. This will involve you feeding the examples to the model via its API and waiting until the process has finished.

What is the Difference Between Fine-Tuning and a Pretrained Model?

A pretrained model is one that has been previously trained on a particular dataset or task, and fine-tuning is getting that model to do well on a new task by showing it examples of the output you want to see.

Pretrained models like ChatGPT are often pretty good out of the box, but if you’re wanting it to create legal contracts or work with highly-specialized scientific vocabulary, you’ll likely need to fine-tune it.

Should You Fine-Tune a Pretrained Model For Your Business?

Generative pretrained language models like ChatGPT and Bard have already begun to change the way businesses like contact centers function, and we think this is a trend that is likely to accelerate in the years ahead.

If you’ve been intrigued by the possibility of fine-tuning a pretrained model to supercharge your enterprise, then hopefully the information contained in this article gives you some ideas on how to begin.

Another option is to leverage the power of the Quiq platform. We’ve built a best-in-class conversational AI system that can automate substantial parts of your customer interactions (without you needing to run your own models or set up a fine-tuning pipeline.)

To see how we can help, schedule a demo with us today!

Request A Demo

Brand Voice And Tone Building With Prompt Engineering

Artificial intelligence tools like ChatGPT are changing the way strategists are building their brands.

But with the staggering rate of change in the field, it can be hard to know how to utilize its full potential. Should you hire an engineering team? Pay for a subscription and do it yourself?

The truth is, it depends. But one thing you can try is prompt engineering, a term that refers to carefully crafting the instructions you give to the AI to get the best possible results.

In this piece, we’ll cover the basics of prompt engineering and discuss the many ways in which you can build your brand voice with generative AI.

What is Prompt Engineering?

As the name implies, generative AI refers to any machine learning (ML) model whose primary purpose is to generate some output. There are generative AI applications for creating new images, text, code, and music.

There are also ongoing efforts to expand the range of outputs generative models can handle, such as a fascinating project to build a high-level programming language for creating new protein structures.

The way you get output from a generative AI model is by prompting it. Just as you could prompt a friend by asking “How was your vacation in Japan,” you can prompt a generative model by asking it questions and giving it instructions. Here’s an example:

“I’m working on learning Java, and I want you to act as though you’re an experienced Java teacher. I keep seeing terms like `public class` and `public static void`. Can you explain to me the different types of Java classes, giving an example and explanation of each?”

When we tried this prompt with GPT-4, it responded with a lucid breakdown of different Java classes (i.e., static, inner, abstract, final, etc.), complete with code snippets for each one.

When Small Changes Aren’t So Small

Mapping the relationship between human-generated inputs and machine-generated outputs is what the emerging field of “prompt engineering” is all about.

Prompt engineering only entered popular awareness in the past few years, as a direct consequence of the meteoric rise of large language models (LLMs). It rapidly became obvious that GPT-3.5 was vastly better than pretty much anything that had come before, and there arose a concomitant interest in the best ways of crafting prompts to maximize the effectiveness of these (and similar) tools.

At first glance, it may not be obvious why prompt engineering is a standalone profession. After all, how difficult could it be to simply ask the computer to teach you Chinese or explain a coding concept? Why have a “prompt engineer” instead of a regular engineer who sometimes uses GPT-4 for a particular task?

A lot could be said in reply, but the big complication is the fact that a generative AI’s output is extremely dependent upon the input it receives.

An example pulled from common experience will make this clearer. You’ve no doubt noticed that when you ask people different kinds of questions you elicit different kinds of responses. “What’s up?” won’t get the same reply as “I notice you’ve been distant recently, does that have anything to do with losing your job last month?”

The same basic dynamic applies to LLMs. Just as subtleties in word choice and tone will impact the kind of interaction you have with a person, they’ll impact the kind of interaction you have with a generative model.

All this nuance means that conversing with your fellow human beings is a skill that takes a while to develop, and that also holds in trying to productively using LLMs. You must learn to phrase your queries in a way that gives the model good context, includes specific criteria as to what you’re looking for in a reply, etc.

Honestly, it can feel a little like teaching a bright, eager intern who has almost no initial understanding of the problem you’re trying to get them to solve. If you give them clear instructions with a few examples they’ll probably do alright, but you can’t just point them at a task and set them loose.

We’ll have much more to say about crafting the kinds of prompts that help you build your brand voice in upcoming sections, but first, let’s spend some time breaking down the anatomy of a prompt.

This context will come in handy later.

What’s In A Prompt?

In truth, there are very few real restrictions on how you use an LLM. If you ask it to do something immoral or illegal it’ll probably respond along the lines of “I’m sorry Dave, but as a large language model I can’t let you do that,” otherwise you can just start feeding it text and seeing how it responds.

That having been said, prompt engineers have identified some basic constituent parts that go into useful prompts. They’re worth understanding as you go about using prompt engineering to build your brand voice.

Context

First, it helps to offer the LLM some context for the task you want done. Under most circumstances, it’s enough to give it a sentence or two, though there can be instances in which it makes sense to give it a whole paragraph.

Here’s an example prompt without good context:

“Can you write me a title for a blog post?”

Most human beings wouldn’t be able to do a whole lot with this, and neither can an LLM. Here’s an example prompt with better context:

“I’ve just finished a blog post for a client that makes legal software. It’s about how they have the best payments integrations, and the tone is punchy, irreverent, and fun. Could you write me a title for the post that has the same tone?”

To get exactly what you’re looking for you may need to tinker a bit with this prompt, but you’ll have much better chances with the additional context.

Instructions

Of course, the heart of the matter is the actual instructions you give the LLM. Here’s the context-added prompt from the previous section, whose instructions are just okay:

“I’ve just finished a blog post for a client that makes legal software. It’s about how they have the best payments integrations, and the tone is punchy, irreverent, and fun. Could you write me a title for the post that has the same tone?”

A better way to format the instructions is to ask for several alternatives to choose from:

“I’ve just finished a blog post for a client that makes legal software. It’s about how they have the best payments integrations, and the tone is punchy, irreverent, and fun. Could you give me 2-3 titles for the blog post that have the same tone?”

Here again, it’ll often pay to go through a couple of iterations. You might find – as we did when we tested this prompt – that GPT-4 is just a little too irreverent (it used profanity in one of its titles.) If you feel like this doesn’t strike the right tone for your brand identity you can fix it by asking the LLM to be a bit more serious, or rework the titles to remove the profanity, etc.

You may have noticed that “keep iterating and testing” is a common theme here.

Example Data

Though you won’t always need to get the LLM input data, it is sometimes required (as when you need it to summarize or critique an argument) and is often helpful (as when you give it a few examples of titles you like.)

Here’s the reworked prompt from above, with input data:

“I’ve just finished a blog post for a client that makes legal software. It’s about how they have the best payments integrations, and the tone is punchy, irreverent, and fun. Could you give me 2-3 titles for the blog post that have the same tone?

Here’s a list of two titles that strike the right tone:
When software goes hard: dominating the legal payments game.
Put the ‘prudence’ back in ‘jurisprudence’ by streamlining your payment collections.”

Remember, LLMs are highly sensitive to what you give them as input, and they’ll key off your tone and style. Showing them what you want dramatically boosts the chances that you’ll be able to quickly get what you need.

Output Indicators

An output indicator is essentially any concrete metric you use to specify how you want the output to be structured. Our existing prompt already has one, and we’ve added another (both are bolded):

“I’ve just finished a blog post for a client that makes legal software. It’s about how they have the best payments integrations, and the tone is punchy, irreverent, and fun. Could you give me 2-3 titles for the blog post that have the same tone? Each title should be approximately 60 characters long.

Here’s a list of two titles that strike the right tone:
When software goes hard: dominating the legal payments game.
Put the ‘prudence’ back in ‘jurisprudence’ by streamlining your payment collections.”

As you go about playing with LLMs and perfecting the use of prompt engineering in building your brand voice, you’ll notice that the models don’t always follow these instructions. Sometimes you’ll ask for a five-sentence paragraph that actually contains eight sentences, or you’ll ask for 10 post ideas and get back 12.

We’re not aware of any general way of getting an LLM to consistently, strictly follow instructions. Still, if you include good instructions, clear output indicators, and examples, you’ll probably get close enough that only a little further tinkering is required.

What Are The Different Types of Prompts You Can Use For Prompt Engineering?

Though prompt engineering for tasks like brand voice and tone building is still in its infancy, there are nevertheless a few broad types of prompts that are worth knowing.

  • Zero-shot prompting: A zero-shot prompt is one in which you simply ask directly for what you want without providing any examples. It’ll simply generate an output on the basis of its internal weights and prior training, and, surprisingly, this is often more than sufficient.
  • One-shot prompting: With a one-shot prompt, you’re asking the LLM for output and giving it a single example to learn from.
  • Few-shot prompting: Few-shot prompts involve a least a few examples of expected output, as in the two titles we provided our prompt when we asked it for blog post titles.
  • Chain-of-thought prompting: Chain-of-thought prompting is similar to few-shot prompting, but with a twist. Rather than merely giving the model examples of what you want to see, you craft your examples such that they demonstrate a process of explicit reasoning. When done correctly, the model will actually walk through the process it uses to reason about a task. Not only does this make its output more interpretable, but it can also boost accuracy in domains at which LLMs are notoriously bad, like addition.

What Are The Challenges With Prompt Engineering For Brand Voice?

We don’t use the word “dazzling” lightly around here, but that’s the best way of describing the power of ChatGPT and the broader ecosystem of large language models.

You would be hard-pressed to find many people who have spent time with one and come away unmoved.

Still, challenges remain, especially when it comes to using prompt engineering for content marketing or building your brand voice.

One well-known problem is the tendency of LLMs to completely make things up, a phenomenon referred to as “hallucination”. The internet is now filled with examples of ChatGPT completely fabricating URLs, books, papers, professions, and individuals. If you use an LLM to create content for your website and don’t thoroughly vet it, you run the risk of damaging your reputation and your brand if it contains false or misleading information.

A related problem is legal or compliance issues that emerge as a result of using an LLM. Though the technology hasn’t been around long enough to get anyone into serious trouble (we suspect it won’t be long), there are now cases in which attorneys have been caught using faulty research generated by ChatGPT or engineering teams have leaked proprietary secrets by feeding meeting notes into it.

Finally, if you’re offering a fine-tuned model to customers to do something like answer questions, you must be very, very careful in delimiting its scope so that it doesn’t generate unwanted behavior. It’s pretty easy to accidentally wander into fraught territory when engaging with an LLM in an open-ended manner, and that’s not even counting users who deliberately try to get it to respond inappropriately.

One potential solution to this problem is by crafting your prompts such that they contain clear instructions about what not to do. You may tell it not to discuss its own rules, not to change its tone, not to speak negatively about anyone, not to argue, etc.

Crafting a prompt that illustrates the correct behavior while explicitly ruling out any incorrect behaviors is a non-trivial task, requiring a great deal of testing and refinement. But it’s one you’ll have to get right if you want to leverage LLMs for your brand voice while avoiding any possible harm down the line.

Prompt Engineering For Marketers

With all this context having been established, you’re no doubt excited about using prompt engineering in your marketing efforts. In this section, we’ll go through a couple of ways you can do that.

Coming up with ideas

One place where LLMs excel is in helping you with ideation. A lot goes into content marketing, but all of that follows from having a solid list of basic concepts to work from.

We tested ChatGPT’s functionality by feeding it this query:

“I’m starting a blog about becoming a data scientist. Could you give me a list of 10 blog post ideas, with a mix of beginner, intermediate, and advanced topics? The titles should be eye-catching but not too aggressive.

Here are two examples of good blog post titles:

‘Data science: what to know when considering a career in this hot new field.’
‘Data science v.s. data engineering: everything you wanted to know’”

And we got back some excellent ideas:

  1. “Demystifying Data Science: A Beginner’s Guide to Jumpstarting Your Career”
  2. “From Novice to Ninja: Essential Skills for Every Data Science Aspirant”
  3. “Unlocking the Power of Python: A Data Scientist’s Best Friend”
  4. “Exploratory Data Analysis: Unveiling Insights Hidden in the Numbers”
  5. “Machine Learning 101: A Gentle Introduction for Data Science Enthusiasts”
  6. “Feature Engineering: Enhancing Your Models Like a Pro”
  7. “The Art of Natural Language Processing: Extracting Meaning from Text”
  8. “Unsupervised Learning: Unveiling Patterns and Structure in Your Data”
  9. “Deep Dive into Neural Networks: Building Advanced Models for Predictive Analytics”
  10. “Data Science in the Real World: Tackling Complex Business Problems with Analytics”

And look, it even generated the correct number of titles! You’ll notice, though that every single one follows the same template, with an initial phrase, a colon, and then a follow-on phrase. That’s because the two examples I gave have this structure, and that’s what it used as a guide.

Performing Background Research

After you’re done coming up with good ideas for building your brand voice, you’ll likely want to do some additional background research on your domain. This, too, is a place where LLMs excel.

There can be a lot of subtlety to this. You might start with something obvious, like “give me a list of the top authors in the keto diet niche”, but you can also get more creative than this. We’ve heard of copywriters who have used GPT-3.5 to generate lengthy customer testimonials for fictional products, or diary entries for i.e. 40-year-old suburban dads who are into DIY home improvement projects.

Regardless, with a little bit of ingenuity, you can generate a tremendous amount of valuable research that can inform your attempts to develop a brand voice.

Be careful, though; this is one place where model hallucinations could be really problematic. Be sure to manually check a model’s outputs before using them for anything critical.

Generating Actual Content

Of course, one place where content marketers are using LLMs more often is in actually writing full-fledged content. We’re of the opinion that GPT-3.5 is still not at the level of a skilled human writer, but it’s excellent for creating outlines, generating email blasts, and writing relatively boilerplate introductions and conclusions.

Getting better at prompt engineering

Despite the word “engineering” in its title, prompt engineering remains as much an art as it is a science. Hopefully, the tips we’ve provided here will help you structure your prompts in a way that gets you good results, but there’s no substitute for practicing the way you interact with LLMs.

One way to approach this task is by paying careful attention to the ways in which small word choices impact the kinds of output generated. You could begin developing an intuitive feel for the relationship between input text and output text by simply starting multiple sessions with ChatGPT and trying out slight variations of prompts. If you really want to be scientific about it, copy everything over into a spreadsheet and look for patterns. Over time, you’ll become more and more precise in your instructions, just as an experienced teacher or manager does.

Prompt Engineering Can Help You Build Your Brand

Advanced AI models like ChatGPT are changing the way SEO, content marketing, and brand strategy are being done. From creating buyer personas to using chatbots for customer interactions, these tools can help you get far more work done with less effort.

But you have to be cautious, as LLMs are known to hallucinate information, change their tone, and otherwise behave inappropriately.

With the right prompt engineering expertise, these downsides can be ameliorated, and you’ll be on your way to building a strong brand. If you’re interested in other ways AI tools can take your business to the next level, schedule a demo of Quiq’s conversational CX platform today!

Contact Us

LLMs For the Enterprise: How to Protect Brand Safety While Building Your Brand Persona

It’s long been clear that advances in artificial intelligence change how businesses operate. Whether it’s extremely accurate machine translation, chatbots that automate customer service tasks, or spot-on recommendations for music and shows, enterprises have been using advanced AI systems to better serve their customers and boost their bottom line for years.

Today the big news is generative AI, with large language models (LLMs) in particular capturing the imagination. As we’d expect, businesses in many different industries are enthusiastically looking at incorporating these tools into their workflows, just as prior generations did for the internet, computers, and fax machines.

But this alacrity must be balanced with a clear understanding of the tradeoffs involved. It’s one thing to have a language model answer simple questions, and quite another to have one engaging in open-ended interactions with customers involving little direct human oversight.

If you have an LLM-powered application and it goes off the rails, it could be mildly funny, or it could do serious damage to your brand persona. You need to think through both possibilities before proceeding.

This piece is intended as a primer on effectively using LLMs for the enterprise. If you’re considering integrating LLMs for specific applications and aren’t sure how to weigh the pros and cons, it will provide invaluable advice on the different options available while furnishing the context you need to decide which is the best fit for you.

How Are LLMs Being Used in Business?

LLMs like GPT-4 are truly remarkable artifacts. They’re essentially gigantic neural networks with billions of internal parameters, trained on vast amounts of text data from books and the internet.

Once they’re ready to go, they can be used to ask and answer questions, suggest experiments or research ideas, write code, write blog posts, and perform many other tasks.

Their flexibility, in fact, has come as quite a surprise, which is why they’re showing up in so many places. Before we talk about specific strategies for integrating LLMs into your enterprise, let’s walk through a few business use cases for the technology.

Generating (or rewriting) text

The obvious use case is generating text. GPT-4 and related technologies are very good at writing generic blog posts, copy, and emails. But they’ve also proven useful in more subtle tasks, like producing technical documentation or explaining how pieces of code work.

Sometimes it makes sense to pass this entire job on to LLMs, but in other cases, they can act more like research assistants, generating ideas or taking human-generated bullet points and expanding on them. It really depends on the specifics of what you’re trying to accomplish.

Conversational AI

A subcategory of text generation is using an LLM as a conversational AI agent. Clients or other interested parties may have questions about your product, for instance, and many of them can be answered by a properly fine-tuned LLM instead of by a human. This is a use case where you need to think carefully about protecting your brand persona because LLMs are flexible enough to generate inappropriate responses to questions. You should extensively test any models meant to interact with customers and be sure your tests include belligerent or aggressive language to verify that the model continues to be polite.

Summarizing content

Another place that LLMs have excelled is in summarizing already-existing text. This, too, is something that once would’ve been handled by a human, but can now be scaled up to the greater speed and flexibility of LLMs. People are using LLMs to summarize everything from basic articles on the internet to dense scientific and legal documents (though it’s worth being careful here, as they’re known to sometimes include inaccurate information in these summaries.)

Answering questions

Though it might still be a while before ChatGPT is able to replace Google, it has become more common to simply ask it for help rather than search for the answer online. Programmers, for example, can copy and paste the error messages produced by their malfunctioning code into ChatGPT to get its advice on how to proceed. The same considerations around protecting brand safety that we mentioned in the ‘conversational AI’ section above apply here as well.

Classification

One way to get a handle on a huge amount of data is to use a classification algorithm to sort it into categories. Once you know a data point belongs in a particular bucket you already know a fair bit about it, which can cut down on the amount of time you need to spend on analysis. Classifying documents, tweets, etc. is something LLMs can help with, though at this point a fair bit of technical work is required to get models like GPT-3 to reliably and accurately handle classification tasks.

Sentiment analysis

Sentiment analysis refers to a kind of machine learning in which the overall tone of a piece of text is identified (i.e. is it happy, sarcastic, excited, etc.) It’s not exactly the same thing as classification, but it’s related. Sentiment analysis shows up in many customer-facing applications because you need to know how people are responding to your new brand persona or how they like an update to your core offering, and this is something LLMs have proven useful for.

What Are the Advantages of Using LLMs in Business?

More and more businesses are investigating LLMs for their specific applications because they confer many advantages to those that know how to use them.

For one thing, LLMs are extremely well-suited for certain domains. Though they’re still prone to hallucinations and other problems, LLMs can generate high-quality blog posts, emails, and general copy. At present, the output is usually still not as good as what a skilled human can produce.

But LLMs can generate text so quickly that it often makes sense to have the first draft created by a model and tweaked by a human, or to have relatively low-effort tasks (like generating headlines for social media) delegated to a machine so a human writer can focus on more valuable endeavors.

For another, LLMs are highly flexible. It’s relatively straightforward to take a baseline LLM like GPT-4 and feed it examples of behavior you want to see, such as generating math proofs in the form of poetry (if you’re into that sort of thing.) This can be done with prompt engineering or with a more sophisticated pipeline involving the model’s API, but in either case, you have the option of effectively pointing these general-purpose tools at specific tasks.

None of this is to suggest that LLMs are always and everywhere the right tool for the job. Still, in many domains, it makes sense to examine using LLMs for the enterprise.

What Are the Disadvantages of Using LLMs in Business?

For all their power, flexibility, and jaw-dropping speed, there are nevertheless drawbacks to using LLMs.

One disadvantage of using LLMs in business that people are already familiar with is the variable quality of their output. Sometimes, the text generated by an LLM is almost breathtakingly good. But LLMs can also be biased and inaccurate, and their hallucinations – which may not be a big deal for SEO blog posts – will be a huge liability if they end up damaging your brand.

Exacerbating this problem is the fact that no matter how right or wrong GPT-4 is, it’ll format its response in flawless, confident prose. You might expect a human being who doesn’t understand medicine very well to misspell a specialized word like “Umeclidinium bromide”, and that would offer you a clue that there might be other inaccuracies. But that essentially never happens with an LLM, so special diligence must be exercised in fact-checking their claims.

There can also be substantial operational costs associated with training and using LLMs. If you put together a team to build your own internal LLM you should expect to spend (at least) hundreds of thousands of dollars getting it up and running, to say nothing of the ongoing costs of maintenance.

Of course, you could also build your applications around API calls to external parties like OpenAI, who offer their models’ inferences as an endpoint. This is vastly cheaper, but it comes with downsides of its own. Using this approach means being beholden to another entity, which may release updates that dramatically change the performance of their models and materially impact your business.

Perhaps the biggest underlying disadvantage to using LLMs, however, is their sheer inscrutability. True, it’s not that hard to understand at a high level how models like GPT-4 are trained. But the fact remains that no one really understands what’s happening inside of them. It’s usually not clear why tiny changes to a prompt can result in such wildly different outputs, for example, or why a prompt will work well for a while before performance suddenly starts to decline.

Perhaps you just got unlucky – these models are stochastic, after all – or perhaps OpenAI changed the base model. You might not be able to tell, and either way, it’s hard to build robust, long-range applications around technologies that are difficult to understand and predict.

Contact Us

How Can LLMs Be Integrated Into Enterprise Applications?

If you’ve decided you want to integrate these groundbreaking technologies into your own platforms, there are two basic ways you can proceed. Either you can use a 3rd-party service through an API, or you can try to run your own models instead.

In the following two sections, we’ll cover each of these options and their respective tradeoffs.

Using an LLM through an API

An obvious way of leveraging the power of LLMs is by simply including API calls to a platform that specializes in them, such as OpenAI. Generally, this will involve creating infrastructure that is able to pass a prompt to an LLM and return its output.

If you’re building a user-facing chatbot through this method, that would mean that whenever the user types a question, their question is sent to the model and its response is sent back to the user.

The advantages of this approach are that they offer an extremely low barrier to entry, low costs, and fast response times. Hitting an API is pretty trivial as engineering tasks go, and though you’re charged per token, the bill will surely be less than it would be to stand up an entire machine-learning team to build your own model.

But, of course, the danger is that you’re relying on someone else to deliver crucial functionality. If OpenAI changes its terms of service or simply goes bankrupt, you could find yourself in a very bad spot.

Another disadvantage is that the company running the model may have access to the data you’re passing to its models. A team at Samsung recently made headlines when it was discovered they’d been plowing sensitive meeting notes and proprietary source code directly into ChatGPT, where both were viewable by OpenAI. You should always be careful about the data you’re exposing, particularly if it’s customer data whose privacy you’ve been entrusted to protect.

Running Your Own Model

The way to ameliorate the problems of accessing an LLM through an API is to either roll your own or run an open-source model in an environment that you control.

Building the kind of model that can compete with GPT-4 is really, really difficult, and it simply won’t be an option for any but the most elite engineering teams.

Using an open-source LLM, however, is a much more viable option. There are now many such models for text or code generation, and they can be fine-tuned for the specifics of your use case.

By and large, open-source models tend to be smaller and less performant than their closed-source cousins, so you’ll have to decide whether they’re good enough for you. And you should absolutely not underestimate the complexity of maintaining an open-sourced LLM. Though it’s nowhere near as hard as training one from scratch, maintaining an advanced piece of AI software is far from a trivial task.

All that having been said, this is one path you can take if you have the right applications in mind and the technical skills to pull it off.

How to Protect Brand Safety While Building Your Brand Persona

Throughout this piece, we’ve made mention of various ways in which LLMs can help supercharge your business while also warning of the potential damage a bad LLM response can do to your brand.

At present, there is no general-purpose way of making sure an LLM only does good things while never doing bad things. They can be startlingly creative, and with that power comes the possibility that they’ll be creative in ways you’d rather them not be (same as children, we suppose.)

Still, it is possible to put together an extensive testing suite that substantially reduces the possibility of a damaging incident. You need to feed the model many different kinds of interactions, including ones that are angry, annoyed, sarcastic, poorly spelled or formatted, etc., to see how it behaves.

What’s more, this testing needs to be ongoing. It’s not enough to run a test suite one weekend and declare the model fit for use, it needs to be periodically re-tested to ensure no bad behavior has emerged.

With these techniques, you should be able to build a persona as a company on the cutting edge while protecting yourself from incidents that damage your brand.

What Is the Future of LLMs and AI?

The business world moves fast, and if you’re not keeping up with the latest advances you run the risk of being left behind. At present, large language models like GPT-4 are setting the world ablaze with discussions of their potential to completely transform fields like customer experience chatbots.

If you want in on the action and you have the in-house engineering expertise, you could try to create your own offering. But if you would rather leverage the power of LLMs for chat-based applications by working with a world-class team that’s already done the hard engineering work, reach out to Quiq to schedule a demo.

Request A Demo

Semi-Supervised Learning Explained (With Examples)

From movie recommendations to chatbots as customer service reps, it seems like machine learning (ML) is absolutely everywhere. But one thing you may not realize is just how much data is required to train these advanced systems, and how much time and energy goes into formatting that data appropriately.

Machine learning engineers have developed many ways of trying to cut down on this bottleneck, and one of the techniques that have emerged from these efforts is semi-supervised learning.

Today, we’re going to discuss semi-supervised learning, how it works, and where it’s being applied.

What is Semi-Supervised Learning?

Semi-supervised learning (SSL) is an approach to machine learning (ML) that is appropriate for tasks where you have a large amount of data that you want to learn from, only a fraction of which is labeled.

Semi-supervised learning sits somewhere between supervised and unsupervised learning, and we’ll start by understanding these techniques because that will make it easier to grasp how semi-supervised learning works.

Supervised learning refers to any ML setup in which a model learns from labeled data. It’s called “supervised” because the model is effectively being trained by showing it many examples of the right answer.

Suppose you’re trying to build a neural network that can take a picture of different plant species and classify them. If you give it a picture of a rose it’ll output the “rose” label, if you give it a fern it’ll output the “fern” label, and so on.

The way to start training such a network is to assemble many labeled images of each kind of plant you’re interested in. You’ll need dozens or hundreds of such images, and they’ll each need to be labeled by a human.

Then, you’ll assemble these into a dataset and train your model on it. What the neural network will do is learn some kind of function that maps features in the image (the concentrations of different colors, say, or the shape of the stems and leaves) to a label (“rose”, “fern”.)

One drawback to this approach is that it can be slow and extremely expensive, both in funds and in time. You could probably put together a labeled dataset of a few hundred plant images in a weekend, but what if you’re training something more complex, where the stakes are higher? A model trained to spot breast cancer from a scan will need thousands of images, perhaps tens of thousands. And not just anyone can identify a cancerous lump, you’ll need a skilled human to look at the scan to label it “cancerous” and “non-cancerous.”

Unsupervised learning, by contrast, requires no such labeled data. Instead, an unsupervised machine learning algorithm is able to ingest data, analyze its underlying structure, and categorize data points according to this learned structure.

Semi-supervised learning

Okay, so what does this mean? A fairly common unsupervised learning task is clustering a corpus of documents thematically, and let’s say you want to do this with a bunch of different national anthems (hey, we’re not going to judge you for how you like to spend your afternoons!).

A good, basic algorithm for a task like this is the k-means algorithm, so-called because it will sort documents into k categories. K-means begins by randomly initializing k “centroids” (which you can think of as essentially being the center value for a given category), then moving these centroids around in an attempt to reduce the distance between the centroids and the values in the clusters.

This process will often involve a lot of fiddling. Since you don’t actually know the optimal number of clusters (remember that this is an unsupervised task), you might have to try several different values of k before you get results that are sensible.

To sort our national anthems into clusters you’ll have to first pre-process the text in various ways, then you’ll run it through the k-means clustering algorithm. Once that is done, you can start examining the clusters for themes. You might find that one cluster features words like “beauty”, “heart” and “mother”, another features words like “free” and “fight”, another features words like “guard” and “honor”, etc.

As with supervised learning, unsupervised learning has drawbacks. With a clustering task like the one just described, it might take a lot of work and multiple false starts to find a value of k that gives good results. And it’s not always obvious what the clusters actually mean. Sometimes there will be clear features that distinguish one cluster from another, but other times they won’t correspond to anything that’s easily interpretable from a human perspective.

Semi-supervised learning, by contrast, combines elements of both of these approaches. You start by training a model on the subset of your data that is labeled, then apply it to the larger unlabeled part of your data. In theory, this should simultaneously give you a powerful predictive model that is able to generalize to data it hasn’t seen before while saving you from the toil of creating thousands of your own labels.

How Does Semi-Supervised Learning Work?

We’ve covered a lot of ground, so let’s review. Two of the most common forms of machine learning are supervised learning and unsupervised learning. The former tends to require a lot of labeled data to produce a useful model, while the latter can soak up a lot of hours in tinkering and yield clusters that are hard to understand. By training a model on a labeled subset of data and then applying it to the unlabeled data, you can save yourself tremendous amounts of effort.

But what’s actually happening under the hood?

Three main variants of semi-supervised learning are self-training, co-training, and graph-based label propagation, and we’ll discuss each of these in turn.

Self-training

Self-training is the simplest kind of semi-supervised learning, and it works like this.

A small subset of your data will have labels while the rest won’t have any, so you’ll begin by using supervised learning to train a model on the labeled data. With this model, you’ll go over the unlabeled data to generate pseudo-labels, so-called because they are machine-generated and not human-generated.

Now, you have a new dataset; a fraction of it has human-generated labels while the rest contains machine-generated pseudo-labels, but all the data points now have some kind of label and a model can be trained on them.

Co-training

Co-training has the same basic flavor as self-training, but it has more moving parts. With co-training you’re going to train two models on the labeled data, each on a different set of features (in the literature these are called “views”.)

If we’re still working on that plant classifier from before, one model might be trained on the number of leaves or petals, while another might be trained on their color.

At any rate, now you have a pair of models trained on different views of the labeled data. These models will then generate pseudo-labels for all the unlabeled datasets. When one of the models is very confident in its pseudo-label (i.e., when the probability it assigns to its prediction is very high), that pseudo-label will be used to update the prediction of the other model, and vice versa.

Let’s say both models come to an image of a rose. The first model thinks it’s a rose with 95% probability, while the other thinks it’s a tulip with a 68% probability. Since the first model seems really sure of itself, its label is used to change the label on the other model.

Think of it like studying a complex subject with a friend. Sometimes a given topic will make more sense to you, and you’ll have to explain it to your friend. Other times they’ll have a better handle on it, and you’ll have to learn from them.

In the end, you’ll both have made each other stronger, and you’ll get more done together than you would’ve done alone. Co-training attempts to utilize the same basic dynamic with ML models.

Graph-based semi-supervised learning

Another way to apply labels to unlabeled data is by utilizing a graph data structure. A graph is a set of nodes (in graph theory we call them “vertices”) which are linked together through “edges.” The cities on a map would be vertices, and the highways linking them would be edges.

If you put your labeled and unlabeled data on a graph, you can propagate the labels throughout by counting the number of pathways from a given unlabeled node to the labeled nodes.

Imagine that we’ve got our fern and rose images in a graph, together with a bunch of other unlabeled plant images. We can choose one of those unlabeled nodes and count up how many ways we can reach all the “rose” nodes and all the “fern” nodes. If there are more paths to a rose node than a fern node, we classify the unlabeled node as a “rose”, and vice versa. This gives us a powerful alternative means by which to algorithmically generate labels for unlabeled data.

Contact Us

Semi-Supervised Learning Examples

The amount of data in the world is increasing at a staggering rate, while the number of human-hours available for labeling it all is increasing at a much less impressive clip. This presents a problem because there’s no end to the places where we want to apply machine learning.

Semi-supervised learning presents a possible solution to this dilemma, and in the next few sections, we’ll describe semi-supervised learning examples in real life.

  • Identifying cases of fraud: In finance, semi-supervised learning can be used to train systems for identifying cases of fraud or extortion. Rather than hand-labeling thousands of individual instances, engineers can start with a few labeled examples and proceed with one of the semi-supervised learning approaches described above.
  • Classifying content on the web: The internet is a big place, and new websites are put up all the time. In order to serve useful search results it’s necessary to classify huge amounts of this web content, which can be done with semi-supervised learning.
  • Analyzing audio and images: This is perhaps the most popular use of semi-supervised learning. When audio files or image files are generated they’re often not labeled, which makes it difficult to use them for machine learning. Beginning with a small subset of human-labeled data, however, this problem can be overcome.

How Is Semi-Supervised Learning Different From…?

With all the different approaches to machine learning, it can be easy to confuse them. To make sure you fully understand semi-supervised learning, let’s take a moment to distinguish it from similar techniques.

Semi-Supervised Learning vs Self-Supervised Learning

With semi-supervised learning you’re training a model on a subset of labeled data and then using this model to process the unlabeled data. Self-supervised learning is different in that it’s showing an algorithm some fraction of the data (say the first 80 words in a paragraph) and then having it predict the remainder (the other 20 words in a paragraph.)

Self-supervised learning is how LLMs like GPT-4 are trained.

Semi-Supervised Learning vs Reinforcement Learning

One interesting subcategory of ML we haven’t discussed yet is reinforcement learning (RL). RL involves leveraging the mathematics of sequential decision theory (usually a Markov Decision Process) to train an agent to interact with its environment in a dynamic, open-ended way.

It bears little resemblance to semi-supervised learning, and the two should not be confused.

Semi-Supervised Learning vs Active Learning

Active learning is a type of semi-supervised learning. The big difference is that, with active learning, the algorithm will send its lowest-confidence pseudo-labels to a human for correction.

When Should You Use Semi-Supervised Learning?

Semi-supervised learning is a way of training ML models when you only have a small amount of labeled data. By training the model on just the labeled subset of data and using it in a clever way to label the rest, you can avoid the difficulty of having a human being label everything.

There are many situations in which semi-supervised learning can help you make use of more of your data. That’s why it has found widespread use in domains as diverse as document classification, fraud, and image identification.

So long as you’re considering ways of using advanced AI systems to take your business to the next level, check out our generative AI resource hub to go even deeper. This technology is changing everything, and if you don’t want to be left behind, set up a time to talk with us.

Request A Demo

How Large Language Models Have Evolved

It seems as though large language models (LLMs) exploded into public awareness almost overnight. Relatively few people had heard of GPT-2, but I would venture to guess relatively few people haven’t heard of ChatGPT.

But like most things, language models have a history. And, in addition to being outrageously interesting, that history can help us reason about the progress in LLMs, as well as their likely future impacts.

Let’s get started!

A Brief History of Artificial Intelligence Development

The human fascination with building artificial beings capable of thought and action goes back a long way. Writing in roughly the 8th century BCE, Homer recounts tales of the god Hephaestus outsourcing repetitive manual tasks to automated bellows and working alongside robot-like “attendants” that were “…golden, and in appearance like living young women.”

No mere adornments, these handmaidens were described as having “intelligence in their hearts” and stirring “nimbly in support of their master” because “from the immortal gods they have learned how to do things.”

Some 500 years later, mathematicians in Alexandria would produce treatises on creating mechanical servants and various kinds of automata. Heron wrote a technical manual for producing a mechanical shrine and an automated theatre whose figurines could be activated to stage a full tragic play through an intricate system of cords and axles.

Nor is it only ancient Greece that tells similar tales. Jewish legends speak of the Golem, a being made of clay and imbued with life and agency through the use of language. The word “abracadabra”, in fact, comes from the Aramaic phrase avra k’davra, which translates to “I create as I speak.”

Through the ages, these old ideas have found new expression in stories such as “The Sorcerer’s Apprentice”, Mary Shelley’s “Frankenstein”, and Karel Čapek’s “R.U.R.”, a science fiction play that features the first recorded use of the word “robot”.

From Science Fiction to Science Fact

But they remained purely fiction until the early 20th Century, when advances in the theory of computation, as well as the development of primitive computers, began to offer a path toward actually building intelligent systems.

Arguably, the field of artificial intelligence really began in earnest with the 1950 publication of Alan Turing’s “Computing Machinery and Intelligence” – in which he proposed the famous “Turing test” – and with the 1956 Dartmouth conference on AI, organized by luminaries John McCarthy and Marvin Minsky.

People began taking AI seriously. Over the next ~50 years, there were numerous periods of hype and exuberance in which major advances were made, as well as long stretches, known as “AI winters”, in which funding dried up and little was accomplished.

Neural networks and the deep learning revolution are two advances that are particularly important for understanding how large language models have evolved over time, so it’s to these that we now turn.

Neural Networks And The Deep Learning Revolution

The groundwork for future LLM systems was laid by Walter Pitts and Warren McCulloch in the early 1940s. Inspired by the burgeoning study of the human brain, they wondered if it would be possible to build an artificial neuron that had the same basic properties as a biological one, i.e. it would activate and fire once a certain critical threshold had been crossed.

They were successful, though several other breakthroughs would be required before artificial neurons could be arranged into systems that were capable of doing useful work. One such breakthrough was backpropagation, the basic algorithm that is still used to train deep learning systems. Backpropagation was developed in 1960, and it uses the errors in a model’s outputs to iteratively adjust its internal parameters.

It wasn’t until 1985, however, that David Rumelhart, Ronald Williams, and Geoff Hinton used backpropagation in neural networks, and in 1989, this allowed Yann LeCun to train a convolutional neural network to recognize handwritten digits.

This was not the only architectural improvement that came out of this period. Especially noteworthy were the long short-term memory (LSTM) networks that were introduced in 1997 by Sepp Hochreiter and Jürgen Schmidhuber, which made it possible to learn more complex functions.

With these advances, it was clear that neural networks could be trained to do useful work, and that they were poised to do so. All that was left was to gather the missing piece: data.

The Big Data Era

Neural networks and deep-learning applications tend to be extremely data-hungry, and access to quality training data has always been a major bottleneck. In 2009 Stanford’s Fei-Fei Li sought to change this by releasing Imagenet, a database of over 14 million labeled images that could be used for free by researchers. The increase in available data, together with substantial improvements in computer hardware like graphical processing units (GPUs), meant that at long last the promise of deep learning could begin to be fulfilled.

And it was. In 2011 a convolutional neural network called “AlexNet” won multiple international competitions for image recognition, IBM’s Watson system beat several Jeopardy! all-stars in a real game, and Apple launched Siri. Amazon’s Alexa followed in 2014, and from 2015 to 2017 DeepMind’s AlphaGo shocked the world by utterly dominating the best human Go players.

Substantial strides were made in language models. In 2018 Google introduced its Bidirectional Encoder Representations from Transformers (BERT), a pre-trained model capable of a wide array of tasks, like text summarization, translation, and sentiment analysis.

One Model To Rule Them All

It would be easy to miss the significance of AlexNet’s performance on the ImageNet competition or BERT’s usefulness across multiple tasks. For a long time, it was anyone’s guess as to whether it would be possible to train a single large model on a dataset and use it for a range of purposes, or whether it would be necessary to train a multitude of models for each application.

From 2011 onwards, it has become clear that large, general-purpose models are often the best way to go. This point has only become more reinforced, with the success of GPT-4 in everything from brainstorming scientific hypotheses to handling customer service tasks.

Contact Us

How Has Large Language Model Performance Improved?

Now that we’ve discussed this history, we’re well-placed to understand why LLMs and generative AI have ignited so much controversy. People have been mulling over the promise (and peril) of thinking machines for literally thousands of years. After all that time it looks like they might be here, at long last.

But what, exactly, has people so excited? What is it that advanced AI tools are doing that has captured the popular imagination? In the following sections, we’ll talk about the astonishing (and astonishingly rapid) improvements that have been seen in language models in just a few short years.

Getting To Human-Level

One of the more surprising things about LLMs such as ChatGPT is just how good they are at so many different things. LLMs are trained with a technique known as “self-supervised learning”. They take random samples of the text data they’re given, and they try to predict what words come next given the words that came before.

Suppose the model sees the famous opening lines of Leo Tolstoy’s Ann Karenina: “Happy families are all alike; unhappy families are all unhappy in their own way.” What the model is trying to do is learn a function that will allow it to predict “in their own way” from “Happy families are all alike; unhappy families are all unhappy ___”.

The modern crop of LLMs can do this incredibly well, but what is remarkable is just how far this gets you. People are using generative AI to help them write poems, business plans, and code, create recipes based on the ingredients in their fridges, and answer customer questions.

Emergence in Language Models

Perhaps even more interesting, however, is the phenomenon of “emergence” in language models. When researchers tested LLMs on a wide variety of tasks meant to be especially challenging to these models – things like identifying a movie given a string of emojis or finding legal chess moves – they found that in about 5% of tasks, there is a sudden, sharp increase in ability on a given task once a model reaches a certain size.

At present, it’s not really clear how we should think about emergence. One hypothesis for emergence is that a big enough model is able to learn some general piece of knowledge not attainable by a smaller cousin, while another, more prosaic one is that it’s a relatively straightforward consequence of the model’s internal statistical machinery.

What’s more, it’s difficult to pin down the conditions required for emergence in language models. Though it generally appears to be a function of model size, there are cases in which the same abilities can be achieved with smaller models, or with models trained on very high-quality data, and emergence shows up at different scales for different models and tasks.

Whatever ends up being the case, it’s clear that this is a promising direction for future research. Much more work needs to be done to understand how precisely LLMs accomplish what they accomplish. This will not only redound upon the question of emergence, it will also inform the ongoing efforts to make language models safer and less biased.

The GPT Series

The big recent news in AI has, of course, been ChatGPT. ChatGPT has proven useful in an astonishingly-wide variety of use cases and is among the first powerful systems to have been made widely available to the public.

ChatGPT is part of a broader series of GPT models built by OpenAI. “GPT” stands for “generative pre-trained transformer”, and the first of its kind was developed back in 2018. New models and major updates have been released at a rapid clip ever since, culminating with GPT-4 coming out in March of 2023.

At present, OpenAI’s CEO Sam Altman has claimed that there are no current plans to train a successor GPT-5 model, but there are other companies, like DeepMind, who could plausibly build a competitor.

What’s Next For Large Language Models?

Given their flexibility and power, LLMs are finding use across a wide variety of industries, from software engineering to medicine to customer service.

If your interest has been piqued and you’d like to talk to an expert at Quiq about incorporating it into your business, reach out to us to schedule a demo!

Request A Demo

Are Generative AI And Large Language Models The Same Thing?

The release of ChatGPT was one of the first times an extremely powerful AI system was broadly available, and it has ignited a firestorm of controversy and conversation.

Proponents believe current and future AI tools will revolutionize productivity in almost every domain.

Skeptics wonder whether advanced systems like GPT-4 will even end up being all that useful.

And a third group believes they’re the first sparks of artificial general intelligence and could be as transformative for life on Earth as the emergence of homo sapiens.

Frankly, it’s enough to make a person’s head spin. One of the difficulties in making sense of this rapidly-evolving space is the fact that many terms, like “generative AI” and “large language models” (LLMs), are thrown around very casually.

In this piece, our goal is to disambiguate these two terms by discussing ​​the differences between generative AI vs. large language models. Whether you’re pondering deep questions about the nature of machine intelligence, or just trying to decide whether the time is right to use conversational AI in customer-facing applications, this context will help.

Let’s get going!

What Is Generative AI?

Of the two terms, “generative AI” is broader, referring to any machine learning model capable of dynamically creating output after it has been trained.

This ability to generate complex forms of output, like sonnets or code, is what distinguishes generative AI from linear regression, k-means clustering, or other types of machine learning.

Besides being much simpler, these models can only “generate” output in the sense that they can make a prediction on a new data point.

Once a linear regression model has been trained to predict test scores based on number of hours studied, for example, it can generate a new prediction when you feed it the hours a new student spent studying.

But you couldn’t use prompt engineering to have it help you brainstorm the way these two values are connected, which you can do with ChatGPT.

There are many types of generative AI, so let’s spend a few minutes discussing the major categories: image generation, music generation, code generation, and a few others.

How Is Generative AI Used To Make Images?

One of the first “wow” moments in generative AI came fairly recently when it was discovered that tools like Midjourney, DALL-E, and Stable Diffusion could create absolutely stunning images based on simple prompts like:

“Old man in a book store, ambient dappled sunlight, sedate, calm, close-up portrait.”

Depending on the wording you use, these images might be whimsical and futuristic, they might look like paintings from world-class artists, or they might look so photo-realistic you’d be convinced they’re about to start talking.

Created using DALL-E

Each of these tools is suited to specific applications. Midjourney seems to be best at capturing different artistic approaches and generating images that accurately capture an aesthetic. DALL-E tends to do better at depicting human figures, including faces and eyes. Stable Diffusion seems to do well at generating highly-detailed outputs, capturing subtleties like the way light reflects on a rain-soaked street.

(Note: these are all general impressions, it’s difficult to know how the tools will compare on any specific prompt.)

Broadly, this is known as “image synthesis”. And since we’re talking specifically about making images from text, this sub-domain is known as “text-to-image.”

A variant of this technique is text-to-video (alternatively: “text-to-4d”), which produces short clips or scenes based on text prompts. While text-to-video is still much more primitive than text-to-image, it will get better very quickly if recent progress in AI is any guide.

One interesting wrinkle in this story is that generative algorithms have generated something else along with images and animations: legal battles.

Earlier this year, Getty Images filed a lawsuit against the creators of Stable Diffusion, alleging that they trained their algorithm on millions of images from the Getty collection without getting permission first or compensating Getty in any way.

This has raised many profound questions about data rights, privacy, and how (or whether) people should be paid when their work is used to train a model that might eventually automate them out of a job.

We’re still in the early days of grappling with these issues, but they’re sure to make for fascinating case law in the years ahead.

How Is Generative AI Used To Make Music?

Given how successful advanced models have been in generating text (more on that shortly), it’s only natural to wonder whether similar models could also prove useful in generating music.

This is especially true because, on the surface, text and music share many obvious similarities (both are sequential, for example.) It would make sense, therefore, that the technical advances that have allowed coherent text production might also allow for coherent music production.

And they have! There are now a number of different tools, such as MusicLM, which are able to generate fairly high-quality audio tracks from prompts like:

“The main soundtrack of an arcade game. It is fast-paced and upbeat, with a catchy electric guitar riff. The music is repetitive and easy to remember, but with unexpected sounds, like cymbal crashes or drum rolls.”

As with using generative AI in images, creating artificial musical tracks in the style of popular artists has already sparked legal controversies. A particularly memorable example occurred just recently when a TikTok user supposedly created an AI-generated collaboration between Drake and The Weeknd, which then promptly went viral.

The track was removed from all major streaming services in response to backlash from artists and record labels, but it’s clear that ai music generators are going to change the way art is created in a major way.

How Is Generative AI Used For Coding?

It’s long been the dream of both programmers and non-programmers to simply be able to provide a computer with natural-language instructions (“build me a cool website”) and have the machine handle the rest. It would be hard to overstate the explosion in creativity and productivity this would initiate.

With the advent of code-generation models such as Replit’s Ghostwriter and GitHub Copilot, we’ve taken one more step towards that halcyon world.

As is the case with other generative models, code-generation tools are usually trained on massive amounts of data, after which point they’re able to take simple prompts and produce code from them.

You might ask it to write a function that converts between several different coordinate systems, create a web app that measures BMI, or translate from Python to Javascript.

As things stand now, the code is often incomplete in small ways. It might produce a function that takes an argument as input that is never used, for example, or which lacks a return function. Still, it is remarkable what has already been accomplished.

There are now software developers who are using models like ChatGPT all day long to automate substantial portions of their work, to understand new codebases with which they’re unfamiliar, or to write comments and unit tests.

Contact Us

What Are Large Language Models?

Now that we’ve covered generative AI, let’s turn our attention to large language models (LLMs).

LLMs are a particular type of generative AI.

Unlike with MusicLM or DALL-E, LLMs are trained on textual data and then used to output new text, whether that be a sales email or an ongoing dialogue with a customer.

(A technical note: though people are mostly using GPT-4 for text generation, it is an example of a “multimodal” LLM because it has also been trained on images. According to OpenAI’s documentation, image input functionality is currently being tested, and is expected to roll out to the broader public soon.)

What Are Examples of Large Language Models?

By far the most well-known example of an LLM is OpenAI’s “GPT” series, the latest of which is GPT-4. The acronym “GPT” stands for “Generative Pre-Trained Transformer”, and it hints at many underlying details about the model.

GPT models are based on the transformer architecture, for example, and they are pre-trained on a huge corpus of textual data taken predominately from the internet.

GPT, however, is not the only example of an LLM.

The BigScience Large Open-science Open-access Multilingual Language Model – known more commonly by its mercifully-short nickname, “BLOOM” – was built by more than 1,000 AI researchers as an open-source alternative to GPT.

BLOOM is capable of generating text in almost 50 natural languages, and more than a dozen programming languages. Being open-sourced means that its code is freely available, and no doubt there will be many who experiment with it in the future.

In March, Google announced Bard, a generative language model built atop its Language Model for Dialogue Applications (LaMDA) transformer technology.

As with ChatGPT, Bard is able to work across a wide variety of different domains, offering help with planning baby showers, explaining scientific concepts to children, or helping you make lunch based on what you already have in your fridge.

How Are Large Language Models Trained?

A full discussion of how large language models are trained is beyond the scope of this piece, but it’s easy enough to get a high-level view of the process. In essence, an LLM like GPT-4 is fed a huge amount of textual data from the internet. It then samples this dataset and learns to predict what words will follow given what words it has already seen.

At first, its performance will be terrible, but over time it will learn that a sentence like “I sat down on the _____” probably ends with a word like “floor” or “chair”, and probably not a word like “cactus” (at least, we hope you’re not sitting down on a cactus!)

When a model has been trained for long enough on a large enough dataset, you get the remarkable performance seen with tools like ChatGPT.

Is ChatGPT A Large Language Model?

Speaking of ChatGPT, you might be wondering whether it’s a large language model. ChatGPT is a special-purpose application built on top of GPT-3, which is a large language model. GPT-3 was fine-tuned to be especially good at conversational dialogue, and the result is ChatGPT.

Are All Large Language Models Generative AI?

Yes. To the best of our knowledge, all existing large language models are generative AI. “Generative AI” is an umbrella term for algorithms that generate novel output, and the current set of models is built for that purpose.

Utilizing Generative AI In Your Business

Though truly powerful generative AI language models are less than a year old, they’re already being integrated into numerous business applications. Quiq Compose, for example, is able to study past interactions with customers to better tailor its future conversations to their particular needs.

From generating fake viral rap songs to generating photos that are hard to distinguish from real life, these powerful tools have already proven that they can dramatically speed up marketing, software development, and many other crucial business functions.

If you’re an enterprise wondering how you can use advanced AI technologies such as generative AI language models for applications like customer service, schedule a demo to see what the Quiq platform can offer you!

A Deep Dive on Large Language Models—And What They Mean For You

The release of OpenAI’s ChatGPT in late 2022 has utterly transformed the conversation around artificial intelligence. Whether it’s generating functioning web apps with just a few prompts, writing Spanish-language children’s stories about the blockchain in the style of Dr. Suess, or opining on the virtues and vices of major political figures, its ability to generate long strings of coherent, grammatically-correct text is shocking.

Seen in this light, it’s perhaps no surprise that ChatGPT has achieved such a staggering rate of growth. The application garnered a million users less than a week after its launch.

It’s believed that by January of 2023, this figure had climbed to 100 million monthly users, blowing past the adoption rates of TikTok (which needed nine months to get to this many monthly users) and Instagram (which took over two years.)

Naturally, many have become curious about the “large language model” (LLM) technology that makes ChatGPT and similar kinds of disruptive generative AI possible.

In this piece, we’re going to do a deep dive on LLMs, exploring how they’re trained, how they work internally, and how they might be deployed in your business. Our hope is that this will arm Quiq’s customers with the context they need to keep up with the ongoing AI revolution.

What Are Large Language Models?

LLMs are pieces of software with the ability to interact with and generate a wide variety of text. In this discussion, “text” is used very broadly to include not just existing natural language but also computer code.

A good way to begin exploring this subject is to analyze each of the terms in “large language model”, so let’s do that now. Here’s our large language models overview:

LLMs Are Models.

In machine learning (ML), you can think of a model as being a function that maps inputs to outputs. Early in their education, for example, machine learning engineers usually figure out how to fit a linear regression model that does something like predict the final price of a house based on its square footage.

They’ll feed their model a bunch of data points that look like this:

House 1: 800 square feet, $120,000
House 2: 1000 square feet, $175,000
House 3: 1500 square feet, $225,000

And the model learns the relationship between square footage and price well enough to roughly predict the price of homes that weren’t in its training data.

We’ll have a lot more to say about how LLMs are trained in the next section. For now, just be aware that when you get down to it, LLMs are inconceivably vast functions that take the input you feed them and generate a corresponding output.

LLMs Are Large.

Speaking of vastness, LLMs are truly gigantic. As with terms like “big data”, there isn’t an exact, agreed-upon point at which a basic language model becomes a large language model. Still, they’re plenty big enough to deserve the extra “L” at the beginning of their name.

There are a few ways to measure the size of machine learning models, but one of the most common is by looking at their parameters.

In the linear regression model just discussed, there would be only one parameter, for square footage. We could make our model better by also showing it the home’s zip code and the number of bathrooms it has, and then it would have three parameters.

It’s hard to say how big most real systems are because that information isn’t usually made public, but a linear regression model might have dozens of parameters, and a basic neural network could range from a few hundred thousand to a few tens of millions of parameters.

GPT-3 has 175 billion parameters, and Google’s Minerva model has 540 billion parameters. It isn’t known how many parameters GPT-4 has, but it’s almost certainly more.

(Note: I say “almost” certainly because better models don’t always have more parameters. They usually do, but it’s not an ironclad rule.)

LLMs Focus On Language.

ChatGPT and its cousins take text as input and produce text as output. This makes them distinct from some of the image-generation tools that are on the market today, such as DALL-E and Midjourney.

It’s worth noting, however, that this might be changing in the future. Though most of what people are using GPT-4 to do revolves around text, technically, the underlying model is multimodal. This means it can theoretically interact with image inputs as well. According to OpenAI’s documentation, support for this feature should arrive in the coming months.

How Are Large Language Models Trained?

Like all machine learning models, LLMs must be trained. We don’t actually know exactly how OpenAI trained the latest GPT models, as they’ve kept those details secret, but we can make some broad comments about how systems like these are generally trained.

Before we get into technical details, let’s frame the overall task that LLMs are trying to perform as a guessing game. Imagine that I start a sentence and leave out the last word, asking you to provide a guess as to how it ends.

Some of these would be fairly trivial; everyone knows that “[i]t was the best of times, it was the worst of _____,” ends with the word “times.” Others would be more ambiguous; “I stopped to pick a flower, and then continued walking down the ____,” could plausibly end with words like “road”, “street”, or “trail.”

For still others, there’d be an almost infinite number of possibilities; “He turned to face the ___,” could end with anything from “firehose” to “firing squad.”

But how is it that you’re able to generate these guesses? How do you know what a good ending to a natural-language sentence sounds like?

The answer is that you’ve been “training” for this task your entire life. You’ve been listening to sentences, reading and writing sentences, or thinking in sentences for most of your waking hours, and have therefore developed a sense of how they work.

The process of training an LLM differs in many specifics, but at a high level, it’s learning to do the same thing. A model like GPT-4 is fed gargantuan amounts of textual data from the internet or other sources, and it learns a statistical distribution that allows it to predict which words come next.

At first, it’ll have no idea how to end the sentence “[i]t was the best of times, it was the worst of ____.” But as it sees more and more examples of human-generated textual content, it improves. It discovers that when someone writes “red, orange, yellow, green, blue, indigo, ______”, the next sequence of letters is probably “violet”. It begins to be more sensitive to context, discovering that the words “bat”, “diamond”, and “plate” are probably occurring in a discussion about baseball and not the weirdest Costco you’ve ever been to.

It’s precisely this nuance that makes advanced LLMs suitable for applications such as customer service.

They’re not simply looking up pre-digested answers to questions, they’re learning a function big enough to account for the subtleties of a specific customer’s specific problem. They still don’t do this job perfectly, but they’ve made remarkable progress, which is why so many companies are looking at integrating them.

Getting into the GPT-weeds

The discussion so far is great for building a basic intuition for how LLMs are trained, but this is a deep dive, so let’s talk technical specifics.

Though we don’t know much about GPT-4, earlier models like GPT and GPT-2 have been studied in great detail. By understanding how they work, we can cultivate a better grasp of cutting-edge models.

When an LLM is trained, it’s fed a great deal of text data. It will grab samples from this data, and try to predict the next token in its sample. To make our earlier explanation easier to understand we implied that a token is a word, but that’s not quite right. A token can be a word, an individual letter, or “sub words”, i.e. small chunks of letters and spaces.

This process is known as “self-supervised learning” because the model can assess its own accuracy by checking its predicted next token against the actual next token in the dataset it’s training on.

At first, its accuracy is likely to be very bad. But as it trains its internal parameters (remember those?) are tuned with an optimizer such as stochastic gradient descent, and it gets better.

One of the crucial architectural building blocks of LLMs is the transformer.

A full discussion of transformers is well beyond the scope of this piece, but the most important thing to know is that transformers can use “attention” to model more complex relationships in language data.

For example: in a sentence like “the dog didn’t chase the cat because it was too tired”, every human knows that “it” refers to the dog and not the cat. Earlier approaches to building language models struggled with such connections in sentences that were longer than a few words, but using attention, transformers can handle them with ease.

In addition to this obvious advantage, transformers have found widespread use in deep learning applications such as language models because they’re easy to parallelize, meaning that training times can be reduced.

Building On Top Of Large Language Models

Out-of-the-box LLMs are pretty powerful, but it’s often necessary to tweak them for specific applications such as enterprise bots. There are a few ways of doing this, and we’re going to confine ourselves to two major approaches: fine-tuning and prompt engineering.

First up, it’s possible to fine-tune some of these models. Fine-tuning an LLM involves providing a training set and letting the model update its internal weights to perform better on a specific task. 

Next, the emerging discipline of prompt engineering refers to the practice of systematically crafting the text fed to the model to get it to better approximate the behavior you want.

LLMs can be surprisingly sensitive to small changes in words, phrases, and context; the job of a prompt engineer, therefore, is to develop a feel for these sensitivities and construct prompts in a way that maximizes the performance of the LLM.

Contact Us

How Can Large Language Models Be Used In Business?

There is a new gold rush in applying AI to business use cases.

For starters, given how good they are at generating text, they’re being deployed to write email copy, blog posts, and social media content, to text or survey customers, and to summarize text.

LLMs are also being used in software development. Tools like Replit’s Ghostwriter are already dramatically improving developer productivity in a variety of domains, from web development to machine learning.

What Are The “LLiMitations” Of LLMs?

For all their power, LLMs have turned out to have certain well-known limitations. To begin with, LLMs are capable of being toxic, harmful, aggressive, and biased.

Though heroic efforts have been made to train this behavior out with techniques such as reinforcement learning from human feedback, it’s possible that it can reemerge under the right conditions.

This is something you should take into account before giving customers access to generative AI offerings.

Another oft-discussed limitation is the tendency of LLMs to “invent” facts. Remember, an LLM is just trying to predict sequences of tokens, and there’s no reason it couldn’t output a sequence of text like “Dr. Micha Sartorius, professor of applied computronics at Santa Grega University”, even though this person, field, and university are fictitious.

This, too, is something you should be cognizant of before letting customers interact with generative AI.

At Quiq, we harness the power of LLMs’ language-generating capabilities, while putting strict guardrails in place to prevent these risks that are inherent to public-facing generative AI.

Should You Be Using Large Language Models?

LLMs are a remarkable engineering achievement, having been trained on vast amounts of human text and able to generate whole conversations, working code, and more.

No doubt, some of the fervor around LLMs will end up being hype. Nevertheless, the technology has been shown to be incredibly powerful, and it is unlikely to go anywhere. If you’re interested in learning about how to integrate generative AI applications like Quiq’s into your business, schedule a demo with us today!

Request A Demo

Prompt Engineering: What Is It—And How Can You Use It To Get The Most Out Of AI?

Think back to your school days. You come into class only to discover a timed writing assignment on the agenda. You have to respond to the provided prompt, quickly and accurately and will be graded against criteria like grammar, vocabulary, factual accuracy, and more.

Well, that’s what natural language processing (NLP) software like ChatGPT does daily. Except, when a computer steps into the classroom, it can’t raise its hand to ask questions.

That’s why it’s so important to provide AI with a prompt that’s clear and thorough enough to produce the best possible response.

What is ai prompt engineering?

A prompt can be a question, a phrase, or several paragraphs. The more specific the prompt is, the better the response.

Writing the perfect prompt — prompt engineering — is critical to ensure the NLP response is not only factually correct but crafted exactly as you intended to best deliver information to a specific target audience.

You can’t use low-quality ingredients in the kitchen to produce gourmet cuisine — and you can’t expect AI to, either.

Let’s revisit your old classroom again: did you ever have a teacher provide a prompt where you just weren’t really sure what the question was asking? So, you guessed a response based on the information provided, only to receive a low score.

In the post-exam review, the teacher explained what she was actually looking for and how the question was graded. You sat there thinking, “If I’d only had that information when I was given the prompt!”

Well, AI feels your pain.

The responses that NLP software provides are only as good as the input data. Learning how to communicate with AI to get it to generate desired responses is a science, and you can learn what works best through trial and error to continuously optimize your prompts.

Prompts that fail to deliver, and why.

What’s the root of the issue of prompt engineering gone wrong? It all comes down to incomplete, inconsistent, or incorrect data.

Even the most advanced AI using neural networks and deep learning techniques still needs to be fed the right information in the right way. When there is too little context provided, not enough examples, conflicting information from different sources, or major typos in the prompt, the AI can generate responses that are undesirable or just plain wrong.

How to craft the perfect prompt.

Here are some important factors to take into consideration for successful prompt engineering.

Clear instructions

Provide specific instructions and multiple examples to illustrate precisely what you want the AI to do. Words like “something,” “things,” “kind of,” and “it” (especially when there are multiple subjects within one sentence) can be indicators that your prompt is too vague.

Try to use descriptive nouns that refer to the subject of your sentence and avoid ambiguity.

  • Example (ambiguity): “She put the book on the desk; it was blue.”
  • What does “it” refer to in this sentence? Is the book blue, or is the desk blue?

Simple language

Use plain language, but avoid shorthand and slang. When in doubt, err on the side of overcommunicating and you can use trial and error to determine what shorthand approaches work for future, similar prompts. Avoid internal company or industry-specific jargon when possible, and be sure to clearly define any terms you may want to integrate.

Quality data

Give examples. Providing a single source of truth — for example, an article you want the AI to respond to questions about — will have a higher probability of returning factually correct responses based on the provided article.

On that note, teach the API how you want it to return responses when it doesn’t know the answer, such as “I don’t know,” “not enough information,” or simply “?”.

Otherwise, the AI may get creative and try to come up with an answer that sounds good but has no basis in reality.

Persona

Develop a persona for your responses. Should the response sound as though it’s being delivered by a subject matter expert or would it be better (legally or otherwise) if the response was written by someone who was only referring to subject matter experts (SMEs)?

  • Example (direct from SMEs): “Our team of specialists…”
  • Example (referring to SMEs): “Based on recent research by experts in the field…”

Voice, style, and tone

Decide how you want to represent your brand’s voice, which will largely be determined by your target audience. Would your customer be more likely to trust information that sounds like it was provided by an academic, or would a colloquial voice be more relatable?

Do you want a matter-of-fact, encyclopedia-type response, a friendly or supportive empathetic approach, or is your brand’s style more quick-witted and edgy?

With the right prompt, AI can capture all that and more.

Quiq takes prompt engineering out of the equation.

Prompt engineering is no easy task. There are many nuances to language that can trick even the most advanced NLP software.

Not only are incorrect AI responses a pain to identify and troubleshoot, but they can also hurt your business’s reputation if they aren’t caught before your content goes public.

On the other hand, manual tasks that could be automated with NLP waste time and money that could be allocated to higher-priority initiatives.

Quiq uses large language models (LLMs) to continuously optimize AI responses to your company’s unique data. With Quiq’s world-class Conversational AI platform, you can reduce the burden on your support team, lower costs, and boost customer satisfaction.

Contact Quiq today to see how our innovative LLM-built features improve business outcomes.

Contact Us