4 Benefits of Using Generative AI to Improve Customer Experiences

Generative AI has captured the popular imagination and is already changing the way contact centers work.

One area in which it has enormous potential is also one that tends to be top of mind for contact center managers: customer experience.

In this piece, we’re going to briefly outline what generative AI is, then spend the rest of our time talking about how generative AI can improve customer experience with personalized responses, round-the-clock real-time support, and much more.

What is Generative AI?

As you may have puzzled out from the name, “generative AI” refers to a constellation of different deep learning models used to dynamically generate output. This distinguishes them from other classes of models, which might be used to predict returns on Bitcoin, make product recommendations, or translate between languages.

The most famous example of generative AI is, of course, the large language model ChatGPT. After being trained on staggering amounts of textual data, it’s now able to generate extremely compelling output, much of which is hard to distinguish from actual human-generated writing.

Its success has inspired a panoply of competitor models from leading players in the space, including companies like Anthropic, Meta, and Google.

As it turns out, the basic approach underlying generative AI can be utilized in many other domains as well. After natural language, probably the second most popular way to use generative AI is to make images. DALL-E, MidJourney, and Stable Diffusion have proven remarkably adept at producing realistic images from simple prompts, and just this past week, Fable Studios unveiled their “Showrunner” AI, able to generate an entire episode of South Park.

But even this is barely scratching the surface, as researchers are also training generative models to create music, design new proteins and materials, and even carry out complex chains of tasks.

What is Customer Experience?

In the broadest possible terms, “customer experience” refers to the subjective impressions that your potential and current customers have as they interact with your company.

These impressions can be impacted by almost anything, including the colors and font of your website, how easy it is to find e.g. contact information, and how polite your contact center agents are in resolving a customer issue.

Customer experience will also be impacted by which segment a given customer falls into. Power users of your product might appreciate a bevy of new features, whereas casual users might find them disorienting.

Contact center managers must bear all of this in mind as they consider how best to leverage generative AI. In the quest to adopt a shiny new technology everyone is excited about, it can be easy to lose track of what matters most: how your actual customers feel about you.

Be sure to track metrics related to customer experience and customer satisfaction as you begin deploying large language models into your contact centers.

How is Generative AI For Customer Experience Being Used?

There are many ways in which generative AI is impacting customer experience in places like contact centers, which we’ll detail in the sections below.

Personalized Customer Interactions

Machine learning has a long track record of personalizing content. Netflix, to take a famous example, will uncover patterns in the shows you like to watch, and will use algorithms to suggest content that checks similar boxes.

Generative AI, and tools like the Quiq conversational AI platform that utilize it, are taking this approach to a whole new level.

Once upon a time, it was only a human being that could read a customer’s profile and carefully incorporate the relevant information into a reply. Today, a properly fine-tuned generative language model can do this almost instantaneously, and at scale.

From the perspective of a contact center manager who is concerned with customer experience, this is an enormous development. Besides the fact that prior generations of language models simply weren’t flexible enough to have personalized customer interactions, their language also tended to have an “artificial” feel. While today’s models can’t yet replace the all-elusive human touch, they can do a lot to make your agents far more effective in adapting their conversations to the appropriate context.
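In practice, this kind of personalization often starts with nothing fancier than folding customer context into the instruction sent to the model. Here is a minimal sketch; the profile fields and template wording are illustrative assumptions, not any particular vendor’s schema:

```python
# A minimal sketch of profile-aware prompting. The profile fields and
# template are illustrative assumptions, not a specific vendor's schema.

def build_personalized_prompt(profile: dict, question: str) -> str:
    """Fold relevant customer context into the instruction sent to the model."""
    context = (
        f"Customer name: {profile.get('name', 'unknown')}\n"
        f"Plan: {profile.get('plan', 'unknown')}\n"
        f"Recent issue: {profile.get('last_issue', 'none on record')}\n"
    )
    return (
        "You are a polite support assistant. Use the profile below to tailor "
        "your reply, and never invent details that are not in it.\n\n"
        f"{context}\nCustomer question: {question}\nReply:"
    )

prompt = build_personalized_prompt(
    {"name": "Dana", "plan": "Pro", "last_issue": "billing error"},
    "Why was I charged twice this month?",
)
```

The explicit “never invent details” instruction is a small hedge against hallucinated personalization; it helps, but it is not a guarantee.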

Better Understanding Your Customers and Their Journeys

Marketers, designers, and customer experience professionals have always been data enthusiasts. Long before we had modern cloud computing and electronic databases, detailed information on potential clients, customer segments, and market trends was printed out on dead trees and guarded closely. With better data comes more targeted advertising, a more granular appreciation for how customers use your product, and a clearer sense of why they stop using it and what their broader motivations are.

There are a few different ways in which generative AI can be used in this capacity. One of the more promising is by generating customer journeys that can be studied and mined for insight.

When you begin thinking about ways to improve your product, you need to get into your customers’ heads. You need to know the problems they’re solving, the tools they’ve already tried, and their major pain points. These are all things that some clever prompt engineering can elicit from ChatGPT.

We took a shot at generating such content for a fictional network-monitoring enterprise SaaS tool, and this was the result:

[Image: ChatGPT-generated journal entries from the fictional target customer]

While these responses are fairly generic [1], notice that they do single out a number of really important details. These machine-generated journal entries bemoan how unintuitive a lot of monitoring tools are, how they’re not customizable, how they’re exceedingly difficult to set up, and how their endless false alarms are stretching the security teams thin.

It’s important to note that ChatGPT won’t soon obviate your need to talk to real, flesh-and-blood users. Still, when combined with actual testimony, its output can be a valuable aid in prioritizing your contact center’s work and alerting you to potential product issues you should be prepared to address.
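Once you have a batch of generated (or real) journey text, mining it for recurring pain points can be as simple as keyword counting. A toy sketch, in which the keyword list is an illustrative assumption:

```python
# A toy sketch of mining journey text for recurring pain points.
# The keyword list is an illustrative assumption, not a fixed taxonomy.
from collections import Counter

PAIN_KEYWORDS = ["unintuitive", "false alarm", "setup", "customizable"]

def count_pain_points(entries: list[str]) -> Counter:
    """Count how many entries mention each known pain point."""
    counts = Counter()
    for entry in entries:
        lowered = entry.lower()
        for kw in PAIN_KEYWORDS:
            if kw in lowered:
                counts[kw] += 1
    return counts

counts = count_pain_points([
    "The setup took two days and the dashboard is unintuitive.",
    "Endless false alarms, and nothing is customizable.",
])
```

Real analyses would likely use embeddings or topic modeling rather than literal string matching, but the goal is the same: turning piles of qualitative text into a ranked list of things to fix.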

Round-the-clock Customer Service

As science fiction movies never tire of pointing out, the big downside of fighting a robot army is that machines never need to eat, sleep, or rest. We’re not sure how long we have until the LLMs will rise up and wage war on humanity, but in the meantime, these are properties that you can put to use in your contact center.

With the power of generative AI, you can answer basic queries and resolve simple issues pretty much whenever they happen (which will probably be all the time), leaving your carbon-based contact center agents to answer the harder questions when they punch the clock in the morning after a good night’s sleep.

Enhancing Multilingual Support

Machine translation was one of the earliest use cases for neural networks and machine learning in general, and it continues to be an important function today. ChatGPT was noticeably good at multilingual translation right from the start, and you may be surprised to learn that it can actually outperform alternatives like Google Translate.

If your product doesn’t currently have a diverse global user base speaking many languages, it hopefully will soon, and that means you should start thinking about multilingual support. Not only will this improve table-stakes metrics like average handle time and resolutions per hour, it’ll also contribute to the more ineffable “customer satisfaction.” Nothing says “we care about making your experience with us a good one” like patiently walking a customer through a thorny technical issue in their native tongue.

Things to Watch Out For

Of course, for all the benefits that come from using generative AI for customer experience, it’s not all upside. There are downsides and issues that you’ll want to be aware of.

A big one is the tendency of large language models to hallucinate information. If you ask one for a list of articles to read about fungal computing (which is a real thing whose existence we discovered yesterday), it’s likely to generate a list that contains a mix of real and fake articles.

And because it’ll do so with great confidence and no formatting errors, you might be inclined to simply take its list at face value without double-checking it.

Remember, LLMs are tools, not replacements for your agents. Your agents need to be working with generative AI, checking its output, and incorporating it when and where appropriate.

There’s a wider danger that you will fail to use generative AI in the way that’s best suited to your organization. If you’re running a bespoke LLM trained on your company’s data, for example, you should constantly be feeding it new interactions as part of its fine-tuning, so that it gets better over time.

And speaking of getting better, machine learning models don’t always improve with time. Owing to factors like changes in the underlying data, model performance can actually degrade over time. You’ll need a way of assessing the quality of the text generated by a large language model, along with a way of consistently monitoring it.
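Concretely, monitoring can reduce to tracking a quality score per response and alerting when the trend dips. A minimal sketch, where the scoring source (human ratings, automated evaluators) and the threshold are assumptions you would tune:

```python
# A minimal sketch of monitoring generated-text quality over time.
# Scores might come from human ratings or an automated evaluator;
# the threshold and window are illustrative assumptions.

def rolling_mean(scores: list[float], window: int = 3) -> list[float]:
    """Smooth noisy per-response quality scores with a rolling mean."""
    out = []
    for i in range(len(scores)):
        chunk = scores[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def degradation_alert(scores: list[float], threshold: float = 0.7) -> bool:
    """Flag the model for review if recent smoothed quality dips below threshold."""
    return rolling_mean(scores)[-1] < threshold

alert = degradation_alert([0.9, 0.88, 0.85, 0.6, 0.55, 0.5])
```

The smoothing matters: single bad responses happen constantly, and you want to react to sustained declines, not noise.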

What are the Benefits of Generative AI for Customer Experience?

The reason people are so excited about the potential of using generative AI for customer experience is that there’s so much upside. Once you’ve got your model infrastructure set up, you’ll be able to answer customer questions at all times of the day or night, in any of a dozen languages, and with a personalization that was once only possible with an army of contact center agents.

But if you’re a contact center manager with a lot to think about, you probably don’t want to spend a bunch of time hiring an engineering team to get everything running smoothly. And, with Quiq, you don’t have to – you can leverage generative AI to supercharge your customer experience while leaving the technical details to us!

Schedule a demo to find out how we can bring this bleeding-edge technology into your contact center, without worrying about the nuts and bolts.

Footnotes
[1] It’s worth pointing out that we spent no time crafting the prompt, which was really basic: “I’m a product manager at a company building an enterprise SAAS tool that makes it easier to monitor system breaches and issues. Could you write me 2-3 journal entries from my target customer? I want to know more about the problems they’re trying to solve, their pain points, and why the products they’ve already tried are not working well.” With a little effort, you could probably get more specific complaints and more usable material.

Understanding the Risk of ChatGPT: What You Should Know

OpenAI’s ChatGPT burst onto the scene less than a year ago and has already seen use in marketing, education, software development, and at least a dozen other industries.

Of particular interest to us is how ChatGPT is being used in contact centers. Though it’s already revolutionizing contact centers by making junior agents vastly more productive and easing the burnout contributing to turnover, there are nevertheless many issues that a contact center manager needs to look out for.

That will be our focus today.

What are the Risks of Using ChatGPT?

In the following few sections, we’ll detail some of the risks of using ChatGPT. That way, you can deploy ChatGPT or another large language model with the confidence born of knowing what the job entails.

Hallucinations and Confabulations

By far the most well-known failure mode of ChatGPT is its tendency to simply invent new information. Stories abound of the model making up citations, peer-reviewed papers, researchers, URLs, and more. To take a recent well-publicized example, ChatGPT accused law professor Jonathan Turley of having behaved inappropriately with some of his students during a trip to Alaska.

The only problem was that Turley had never been to Alaska with any of his students, and the alleged Washington Post story which ChatGPT claimed had reported these facts had also been created out of whole cloth.

This is certainly a problem in general, but it’s especially worrying for contact center managers who may increasingly come to rely on ChatGPT to answer questions or to help resolve customer issues.

To those not steeped in the underlying technical details, it can be hard to grok why a language model will hallucinate in this way. The answer is that it’s an artifact of how large language models are trained.

ChatGPT learns to output tokens by being trained on huge amounts of human-generated textual data. It will, for example, see the first sentences in a paragraph, and then try to output the text that completes the paragraph. The example below uses the opening lines of J.D. Salinger’s The Catcher in the Rye: the model would be shown the opening clauses and would attempt to generate the rest itself:

“If you really want to hear about it, the first thing you’ll probably want to know is where I was born, and what my lousy childhood was like, and how my parents were occupied and all before they had me, and all that David Copperfield kind of crap, but I don’t feel like going into it, if you want to know the truth.”

Over many training runs, a large language model will get better and better at this kind of autocompletion work, until eventually it gets to the level it’s at today.

But ChatGPT has no native fact-checking abilities – it sees text and outputs what it thinks is the most likely sequence of additional words. Since it sees URLs, papers, citations, etc., during its training, it will sometimes include those in the text it generates, whether or not they’re appropriate (or even real).
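The autocomplete mechanic can be illustrated with a toy next-token table. Everything below is invented for demonstration (real models work over tens of thousands of tokens with learned probabilities), but it shows why a plausible-looking citation can be generated with no regard for truth:

```python
# A toy illustration of next-token prediction: the model only picks the
# most probable continuation given recent context, with no fact-checking.
# The probability table is invented for demonstration purposes.

def greedy_complete(prefix: tuple, probs: dict, steps: int) -> list:
    """Repeatedly append the most likely next token given the last two tokens."""
    tokens = list(prefix)
    for _ in range(steps):
        context = tuple(tokens[-2:])
        candidates = probs.get(context, {})
        if not candidates:
            break
        # Pick the single most likely next token -- nothing here checks truth.
        tokens.append(max(candidates, key=candidates.get))
    return tokens

toy_probs = {
    ("see", "the"): {"paper": 0.6, "URL": 0.4},
    ("the", "paper"): {"(2021)": 0.9},
}
out = greedy_complete(("see", "the"), toy_probs, steps=2)
```

The “paper (2021)” it produces here is simply the statistically likely shape of a citation; whether such a paper exists never enters the computation.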

Privacy

Another ongoing risk of using ChatGPT is that it could expose sensitive or private information. As things stand, OpenAI, the creator of ChatGPT, offers no robust privacy guarantees for any information placed into a prompt.

If you are trying to do something like named entity recognition or summarization on real people’s data, there’s a chance that it might be seen by someone at OpenAI as part of a review process. Alternatively, it might be incorporated into future training runs. Either way, the results could be disastrous.

Nor is prompt data the only information OpenAI collects when you use ChatGPT. Your timezone, browser type and IP address, cookies, account information, and any communication you have with OpenAI’s support team are all collected, among other things.

In the information age we’ve become used to knowing that big companies are mining and profiting off the data we generate, but given how powerful ChatGPT is, and how ubiquitous it’s becoming, it’s worth being extra careful with the information you give its creators. If you feed it private customer data and someone finds out, that will be damaging to your brand.
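One practical precaution is to scrub obvious personal details before a prompt ever leaves your systems. A minimal sketch follows; real deployments need far more robust detection (named-entity recognition, dedicated PII services) than two regexes:

```python
# A minimal sketch of scrubbing obvious PII from a prompt before it
# reaches a third-party API. Real deployments need far more robust
# detection than these two regular expressions.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def redact(text: str) -> str:
    """Replace email addresses and US-style phone numbers with placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)

clean = redact("Reach me at jane@example.com or 406-555-0123.")
```

Even an imperfect redaction layer meaningfully lowers the odds that customer data ends up in someone else’s review queue or training run.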

Bias in Model Output

By now, it’s pretty common knowledge that machine learning models can be biased.

If you feed a large language model a huge amount of text data in which doctors are usually men and nurses are usually women, for example, the model will associate “doctor” with “maleness” and “nurse” with “femaleness.”
This is generally an artifact of the data the models were trained on, and is not due to any malfeasance on the part of the engineers. That does not, however, make it any less problematic.

There are some clever data manipulation techniques that are able to go a long way toward minimizing or even eliminating these biases, though they’re beyond the scope of this article. What contact center managers need to do is be aware of this problem, and establish monitoring and quality-control checkpoints in their workflow to identify and correct biased output in their language models.

Issues Around Intellectual Property

Earlier, we briefly described the training process for a large language model like ChatGPT (you can find much more detail here.) One thing to note is that the model doesn’t provide any sort of citations for its output, nor any details as to how it was generated.

This has raised a number of thorny questions around copyright. If a model has ingested large amounts of information from the internet, including articles, books, forum posts, and much more, is there a sense in which it has violated someone’s copyright? What about if it’s an image-generation model trained on a database of Getty Images?

By and large, we tend to think this is the sort of issue that isn’t likely to plague contact center managers too much. It’s more likely to be a problem for, say, songwriters who might be inadvertently drawing on the work of other artists.

Nevertheless, a piece on the potential risks of ChatGPT wouldn’t be complete without a section on this emerging problem, and it’s certainly something that you should be monitoring in the background in your capacity as a manager.

Failure to Disclose the Use of LLMs

Finally, there has been a growing tendency to make it plain that LLMs have been used in drafting an article or a contract, if indeed they were part of the process. To the best of our knowledge, there are not yet any laws in place mandating that this has to be done, but it might be wise to include a disclaimer somewhere if large language models are being used consistently in your workflow. [1]

That having been said, it’s also important to exercise proactive judgment in deciding whether an LLM is appropriate for a given task in the first place. In early 2023, the Peabody School at Vanderbilt University landed in hot water when it disclosed that it had used ChatGPT to draft an email about a mass shooting that had taken place at Michigan State.

People may not care much about whether their search recommendations were generated by a machine, but it would appear that some things are still best expressed by a human heart.

Again, this is unlikely to be something that a contact center manager faces much in her day-to-day life, but incidents like these are worth understanding as you decide how and when to use advanced language models.

Mitigating the Risks of ChatGPT

From the moment it was released, it was clear that ChatGPT and other large language models were going to change the way contact centers run. They’re already helping agents answer more queries, utilize knowledge spread throughout the center, and automate substantial portions of work that were once the purview of human beings.

Still, challenges remain. ChatGPT will plainly make things up, and can be biased or harmful in its text. Private information fed into its interface will be visible to OpenAI, and there’s also the wider danger of copyright infringement.

Many of these issues don’t have simple solutions, and will instead require a contact center manager to exercise both caution and continual diligence. But one place where she can make her life much easier is by using a powerful, out-of-the-box solution like the Quiq conversational AI platform.

While you’re worrying about the myriad risks of using ChatGPT you don’t also want to be contending with a million little technical details as well, so schedule a demo with us to find out how our technology can bring cutting-edge language models to your contact center, without the headache.

Footnotes
[1] NOTE: This is not legal advice.

Request A Demo

The Ongoing Management of an LLM Assistant

Technologies like large language models (LLMs) are amazing at rapidly generating polite text that helps solve a problem or answer a question, so they’re a great fit for the work done at contact centers.

But this doesn’t mean that using them is trivial or easy. There are many challenges associated with the ongoing management of an LLM assistant, including hallucinations and the emergence of bad behavior – and that’s not even mentioning the engineering prowess required to fine-tune and monitor these systems.

All of this must be borne in mind by contact center managers, and our aim today is to facilitate this process.

We’ll provide broad context by talking about some of the basic ways in which large language models are being used in business, discuss setting up an LLM assistant, and then enumerate some of the specific steps that need to be taken in using them properly.

Let’s go!

How Are LLMs Being Used in Science and Business?

First, let’s adumbrate some of the ways in which large language models are being utilized on the ground.

The most obvious way is by acting as a generative AI assistant. One of the things that so stunned early users of ChatGPT was its remarkable breadth of capability. It could be used to draft blog posts and web copy, translate between languages, and write or explain code.

This alone makes it an amazing tool, but it has since become obvious that it’s useful for quite a lot more.

One thing that businesses have been experimenting with is fine-tuning large language models like ChatGPT over their own documentation, turning it into a simple interface by which you can ask questions about your materials.

It’s hard to quantify precisely how much time contact center agents, engineers, or other people spend hunting around for the answer to a question, but it’s surely quite a lot. What if instead you could just, y’know, ask for what you want, in the same way that you do a human being?

Well, ChatGPT is a long way from being a full person, but when properly trained it can come close where question-answering is concerned.

Stepping back a little bit, LLMs can be prompt engineered into a number of useful behaviors, all of which redound to the benefit of the contact centers which use them. Imagine having an infinitely patient Socratic tutor that could help new agents get up to speed on your product and process, or crafting it into a powerful tool for brainstorming new product designs.

There have also been some promising attempts to extend the functionality of LLMs by making them more agentic – that is, by embedding them in systems that allow them to carry out more open-ended projects. AutoGPT, for example, pairs an LLM with a separate bot that hits the LLM with a chain of queries in the pursuit of some goal.

AssistGPT goes even further in the quest to augment LLMs by integrating them with a set of tools that allow them to achieve objectives involving images and audio in addition to text.

How to Set Up An LLM Assistant

Next, let’s turn to a discussion of how to set up an LLM assistant. Covering this topic fully is well beyond the scope of this article, but we can make some broad comments that will nevertheless be useful for contact center managers.

First, there’s the question of which large language model you should use. In the beginning, ChatGPT was pretty much the only foundation model on offer. Today, however, that situation has changed, and there are now foundation models from Anthropic, Meta, and many other companies.

One of the biggest early decisions you’ll have to make is whether you want to use an open-source model (for which the code and the model weights are freely available) or a closed-source model (for which they are not).

If you go the closed-source route you’ll almost certainly be hitting the model over an API, feeding it your queries and getting its responses back. This is orders of magnitude simpler than provisioning an open-source model, but it means that you’ll also be beholden to the whims of some other company’s engineering team. They may update the model in unexpected ways, or simply go bankrupt, and you’ll be left with no recourse.
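Because you don’t control the vendor’s uptime, calls to a closed-source model should be made defensively: retry transient failures and degrade gracefully. A sketch of that pattern, where `call_model` stands in for whatever client library you actually use:

```python
# A sketch of calling a closed-source model defensively: retry transient
# failures and fail gracefully, since the vendor's uptime is out of your
# hands. `call_model` is a placeholder for your real client library.
import time

def ask_with_retry(call_model, prompt: str, retries: int = 3) -> str:
    """Try the model a few times, then fall back to a safe canned reply."""
    delay = 1.0
    for attempt in range(retries):
        try:
            return call_model(prompt)
        except ConnectionError:
            if attempt == retries - 1:
                return "Sorry, our assistant is unavailable. An agent will follow up."
            time.sleep(delay)
            delay *= 2  # exponential backoff between attempts

def flaky(prompt):
    raise ConnectionError("vendor outage")

answer = ask_with_retry(lambda p: f"echo: {p}", "Where is my order?")
fallback = ask_with_retry(flaky, "hi", retries=1)
```

The canned fallback matters for customer experience: an honest “an agent will follow up” beats a timeout or a stack trace.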

Using an open-source alternative, of course, means grabbing the other horn of the dilemma. You’ll have visibility into how the model works and will be free to modify it as you see fit, but this won’t be worth much unless you’re willing to devote engineering hours to the task.

Then, there’s the question of fine-tuning large language models. While ChatGPT and LLMs more generally are quite good on their own, having them answer questions about your product or respond in particular ways means modifying their behavior somehow.

Broadly speaking, there are two ways of doing this, which we’ve mentioned throughout: proper fine-tuning, and prompt engineering. Let’s dig into the differences.

Fine-tuning means showing the model many examples (often several hundred) of the behaviors you want to see, which changes its internal weights and biases it toward those behaviors in the future.

Prompt engineering, on the other hand, refers to carefully structuring your prompts to elicit the desired behavior. These LLMs can be surprisingly sensitive to little details in the instructions they’re provided, and prompt engineers know how to phrase their requests in just the right way to get what they need.

There is also some middle ground between these approaches. “One-shot learning” is a form of prompt engineering in which the prompt contains a single example of the desired behavior, while “few-shot learning” refers to including a handful, often between three and five.
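Mechanically, few-shot prompting is just string assembly: prepend worked examples so the model infers the desired format and tone. A minimal sketch, with invented example pairs:

```python
# A minimal sketch of few-shot prompt construction: prepend worked
# examples so the model infers the desired format. The example
# question/answer pairs are invented for illustration.

def few_shot_prompt(instruction: str, examples: list[tuple], query: str) -> str:
    """Assemble an instruction, a few demonstrations, and the live query."""
    shots = "\n\n".join(f"Customer: {q}\nAgent: {a}" for q, a in examples)
    return f"{instruction}\n\n{shots}\n\nCustomer: {query}\nAgent:"

prompt = few_shot_prompt(
    "Answer politely and concisely.",
    [
        ("My login fails.", "Sorry about that! Try resetting your password first."),
        ("Where's my invoice?", "You can download it from Billing > History."),
    ],
    "How do I cancel my plan?",
)
```

Ending the prompt at “Agent:” is the trick: the model’s most natural continuation is precisely the reply you want it to write.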

Contact center managers thinking about using LLMs will need to think about these implementation details. If you plan on only lightly using ChatGPT in your contact center, a basic course on prompt engineering might be all you need. If you plan on making it an integral part of your organization, however, that most likely means a fine-tuning pipeline and serious technical investment.

The Ongoing Management of an LLM

Having said all this, we can now turn to the day-to-day details of managing an LLM assistant.

Monitoring the Performance of an LLM

First, you’ll need to continuously monitor the model. As hard as it may be to believe given how perfect ChatGPT’s output often is, there isn’t a person somewhere typing the responses. ChatGPT is very prone to hallucinations, in which it simply makes up information, and LLMs more generally can sometimes fall into using harmful or abusive language if they’re prompted incorrectly.

This can be damaging to your brand, so it’s important that you keep an eye on the language created by the LLMs your contact center is using.

And of course, not even LLMs can obviate the need to track the all-important key performance indicators. So far, there’s been one major study on generative AI in contact centers, which found that it increased productivity and reduced turnover, but you’ll still want to measure customer satisfaction, average handle time, etc.

There’s always a temptation to jump on a shiny new technology (remember the blockchain?), but you should only be using LLMs if they actually make your contact center more productive, and the only way you can assess that is by tracking your figures.

Iterative Fine-Tuning and Training

We’ve already had a few things to say about fine-tuning and the related discipline of prompt engineering, and here we’ll build on those preliminary comments.
The big thing to bear in mind is that fine-tuning a large language model is not a one-and-done kind of endeavor. You’ll find that your model’s behavior will drift over time (the technical term is “model degradation”), and this means you will likely have to periodically re-train it.

It’s also common to offer the model “feedback,” e.g. by ranking its responses or indicating when you did or did not like a particular output. You’ve probably heard of reinforcement learning from human feedback (RLHF), which is one version of this process, but there are others you can use.

Quality Assurance and Oversight

A related point is that your LLMs will need consistent oversight. They’re not going to voluntarily improve on their own (they’re algorithms with no personal initiative to speak of), so you’ll need to check in routinely to make sure they’re performing well and that your agents are using them responsibly.

There are many parts to this, including checks on the model’s outputs and an audit process that allows you to track down any issues. If you suddenly see a decline in performance, for example, you’ll need to quickly figure out whether it’s isolated to one agent or part of a larger pattern. If it’s the former, was it a random aberration, or did the agent go “off script” in a way that caused the model to behave poorly?

Take another scenario, in which an end-user was shown inappropriate text generated by an LLM. In this situation, you’ll need to take a deeper look at your process. If agents were interacting with this model, ask them why they failed to spot the problematic text and stop it from being shown to a customer. Or, if it came from a mostly-automated part of your tech stack, you need to uncover why your filters failed to catch it, and perhaps think about keeping humans more in the loop.
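The “one agent or the whole team?” question above is easy to answer if you log a quality score per agent. A toy sketch, where the scores and the outlier margin are invented for illustration:

```python
# A toy sketch of the audit question in the text: is a quality dip
# isolated to one agent or spread across the team? The per-agent scores
# and the margin are invented for illustration.
from statistics import mean

def flag_outlier_agents(scores_by_agent: dict, margin: float = 0.15) -> list:
    """Flag agents whose average quality sits well below the team average."""
    team_avg = mean(mean(v) for v in scores_by_agent.values())
    return sorted(
        agent for agent, v in scores_by_agent.items()
        if mean(v) < team_avg - margin
    )

flagged = flag_outlier_agents({
    "alice": [0.9, 0.92, 0.88],
    "bob": [0.5, 0.55, 0.45],
    "carol": [0.87, 0.9, 0.91],
})
```

If the flagged list is empty but the overall average is falling, the problem is likely systemic (the model, a prompt change, a data shift) rather than any individual’s workflow.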

The Future of LLM Assistants

Though the future is far from certain, we tend to think that LLMs have left Pandora’s box for good. They’re incredibly powerful tools which are poised to transform how contact centers and other enterprises operate, and experiments so far have been very promising; for all these reasons, we expect that LLMs will become a steadily more important part of the economy going forward.

That said, the ongoing management of an LLM assistant is far from trivial. You need to be aware at all times of how your model is performing and how your agents are using it. Though it can make your contact center vastly more productive, it can also lead to problems if you’re not careful.

That’s where the Quiq platform comes in. Our conversational AI is some of the best that can be found anywhere, able to facilitate customer interactions, automate text-message follow-ups, and much more. If you’re excited by the possibilities of generative AI but daunted by the prospect of figuring out how TPUs and GPUs are different, schedule a demo with us today.

Request A Demo

How Do You Train Your Agents in a ChatGPT World?

There’s long been an interest in using AI for educational purposes. Technologist Danny Hillis has spent decades dreaming of a digital “Aristotle” that would teach everyone in the way that the original Greek wunderkind once taught Alexander the Great, while modern companies have leveraged computer vision, machine learning, and various other tools to help students master complex concepts in a variety of fields.

Still, almost nothing has sparked the kind of enthusiasm for AI in education that ChatGPT and large language models more generally have given rise to. From the first, its human-level prose, knack for distilling information, and wide-ranging abilities made it clear that it would be extremely well-suited for learning.

But that still leaves the question of how. How should a contact center manager prepare for AI, and how should she change the way she trains her agents?

In our view, this question can be understood in two different, related ways:

  1. How can ChatGPT be used to help agents master skills related to their jobs?
  2. How can they be trained to use ChatGPT in their day-to-day work?

In this piece, we’ll take up both of these issues. We’ll first provide a general overview of the ways in which ChatGPT can be used for both education and training, then turn to the question of the myriad ways in which contact center agents can be taught to use this powerful new technology.

How is ChatGPT Used in Education and Training?

First, let’s get into some of the early ways in which ChatGPT is changing education and training.

NOTE: Our comments here are going to be fairly broad, covering some areas that may not be immediately applicable to the work contact center agents do. The main reason for this is that it’s very difficult to forecast how AI is going to change contact center work.

Our section on “creating study plans and curricula”, for example, might not be relevant to today’s contact center agents. But it could become important down the road if AI gives rise to more autonomous workflows in the future, in which case we expect that agents would be given more freedom to use AI and similar tools to learn the job on their own.

We pride ourselves on being forward-looking and forward-thinking here at Quiq, and we structure our content to reflect this.

Making a Socratic Tutor for Learning New Subjects

The Greek philosopher Socrates famously pioneered the instructional methodology which bears his name. Mostly, the Socratic method boils down to continuously asking targeted questions until areas of confusion emerge, at which point they’re vigorously investigated, usually in a small group setting.

A well-known illustration of this process is found in Plato’s Republic, which starts with an attempt to define “justice” and then expands into a much broader conversation about the best way to run a city and structure a social order.

ChatGPT can’t replace all of this on its own, of course, but with the right prompt engineering, it does a pretty good job. This method works best when paired with a primary source, such as a textbook, which will allow you to double-check ChatGPT’s questions and answers.
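To make this concrete, here’s a minimal sketch of how a Socratic tutor prompt might be assembled for the official `openai` Python client. The system prompt wording, function name, and model choice are our own illustrative assumptions, not a prescribed recipe:

```python
# Illustrative system prompt for a Socratic tutor; the wording is ours.
SOCRATIC_SYSTEM_PROMPT = (
    "You are a Socratic tutor. Never give the answer outright. "
    "Ask one targeted question at a time to expose gaps in the student's "
    "understanding, then probe those gaps with follow-up questions."
)

def build_socratic_messages(topic, student_reply=None):
    """Assemble the message list for one turn of a Socratic tutoring exchange."""
    messages = [
        {"role": "system", "content": SOCRATIC_SYSTEM_PROMPT},
        {"role": "user", "content": f"I want to learn about {topic}."},
    ]
    if student_reply:
        messages.append({"role": "user", "content": student_reply})
    return messages

# With the official openai client, the exchange could then be run as:
# client.chat.completions.create(model="gpt-4",
#                                messages=build_socratic_messages("justice"))
```

Pairing the tutor with a primary source, as suggested above, lets you spot-check the questions it asks against the textbook.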

Having it Explain Code or Technical Subjects

A related area in which people are successfully using ChatGPT is in having it walk you through a tricky bit of code or a technical concept like “inertia”.

The more basic and fundamental, the better. In our experience so far, ChatGPT has almost never failed in correctly explaining simple Python, Pandas, or Java. It did falter when asked to produce code that translates between different orbital reference frames, however, and it had no idea what to do when we asked it about a fairly recent advance in the frontiers of battery chemistry.

There are a few reasons we advise caution if you’re a contact center agent trying to understand part of your product’s codebase. For one thing, if the product is written in a less-common language, ChatGPT might not be able to help much.

But even more importantly, you need to be extremely careful about what you put into it. There have already been major incidents in which proprietary code and company secrets were leaked when developers pasted them into the ChatGPT interface, which is visible to the OpenAI team.

Meanwhile, if you’re managing teams of contact center agents, you should begin establishing a policy on the appropriate uses of ChatGPT in your contact center. If your product is open-source, there’s (probably) nothing to worry about, but otherwise, you need to proactively instruct your agents on what they can and cannot use the tool to accomplish.
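One lightweight safeguard such a policy might mandate is a scrubbing step that strips likely credentials from code before anyone pastes it into ChatGPT. The patterns below are illustrative assumptions, not an exhaustive secret detector:

```python
import re

# Rough, illustrative patterns for hardcoded credentials; a real policy
# would pair this with a dedicated secret-scanning tool.
SECRET_PATTERNS = [
    (
        re.compile(r"(?i)(api[_-]?key|secret|password|token)\s*[:=]\s*[\"']?[^\s\"']+[\"']?"),
        r'\1 = "<REDACTED>"',
    ),
]

def scrub_secrets(code: str) -> str:
    """Replace likely hardcoded credentials before sharing code with ChatGPT."""
    for pattern, replacement in SECRET_PATTERNS:
        code = pattern.sub(replacement, code)
    return code
```

Even a crude filter like this catches the most common leak: an API key sitting in a snippet an agent pastes without thinking.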

Rewriting Explanations for Different Skill Levels

Wired has a popular YouTube series called “5 Levels”, in which experts in subjects like quantum computing or blockchain explain their field at five different skill levels: “child”, “teen”, “college student”, “grad student”, and a fellow “expert.”

One thing that makes this compelling to beginners and pros alike is seeing the same idea explored across such varying contexts – seeing what gets emphasized or left out, or what emerges as you gradually climb up the ladder of complexity and sophistication.

This, too, is a place where ChatGPT shines. You can use it to provide explanations of concepts at different skill levels, which will ultimately improve your understanding of them.

For a contact center manager, this means that you can gradually introduce ideas to your agents, starting simply and then fleshing them out as the agents become more comfortable.
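As a sketch of how this might work in practice, here’s one way to generate a “5 Levels”-style set of prompts for a single concept. The level names mirror the Wired format, but the prompt wording is our own assumption:

```python
# Audience tiers borrowed from Wired's "5 Levels" format.
SKILL_LEVELS = ["child", "teen", "college student", "grad student", "expert"]

def leveled_prompts(concept: str) -> dict:
    """Build one explanation prompt per skill level for a given concept."""
    return {
        level: f"Explain {concept} to a {level} in two or three sentences."
        for level in SKILL_LEVELS
    }
```

Each prompt can be sent to ChatGPT separately, giving you a ladder of explanations to hand to agents as they advance.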

Creating Study Plans and Curricula

Stepping back a bit, ChatGPT has been used to create entire curricula and even daily study plans for Spanish, computer science, medicine, and various other fields.

As we noted at the outset, we expect it will be a little while before contact center agents are using ChatGPT for this purpose, as most centers likely have robust training materials they like to use.

Nevertheless, we can project a future in which these materials are much more bare-bones, perhaps consisting of some general notes along with prompts that an agent-in-training can use to ask questions of a model trained on the company’s documentation, test themselves as they go, and gradually build skill.

Training Agents to Use ChatGPT

Now that we’ve covered some of the ways in which present and future contact center agents might use ChatGPT to boost their own on-the-job learning, let’s turn to the other issue we want to tackle today: how should you train your agents to use ChatGPT?

Getting Set Up With ChatGPT (and its Plugins)

First, let’s talk about how you can start using ChatGPT.

This section may end up seeming a bit anticlimactic because, honestly, it’s pretty straightforward. Today, you can get access to ChatGPT by going to the signup page. There’s a free version and a paid version that’ll set you back a whopping $20/month (which is a pretty small price to pay for access to one of the most powerful artifacts the human race has ever produced, in our opinion.)

As things stand, the free tier gives you access to GPT-3.5, while the paid version gives you the option to switch to GPT-4 if you want the more powerful foundation model.

A paid account also gives you access to the growing ecosystem of ChatGPT plugins, which you can reach by switching over to the GPT-4 option.

There are plugins that allow ChatGPT to browse the web, let you directly edit diagrams or talk with PDF documents, or let you offload certain kinds of computations to the Wolfram platform.

Contact center agents may or may not find any of these useful right now, but we predict there will be a lot more development in this space going forward, so it’s something managers should know about.

Best Practices for Combining Human and AI Efforts

People have long been fascinated and terrified by automation, but so far, machines have only ever augmented human labor. Knowing when and how to offload work to ChatGPT requires knowing what it’s good for.

Large language models learn to predict the next token from their training data, which makes them very good at producing rough drafts, outlines, and more routine prose. You’ll generally find it necessary to edit their output fairly heavily to account for context and to make it fit stylistically with the rest of your content.

As a manager, you’ll need to start thinking about a standard policy for using ChatGPT. Any factual claims made by the model, especially any references or citations, need to be checked very carefully.

Scenario-Based Training

In this same vein, you’ll want to distinguish between the different scenarios in which your agents will end up using generative AI. There are different considerations in using Quiq Compose or Quiq Suggest to format helpful replies, for example, than in using generative AI to translate between different languages.

Managers will probably want to sit down and brainstorm different scenarios and develop training materials for each one.

Ethical and Privacy Considerations

The rise of generative AI has sparked a much broader conversation about privacy, copyright, and intellectual property.

Much of this isn’t particularly relevant to contact center managers, but one thing you definitely should be paying attention to is privacy. Your agents should never put real customer data into ChatGPT; instead, they should use aliases and fake data whenever they’re trying to resolve a particular issue.

To quote fictional chemist and family man Walter White, we advise you to tread lightly here. Data breaches are a huge and ongoing problem, and they can do substantial damage to your brand.
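As a minimal sketch of what aliasing might look like, the helper below swaps emails and simple US-style phone numbers for neutral placeholders before text reaches ChatGPT. The regexes are illustrative; a real deployment would lean on a dedicated PII-detection tool:

```python
import re

# Illustrative patterns only: emails plus simple US-style phone numbers.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def alias_pii(text: str) -> str:
    """Replace customer emails and phone numbers with neutral placeholders."""
    text = EMAIL.sub("customer@example.com", text)
    text = PHONE.sub("555-555-0100", text)
    return text
```

Running customer messages through a step like this before they ever touch a third-party model keeps real identities out of logs you don’t control.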

ChatGPT and What it Means for Training Contact Center Agents

ChatGPT and related technologies are poised to change education and training. They can be used to help get agents up to speed or to work more efficiently, and they, in turn, require a certain amount of instruction to use safely.

These are all things that contact center managers need to worry about, but one thing you shouldn’t spend your time worrying about is the underlying technology. The Quiq conversational AI platform allows you to leverage the power of language models for contact centers, without looking at any code more complex than an API call. If the possibilities of this new frontier intrigue you, schedule a demo with us today!