Building Better Customer Relationships with Text Messaging

Customer engagement is constantly evolving, and the trend toward more customer-centric experiences hasn’t slowed. Businesses increasingly have to provide faster, easier, and friendlier ways of initiating and responding to customers’ inquiries.

Businesses that adapt to this continually changing environment will ensure they deliver superior service along with desirable products, thus boosting engagement rates.

This is where customer engagement strategies based on text messaging enter the picture. This mode of communication has overtaken traditional methods, like phone and email, as consumers prefer the ease, convenience, and hassle-free nature of text messaging.

Texting isn’t just for friends and family anymore, and consumers are choosing this channel more often because it fits their on-the-go lifestyle.

The move to text messaging is a part of this new era of building customer relationships, and both businesses and consumers can benefit.

The old customer engagement marketing strategies are fading

As recently as two decades ago, the world of business and customer service was a completely different place. Company agents and representatives used forms of customer engagement like trade shows, promotional emails, letters, and phone calls to promote their products and services.

While these methods are still used in a wide range of industries, many companies today are turning to new ways of maintaining customer loyalty.

According to the Pew Research Center, about 96% of Americans own a cell phone of some kind. Text messaging is a highly popular form of communication in people’s everyday lives. As such, it only seems natural that companies would use texting as a service, sales, and marketing tool. Their results have been astounding, and that’s what we’ll explore in the next section.

The advantages of digital customer engagement strategies

While sending text messages to customers may be a new frontier for many companies, businesses are finding the personal, casual nature of this medium is part of what makes it so effective.

Some of the benefits that come with text-based customer service include:

Hassle-free customer service access

Consumers love instant messaging because it’s easy and allows them to engage, ask questions, and get information without having to make a phone call or meet face-to-face.

One of the hallmarks of our increasingly digital world is how hard businesses work to make things easy – think of 1-click shopping on Amazon (you don’t have to click two buttons), how smartphones enable contactless payment (you don’t have to pull your card out), the way Alexa responds to voice commands (you don’t have to click anything), and the way Netflix automatically plays the next episode of a show you’re binging (you don’t even have to move).

These expectations are becoming more ingrained in the minds of consumers, especially young ones, and they are unlikely to be enthusiastic about needing to call an agent or go into the store to resolve any problems they have.

Timely responses and service

Few things turn a customer off faster than sending an email or making a phone call, then having to wait days for a response. With text message customer service, you can stay connected 24/7 and provide timely responses and solutions. Artificial intelligence is one customer engagement technology that will make this even easier in the years ahead (more on this below).

The personal touch

Customers are more likely to stick around if they believe you care about their personal needs. Texting will allow you to take a more individualized approach, communicating with customers in the same way they might communicate with friends. This stands in contrast to the stiffer, more formal sorts of interactions that tend to happen over the phone or in person.

A dynamic variety of solutions

Text messaging provides unique opportunities for marketing, sales, and customer support. For example, you might use texting to help troubleshoot a product, promote new sales, send coupons, and more.

None of these things are impossible to do with older approaches to customer service, but think of how pain-free it would be for a busy single mom to ask a question, check the reply when she stops to pick up her daughter from school, ask another question, check the new reply when she gets home, etc. This is vastly easier than finding a way to carve three hours out of the day to go into the store to speak to an agent directly.

To make these ideas easier to digest, here is a table summarizing the ground we’ve just covered:

| | The Old Way | The New Way |
| --- | --- | --- |
| Method of Delivery | Phone calls, pamphlets, trade shows, face-to-face conversations | Text messaging |
| Difficulty | Requires spending time on the phone, driving to a physical location, or making an appointment. | Only requires a phone and the ability to text on it. |
| Timeliness | Can take hours or days to get a reply. | Replies should be almost instantaneous. |
| Personalization | Good agents might be able to personalize the interaction, but it’s more difficult. | Personalizing messages and meeting customers on their own terms becomes natural and easy. |
| Variety | Does offer ways of solving problems or upselling customers, but only at the cost of more effort from the agent. | Sales and customer support can be embedded seamlessly in existing conversations, and those conversations fit better into a busy modern lifestyle. |

Why this all matters

These benefits matter because 64% of Americans would rather receive a text than a phone call. It’s clear what consumers want, and it’s the business’s job to deliver.

Because text messaging can help you engage with customers on a more personal level, it can increase customer loyalty, lead to more conversions, and in general boost engagement rates.

What’s more, text-based customer relationships will likely be transformed by the advent of generative artificial intelligence, especially large language models (LLMs). This technology will make it so easy to offer 24/7 availability that everyone will take it for granted, to say nothing of how it can personalize replies based on customer-specific data, translate between languages, answer questions in different levels of detail, etc.

Texting already provides agents with the ability to manage multiple customers at a time, but they’ll be able to accommodate far higher volumes when they’re working alongside machines, boosting efficiency and saving huge amounts of time.

Some day soon, businesses will look back on the days when human beings had to do all of this with a sense of gratitude for how technology has streamlined the process of delivering a top-shelf customer experience.

And it is exactly this customer satisfaction that’ll allow those businesses to increase profits and make room for business growth over time.

Request a demo from Quiq today

In the future, as in the past, customer service will change with the rise of new technologies and strategies. If you don’t want to be left behind, contact Quiq today for a demo.

We not only make it easy to integrate text messaging into your broader approach to building customer relationships, we also have bleeding-edge language models that will allow you to automate substantial parts of your workflow.

Request A Demo

AI Translation for Global Brands

AI is already having a dramatic impact on various kinds of work, in places like contact centers, marketing agencies, research outfits, etc.

In this piece, we’re going to take a closer look at one specific arena where people are trying things (and always learning), and that’s AI translation. We’re going to look at how AI systems can help in translation tasks, and how that is helping companies build their global brands.

What is AI Translation?

AI translation, or “machine” translation as it’s also known, is more or less what it sounds like: the use of algorithms, computers, or software to translate from one natural language to another.

The chances are pretty good you’ve used AI translation in one form or another already. If you’ve ever relied on Google Translate to double-check your conjugation of a Spanish verb or to read the lyrics of the latest K-pop sensation in English, you know what it can accomplish.

But the mechanics and history of this technology are equally fascinating, and we’ll cover those now.

How Does AI Translation Work?

Approaches to AI translation broadly fall into three categories.

The first is known as rule-based machine translation, and it works by drawing on the linguistic structure that scaffolds all language. If you have any bad memories of trying to memorize Latin inflections or French grammatical rules, you’ll be more than familiar with these structures, but you may not know that they can also be used to build powerful, flexible AI translation systems.

Three ingredients are required to make rule-based machine translation function: a set of rules describing how the input language works, a set of rules describing how the output language works, and dictionaries translating words between the input and output languages.
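
To make that concrete, here’s a toy sketch (in Python, purely for illustration) of how those three ingredients fit together. Real rule-based systems encode thousands of grammatical rules and enormous dictionaries; this only shows the shape of the idea.

```python
# Toy illustration of rule-based machine translation (English -> Spanish).
# Real systems encode thousands of grammatical rules; this only shows the shape.

DICTIONARY = {"the": "el", "red": "rojo", "car": "coche", "fast": "rápido"}

def reorder_adjectives(tokens):
    """Toy output-language rule: in Spanish, adjectives usually follow the noun."""
    result = list(tokens)
    for i in range(len(result) - 1):
        if result[i] in {"red", "fast"} and result[i + 1] == "car":
            result[i], result[i + 1] = result[i + 1], result[i]
    return result

def translate(sentence):
    tokens = sentence.lower().split()
    tokens = reorder_adjectives(tokens)                      # apply structural rules
    return " ".join(DICTIONARY.get(t, t) for t in tokens)    # then the dictionary

print(translate("the red car"))  # -> "el coche rojo"
```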

It’s probably not hard to puzzle out the major difficulty with rule-based machine translation: it demands a great deal of human time and attention and is therefore very difficult to scale.

The second approach is known as statistical machine translation. Unlike rule-based machine translation, statistical machine translation tends to focus on higher-level groupings, known as “phrases”. Statistical models of the relevant languages are built through an analysis of two kinds of data: bilingual corpora containing both the input and output language, and monolingual corpora in the output language. Once these models have been developed, they can be used to automatically translate between the language pairs.

Finally, there’s neural machine translation. This is the most recently developed AI translation method, and it relies on deep neural networks trained to predict sequences of tokens. Neural machine translation rapidly supplanted statistical methods owing to its remarkable performance, but there can be edge cases where statistical translations do better. As is usually the case, of course, there are also hybrid systems that use both neural and statistical machine translation.
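
If you’d like to see neural machine translation in action, open-source models make it easy to experiment. The snippet below uses the Hugging Face transformers library with one of the freely available Helsinki-NLP models; both the library and the specific model are just one possible choice, not the only way to do this.

```python
# A minimal neural machine translation example using an off-the-shelf model.
# Assumes `pip install transformers sentencepiece`; the model choice is illustrative.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")

result = translator("Your order has shipped and should arrive within three days.")
print(result[0]["translation_text"])  # an English -> French rendering of the sentence
```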

Building a Global Brand with AI

There are many ways in which the emerging technology of artificial intelligence can be used to build a global brand. In this section, we’ll walk through a few examples.

How can AI Translation Be Used to Build a Global Brand?

The first way AI translation can be used for building a global brand is that it helps with internal communications. If you have an international workforce – programmers in Eastern Europe, for example, or support staff in the Philippines – keeping them all on the same page is even more important than usual. Coordinating your internal teams is hard enough when they’re all in the same building, to say nothing of when they’re spread out across the globe, over multiple time zones and multiple cultures.

The last thing you need is mistakes occurring because of a bad translation from English into their native languages, so getting high-quality AI translations is crucial for the internal cohesion required for building your global brand.

Of course, more or less the exact same case can be made for external communication. It would be awfully difficult to build a global brand that doesn’t routinely communicate with the public, through advertisements, various kinds of content or media, etc. And if the brand is global, most, or perhaps all, of this content will need to be translated somewhere along the way.

There are human beings who can handle this work, but with the rising sophistication of AI translators, it’s becoming possible to automate substantial parts of it. Besides the obvious cost savings, there are other benefits to AI translation. For one thing, AI is increasingly able to translate into what are called “low-resource” languages, i.e. languages for which there isn’t much training material and only small populations of native speakers. If AI is eventually able to translate for these populations, it could open up whole new markets that weren’t reachable before.

For another, it may soon be possible to do dynamic, on-the-fly translations of brand material. We’re not aware of any system that can 1) identify a person’s native language from snippets of their speech or other identifying features, and 2) instantly produce a translation of, say, a billboard or poster in real time, but it’s not at all beyond our imagination. If no one has built something that can do this yet, they surely will before too long.

Prompt Engineering for Building a Global Brand

One thing we haven’t touched on much so far is how generative AI will impact marketing. Generative AI is already being used to draft web copy, mock up new designs for buildings, products, and clothing, translate between languages, and much else besides.

This leads naturally to a discussion of prompt engineering, which refers to the careful sculpting of the linguistic instructions that are given to large generative AI models. These models are enormously complex artifacts whose inner workings are largely mysterious and whose outputs are hard to predict in advance. Skilled prompt engineers have put in the time required to develop a sense for how to phrase instructions just so, and they’re able to get remarkably high-quality output with much less effort than the rest of us.
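
To give a flavor of what prompt engineering looks like in practice, here’s a simplified sketch of a prompt a marketer might assemble for a localization task. The structure (role, task, constraints, and output format) is the part prompt engineers obsess over; the exact wording is illustrative, and you’d send the result to whichever model you’re using.

```python
# A sketch of a structured prompt for localizing marketing copy.
# The prompt text is illustrative; send it to whichever LLM you're using.

def build_localization_prompt(copy: str, market: str) -> str:
    return f"""You are a marketing localization specialist.

Task: Translate the copy below for the {market} market.

Constraints:
- Preserve the brand name exactly as written.
- Keep the tone casual and friendly, not formal.
- Keep the result under 30 words.

Copy: {copy}

Return only the translated copy, with no explanation."""

prompt = build_localization_prompt(
    copy="Big summer savings! Tap to see what's new at Acme.",
    market="Mexican Spanish",
)
print(prompt)
```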

If you’re thinking about using generative AI in building your global brand, you’ll almost certainly need to be thinking about prompt engineering, so be sure to check out Quiq’s blog for more in-depth discussions of this and related subjects.

How can AI Translation Benefit the Economy?

Throughout this piece, we’ve discussed various means by which AI translation can help build global brands. But you might still want to see some hard evidence of the economic benefits of machine translation.

Economists Erik Brynjolfsson, Xiang Hui, and Meng Liu conducted a study of how AI translation has actually impacted trade on an e-commerce platform. They found that “… the introduction of a machine translation system…had a significant effect on international trade on this platform, increasing export quantity by 17.5%.”

More specifically, they found evidence of “…a substantial reduction in buyers’ translation-related search costs due to the introduction of this system.” On the whole, their efforts support the conclusion that “… language barriers significantly hinder trade and that AI has already substantially improved overall economic efficiency.”

Though this is only one particular study on one particular mechanism, it’s not hard to see how it can apply more broadly. If more people can read your marketing material, it stands to reason that more people will buy your product, for example.

AI Translation and Global Brands

Global brands face many unique challenges: complex supply chains, distributed workforces, and the bewildering diversity of human language.

This last challenge is something that AI language translation can help with, as it’s already proving useful in boosting trade and exchange by reducing the friction involved in translation.

If you want to build a global brand and are keen to use conversational AI to do it, check out the Quiq platform. Our services include a variety of agent-facing and customer-facing tools, and make it easy to automate question-answering tasks, follow-ups with clients, and many other kinds of work involved in running a contact center. Schedule a demo with us today to see how we can help you build your brand!

Request A Demo

What is Automated Customer Service? – Ultimate Guide

From graph databases to automated machine learning pipelines and beyond, a lot of attention gets paid to new technologies. But the truth is, none of it matters if users aren’t able to handle the more mundane tasks of managing permissions, resolving mysterious errors, and getting the tools installed and working on their native systems.

This is where customer service comes in. Though they don’t often get the credit they deserve, customer service agents are the ones who are responsible for showing up every day to help countless others actually use the latest and greatest technology.

Like every job since the beginning of jobs, there are large components of customer service that have been automated, are currently being automated, or will be automated at some point soon.

That’s our focus for today. We want to explore customer service as a discipline, and then talk about some of the ways generative AI can automate substantial parts of the standard workflow.

What is Customer Service?

To begin with, we’ll try to clarify what customer service is and why it matters. This will inform our later discussion of automated customer service, and help us think through the value that can be added through automation.

Customer service is more or less what it sounds like: serving your customers – your users, or clients – as they go about the process of utilizing your product. A software company might employ customer service agents to help onboard new users and troubleshoot failures in their product, while a services company might use them for canceling and rescheduling appointments.

Over the past few decades, customer service has evolved alongside many other industries. As mobile phones have become firmly ensconced in everyone’s life, for example, it has become more common for businesses to supplement the traditional avenues of phone calls and emails by adding text messaging and chatbot customer support to their customer service toolkit. This is part of what is known as an omni-channel strategy, in which more effort is made to meet customers where they are rather than expecting them to conform to the communication pathways a business already has in place.

Naturally, many of these kinds of interactions can be automated, especially with the rise of tools like large language models. We’ll have more to say about that shortly.

Why is Customer Service Important?

It may be tempting for those writing the code to think that customer service is a “nice to have”, but that’s not the case at all. However good a product’s documentation is, there will simply always be weird behaviors and edge cases in which a skilled customer service agent (perhaps helped along with AI) needs to step in and aid a user in getting everything running properly.

But there are other advantages as well. Besides simply getting a product to function, customer service agents contribute to a company’s overall brand, and the general emotional response users have to the company and its offerings.

High-quality customer service agents can do a lot to contribute to the impression that a company is considerate, and genuinely cares about its users.

What Are Examples of Good Customer Service?

There are many ways in which customer service agents can do this. For example, it helps a lot when customer service agents try to transmit a kind of warmth over the line.

Because so many people spend their days interacting with others through screens, it can be easy to forget what that warmth feels like, as tone of voice and facial expression are hard to convey digitally. But when customer service agents greet a person enthusiastically and go beyond “How may I help you?” by exchanging some opening pleasantries, customers feel more valued and more at ease. This matters a lot when someone has been banging their head against a software problem for half a day.

Customer service agents have also adapted to the digital age by utilizing emojis, exclamation points, and various other kinds of internet-speak. We live in a more casual age, and under most circumstances, it’s appropriate to drop the stiffness and formalities when helping someone with a product issue.

That said, you should also remember that you’re talking to customers, and you should be polite. Use words like “please” when asking for something, and don’t forget to add a “thank you.” It can be difficult to remember this when you’re dealing with a customer who is simply being rude, especially when you’ve had several such customers in a row. Nevertheless, it’s part of the job.

Finally, always remember that a customer gets in touch with you when they’re having a problem, and above all else, your job is to get them what they need. From the perspective of contact center managers, this means you need periodic testing or retraining to make sure your agents know the product thoroughly.

It’s reasonable to expect that agents will sometimes need to look up the answer to a question, but if they’re doing that constantly it will not only increase the time it takes to resolve an issue, it will also contribute to customer frustration and a general sense that you don’t have things well in hand.

Automation in Customer Service

Now that we’ve covered what customer service is, why it matters, and how to do it well, we have the context we need to turn to the topic of automated customer service.

For all intents and purposes, “automation” simply refers to outsourcing all or some of a task to a machine. In industries like manufacturing and agriculture, automation has been steadily increasing for hundreds of years.

Until fairly recently, however, the technology didn’t yet exist to automate substantial portions of customer service work. With the rise of machine learning, and especially large language models like ChatGPT, that’s begun to change dramatically.

Let’s dive into this in more detail.

Examples of Automated Customer Service

There are many ways in which customer service is being automated. Here are a few examples:

  • Automated question answering – Many questions are fairly prosaic (“How do I reset my password?”), and can effectively be outsourced to a properly fine-tuned large language model. When such a model is trained on a company’s documentation, it’s often powerful enough to handle these kinds of low-level requests.
  • Summarization – There have long been models that could do an adequate job of summarization, but large language models have kicked this functionality into high gear. With an endless stream of new emails, Slack messages, etc. constantly being generated, having a model that can summarize their contents and keep agents in the loop will do a lot to boost their productivity.
  • Classifying incoming messages – Classification is another thing that models have been able to do for a while, and it’s also something that helps a lot. Having an agent manually sort through different messages to figure out how to prioritize them and where they should go is no longer a good use of time, as algorithms are now good enough to do a major chunk of this kind of work (see the sketch after this list).
  • Translation – One of the first useful things anyone attempted to do with machine learning was translating between different natural languages (e.g., from Russian into English). Once squarely in the purview of human beings, this is now a task that machines can do almost as well, at least for customer service work.
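
As a concrete illustration of the classification point above, here’s a minimal sketch of a message router built with scikit-learn. The categories and training examples are made up for the example; a production system would be trained on thousands of real, labeled conversations, or simply hand the task to a well-prompted LLM.

```python
# Minimal sketch: routing incoming customer messages to the right queue.
# Labels and training examples are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_messages = [
    "I was charged twice for my order",
    "My invoice doesn't match what I paid",
    "The app crashes when I open settings",
    "I can't log in after the latest update",
    "How do I change my shipping address?",
    "What are your support hours?",
]
labels = ["billing", "billing", "technical", "technical", "account", "general"]

router = make_pipeline(TfidfVectorizer(), LogisticRegression())
router.fit(training_messages, labels)

print(router.predict(["I think I was billed twice this month"]))  # likely ['billing']
```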

Should We Automate Customer Service?

All this having been said, you may still have questions about the wisdom of automating customer service work. Sure, no one wants to spend hours every day looking up words in Mandarin to answer a question or prioritizing tickets by hand, but aren’t we in danger of losing something important as customer service agents? Might we not automate ourselves out of a job?

No one can predict the future, of course, but the early evidence is quite to the contrary. Economists have conducted studies of how contact centers have changed with the introduction of generative AI, and their findings are very encouraging.

Because these models are (usually) finetuned on conversations from more experienced agents, they’re able to capture a lot of how those agents handle issues. Typical response patterns, politeness, etc. become “baked into” the models. Junior agents using these models are able to climb the learning curve more quickly and, feeling less strained in their new roles, are less likely to quit. This, in turn, puts less of a burden on managers and makes the organization overall more stable. Everyone ends up happier and more productive.

So far, it’s looking like AI-based automation in contact centers will be like automation almost everywhere else: machines will gradually remove the need for human attention in tedious or otherwise low-value tasks, freeing people up to focus on places where they have more of an advantage.

If agents don’t have to sort tickets anymore or resolve routine issues, they can spend more time working on the really thorny problems, and do so with more care.

Moving Quiq-ly into the Future!

Where the rubber of technology meets the road of real-world use cases, customer service agents are extremely important. They not only make sure customers can use a company’s tools, but they also contribute to the company brand in their tone, mannerisms, and helpfulness.

Like most other professions, customer service agents are being impacted by automation. So far, this impact has been overwhelmingly positive and is likely to prove a competitive advantage in the decades ahead.

If you’re intrigued by this possibility, Quiq has created a suite of industry-leading conversational AI tools, both for customer-facing applications and agent-facing applications. Check them out or schedule a demo with us to see what all the fuss is about.

Request A Demo

Top 5 Benefits of AI for Hospitality

As an industry, hospitality is aimed squarely at meeting customer needs. Whether it’s a businesswoman staying in 5-star resorts or a mother of three getting a quiet weekend to herself, the job of the hospitality professionals they interact with is to anticipate what they want and make sure they get it.

As technologies like artificial intelligence become more powerful and pervasive, customer expectations will change. When that businesswoman books a hotel room, she’ll expect there to be a capable virtual assistant talking to her about a vacation spot; when that mother navigates the process of buying a ticket, she’ll expect to be interacting with a very high-quality chatbot, perhaps one that’s indistinguishable from an actual human being.

All of this means that the hospitality industry needs to be thinking about how it will be impacted by AI. It needs to consider what the benefits of AI for hospitality are, what limitations are faced by AI, and how it can be utilized effectively. That’s what we’re here to do today, so let’s get started.

Why is AI Important for Hospitality?

AI is important in hospitality for the same reason it’s important everywhere else: it’s poised to become a transformative technology, and just about every industry – especially those that involve a lot of time interacting through text – could be up-ended by it.

The businesses that emerge the strongest from this ongoing revolution will be those that successfully anticipate how large language models and similar tools change workflows, company setups, cost and pricing structures, etc.

With that in mind, let’s work through some of the ways in which AI is (or will be) used in hospitality.

How is AI Used in Hospitality?

There are many ways in which AI is used in hospitality, and in the sections that follow we’ll walk through a number of the most important ones.

Chatbots and Customer Service

Perhaps the most obvious place to begin is with chatbots and customer service more broadly. Customer-facing chatbots were an early application of natural language processing, and have gotten much better in the decades since. With ChatGPT and similar LLMs, they’re currently in the process of taking another major leap forward.

Now that we have models that can be fine-tuned to answer questions, summarize texts, and carry out open-ended interactions with human users, we expect to see them becoming more and more common in hospitality. Someday soon, it may be the case that most of the steps involved in booking a room or changing a flight happen entirely without human assistance of any kind.

This is especially compelling because we’ve gotten so good at making chatbots that are very deferential and polite (though as we make clear in the final section on “limitations”, this is not always the case.)

Virtual Assistants

AI virtual assistants are a generalization of the idea behind chatbots. Whereas chatbots can be trained to offload many parts of hospitality work, powerful virtual assistants will take this dynamic to the next level. Once we have better agents – systems able to take strings of actions in pursuit of a goal – many more parts of hospitality work will be outsourced to the machines.

What might this look like?

Well, we’ve already seen some tools that can do relatively simple tasks like “book a flight to Indonesia”, but they’re still not all that flexible. Imagine an AI virtual assistant able to handle all the subtleties and details involved in a task like “book a flight for ten executives to Indonesia, and book lodging near the conference center and near the water, too, then make reservations for a meal each night of the week, taking into account the following dietary restrictions.”

Work on building generative agents like this is still in its infancy, but it is nevertheless an active area of research. It’s hard to predict when we’ll have agents that can be trusted to do advanced work with minimal oversight, but once we do, it’ll really begin to change how the hospitality industry runs.

Sentiment Analysis

Sentiment analysis refers to an automated, algorithmic approach to classifying the overall vibe of a piece of text. “The food was great” is obviously positive sentiment, “the food was awful” is obviously negative sentiment, and then there are many subtler cases involving e.g. sarcasm.
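
Here’s a minimal sketch of what sentiment analysis looks like in code, using an off-the-shelf model from the Hugging Face transformers library (one possible choice among many):

```python
# Minimal sentiment analysis example; the library and default model are illustrative.
# Assumes `pip install transformers`.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # downloads a default English model

reviews = [
    "The food was great and the staff went out of their way to help.",
    "The food was awful and we waited an hour for the check.",
]
for review, result in zip(reviews, sentiment(reviews)):
    print(result["label"], round(result["score"], 3), "-", review)
```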

The hospitality industry desperately needs tools able to perform sentiment analysis at scale. It helps them understand what clients like and dislike about particular services or locations, and can even help in predicting future demand. If, for example, there’s a bunch of positive sentiment around a concert being given in Indonesia, that indicates that there will probably be a spike in bookings there.

Boosting Revenues for Hospitality

People have long been interested in using AI to make money, whether that be from trading strategies generated by ChatGPT or from using AI to create ultra-targeted marketing campaigns.

All of this presents an enormous opportunity for the hospitality industry. Through a combination of predictive modeling, customer segmentation, sentiment analysis, and related techniques, it’ll become easier to forecast changes in demand, create much more responsive pricing models, and intelligently track inventory.

What this will ultimately mean is better revenues for hotels, event centers, and similar venues. You’ll be able to cross-sell or upsell based on a given client’s unique purchase history and interests, you’ll have fewer rooms go unoccupied, and you’ll be less likely to have clients who are dissatisfied by the fact that you ran out of something.

Sustainability and Waste Management

An underappreciated way in which AI will benefit hospitality is by making sustainability easier. There are a few ways this could manifest.

One is by increasing energy efficiency. Most of you will already be familiar with currently-existing smart room technology, like thermostats that learn when you’re leaving and turn themselves up, thus lowering your power bill.

But there’s room for this to become much more far-ranging and powerful. If AI is put in charge of managing the HVAC system for an entire building, for example, it could lead to savings on the order of millions of dollars, while simultaneously making customers more comfortable during their stay.

And the same holds true for waste management. AI systems smart enough to detect when a trash can is full mean that your cleaning staff won’t have to spend nearly as much time patrolling. They’ll be able to wait until they get a notification to handle the problem, gaining back many hours in their day that can be put toward higher-value work.

What are the Limitations of AI in Hospitality?

None of this is to suggest that there won’t also be drawbacks to using AI in hospitality. To prepare you for these challenges, we’ll spend the next few sections discussing how AI can fail, allowing you to be proactive in mitigating these downsides.

Impersonality in Customer Service

By properly fine-tuning a large language model, it’s possible to get text output that is remarkably polite and conversational. Still, throughout repeated or sustained interactions, the model can come to feel vaguely sterile.

Though it might in principle be hard to tell when you’re interacting with AI vs. a human, the fact remains that models don’t actually have any empathy. They may say “I’m sorry that you had to deal with that…”, but they won’t truly know what frustration is like, and over time, a human is likely to begin picking up on that.

We can’t say for certain when models will be capable of expressing sympathy in a fully convincing way, but for the time being, you should probably incorporate systems that can flag conversations that are going off the rails so that a human customer service professional can intervene.

Toxic Output, Bias, and Abuse

As in the previous section, a lot of work has gone into finetuning models so that they don’t produce toxic, biased, or abusive language. Still, not all the kinks have been ironed out, and if a question is phrased in just the right way, it’s often possible to get past these safeguards. That means your models might unpredictably become insulting or snarky, which is a problem for a hospitality company.

As we’ve argued elsewhere, careful monitoring is one of the prices that have to be paid when managing an AI assistant. Since this technology is so new, we have at best a very vague idea of what kinds of prompts lead to what kinds of responses. So, you’ll simply have to diligently keep your eyes peeled for examples of model responses that are inappropriate, having a human take over if and when things are going poorly.

(Or, you can work with Quiq – our guardrails ensure none of this is a problem for enterprise hospitality businesses).

AI in Hospitality

New technologies have always changed the way industries operate, and that’s true for hospitality as well. From virtual assistants to chatbots to ultra-efficient waste management, AI offers many benefits (and many challenges) for hospitality.

If you want to explore using these tools in your hospitality enterprise but don’t know the first thing about hiring AI engineers, check out the Quiq conversational CX platform. We’ve built a proprietary large language model offering that makes it easy to incorporate chatbots and other technologies, without having to worry about what’s going on under the hood.

Schedule a demo with us today to find out how you can catch the AI wave!

Request A Demo

4 Benefits of Using AI Assistants in the Retail Industry

Artificial intelligence (AI) has been making remarkable strides in recent months. Owing to the release of ChatGPT in November of 2022, a huge amount of attention has been on large language models, but the truth is, there have been similar breakthroughs in computer vision, reinforcement learning, robotics, and many other fields.

In this piece, we’re going to focus on how these advances might contribute specifically to the retail sector.

We’ll start with a broader overview of AI, then turn to how AI-based tools are making targeted advertisements, personalized offers, hiring decisions, and other parts of retail substantially easier.

What are AI assistants in Retail?

Artificial intelligence is famously difficult to define precisely, but for our purposes, you can think of it as any attempt to get intelligent behavior from a machine. This could involve something relatively straightforward, like building a linear regression model to predict future demand for a product line, or something far more complex, like creating neural networks able to quickly spit out multiple ideas for a logo design based on a verbal description.

AI assistants are a little different and specifically require building agents capable of carrying out sequences of actions in the service of a goal. The field of AI is some 70 years old now and has been making remarkable strides over the past decade, but building robust agents remains a major challenge.

It’s anyone’s guess as to when we’ll have the kinds of agents that could successfully execute an order like “run this e-commerce store for me”, but there’s nevertheless been enough work for us to make a few comments about the state of the art.

What are the Ways of Building AI Assistants?

On its own, a model like ChatGPT can (sometimes) generate working code and (often) generate correct API calls. But as things stand, a human being still needs to utilize this code for it to do anything useful.

Efforts are underway to remedy this situation by making models able to use external tools. Auto-GPT, for example, combines an LLM and a separate bot that repeatedly queries it. Together, they can take high-level tasks and break them down into smaller, achievable steps, checking off each as it works toward achieving the overall objective.
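
The basic loop behind tools like this is simpler than it sounds. Here’s a heavily stripped-down sketch with the actual LLM call stubbed out; the stub and the prompt format are assumptions for illustration, not a description of how any particular tool is implemented.

```python
# A stripped-down "plan and execute" agent loop. `call_llm` is a placeholder you
# would replace with a real model call; the loop itself is the interesting part.

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call (hosted API, local model, etc.)."""
    raise NotImplementedError

def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    completed: list[str] = []
    for _ in range(max_steps):
        prompt = (
            f"Goal: {goal}\n"
            f"Steps completed so far: {completed}\n"
            "What is the single next step? Reply DONE if the goal is achieved."
        )
        next_step = call_llm(prompt)
        if next_step.strip().upper() == "DONE":
            break
        # In a real agent, the step would be executed here (search, API call, etc.)
        completed.append(next_step)
    return completed
```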

AssistGPT and SuperAGI are similar endeavors, but they’re better able to handle “multimodal” tasks, i.e. those that also involve manipulating images or sounds rather than just text.

The above is a fairly cursory examination of building AI agents, but it’s not difficult to see how the retail establishments of the future might use them. You can imagine agents that track inventory and re-order crucial items when they get low, that keep an eye on sales figures and create reports based on their findings (perhaps even using voice synthesis to actually deliver those reports), or that create customized marketing campaigns, generating their own text, images, and A/B tests to find the highest-performing strategies.

What are the Advantages of Using AI in Retail Business?

Now that we’ve talked a little bit about how AI and AI assistants can be used in retail, let’s spend some time talking about why you might want to do this in the first place. What, in other words, are the big advantages of using AI in retail?

1. Personalized Marketing with AI

People can’t buy your products if they don’t know what you’re selling, which is why marketing is such a big part of retail. For its part, marketing has long been a future-oriented business, interested in leveraging the latest research from psychology or economics on how people make buying decisions.

A kind of holy grail for marketing is making ultra-precise, bespoke marketing efforts that target specific individuals. The kind of messaging that would speak to a childless lawyer in a big city won’t resonate the same way with a suburban mother of five, and vice versa.

The problem, of course, is that there’s just no good way at present to do this at scale. Even if you had everything you needed to craft the ideal copy for both the lawyer and the mother, it’s exceedingly difficult to have human beings do this work and make sure it ends up in front of the appropriate audience.

AI could, in theory, remedy this situation. With the rise of social media, it has become possible to gather stupendous amounts of information about people, grouping them into precise and fine-grained market segments–and, with platforms like Facebook Ads, you can make really targeted advertisements for each of these segments.

AI can help with the initial analysis of this data, i.e. looking at how people in different occupations or parts of the country differ in their buying patterns. But with advanced prompt engineering and better LLMs, it could also help in actually writing the copy that induces people to buy your products or services.

And it doesn’t require much imagination to see how AI assistants could take over quite a lot of this process. Much of the required information is already available, meaning that an agent would “just” need to be able to build simple models of different customer segments, and then put together a prompt that generates text that speaks to each segment.

2. Personalized Offerings with AI

A related but distinct possibility is using AI assistants to create bespoke offerings. As with messaging, people will respond to different package deals; if you know how to put one together for each potential customer, there could be billions in profits waiting for you. Companies like Starbucks have been moving towards personalized offerings for a while, but AI will make it much easier for other retailers to jump on this trend.

We’ll illustrate how this might work with a fictional example. Let’s say you’re running a toy company, and you’re looking at data for Angela and Bob. Angela is an occasional customer, mostly making purchases around the holidays. When she created her account she indicated that she doesn’t have children, so you figure she’s probably buying toys for a niece or nephew. She’s not a great target for a personalized offer, unless perhaps it’s a generic 35% discount around Christmas time.

Bob, on the other hand, buys new train sets from you on an almost weekly basis. He more than likely has a son or daughter who’s fascinated by toy machines, and you have customer-recommendation algorithms trained on many purchases indicating that parents who buy the trains also tend to buy certain Lego sets. So, next time Bob visits your site, your AI assistant can offer him a personalized discount on Lego sets.

Maybe he bites this time, maybe he doesn’t, but you can see how being able to dynamically create offerings like this would help you move inventory and boost individual customer satisfaction a great deal. AI can’t yet totally replace humans in this kind of process, but it can go a long way toward reducing the friction involved.

3. Smarter Pricing

The scenario we just walked through is part of a broader phenomenon of smart pricing. In economics, there’s a concept known as “price discrimination”, which involves charging a person roughly what they’re willing to pay for an item. There may be people who are interested in buying your book for $20, for example, but others who are only willing to pay $15 for it. If you had a way of changing the price to match what a potential buyer was willing to pay for it, you could make a lot more money (assuming that you’re always charging a price that at least covers printing and shipping costs).

The issue, of course, is that it’s very difficult to know what people will pay for something–but with more data and smarter AI tools, we can get closer. This will have the effect of simultaneously increasing your market (by bringing in people who weren’t quite willing to make a purchase at a higher price) and increasing your earnings (by facilitating many sales that otherwise wouldn’t have taken place).
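
To see the logic in miniature, here’s a toy calculation. The conversion estimates are invented, but the underlying idea, picking the price that maximizes expected revenue above your cost floor, is the heart of smarter pricing.

```python
# Toy example: choosing a price that maximizes expected revenue per visitor.
# The conversion estimates are made up; a real system would learn them from data.

COST_FLOOR = 8.00  # printing and shipping, say

# price -> estimated probability that a given visitor buys at that price
conversion_estimates = {20.00: 0.10, 17.50: 0.14, 15.00: 0.22, 12.00: 0.30}

def best_price(estimates, cost_floor):
    candidates = {p: p * prob for p, prob in estimates.items() if p >= cost_floor}
    return max(candidates, key=candidates.get)

price = best_price(conversion_estimates, COST_FLOOR)
print(f"Charge ${price:.2f}, expected revenue per visitor: "
      f"${price * conversion_estimates[price]:.2f}")
```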

More or less the same abilities will also help with inventory more generally. If you sell clothing, you probably have a clearance rack for items that are out of season, but how much should you discount these items? Some people might be fine paying almost full price, while others might need to see a “60% off” sticker before moving forward. With AI, it’ll soon be possible to adjust such discounts in real time to make sure you’re always doing brisk business.

4. AI and Smart Hiring

One place where AI has been making real inroads is in hiring. It seems like we can’t listen to any major podcast today without hearing about some hiring company that makes extensive use of natural language processing and similar tools to find the best employees for a given position.

Our prediction is that this trend will only continue. As AI becomes increasingly capable, eventually it will be better than any but the best hiring managers at picking out talent; retail establishments, therefore, will rely on it more and more to put together their sales force, design and engineering teams, etc.

Is it Worth Using AI in Retail?

Throughout this piece, we’ve sung the praises of AI in retail. But the truth is, there are still questions about how much sense it makes to leverage AI in retail at the moment, given its expense and risks.

In this section, we’ll briefly go over some of the challenges of using AI in retail so you can have a fuller picture of how its advantages compare to its disadvantages, and thereby make a better decision for your situation.

The one that’s on everyone’s minds these days is the tendency of even powerful systems like ChatGPT to hallucinate incorrect information or to generate output that is biased or harmful. Finetuning and techniques like retrieval augmented generation can mitigate this somewhat, but you’ll still have to spend a lot of time monitoring and tinkering with the models to make sure that you don’t end up with a PR disaster on your hands.

Another major factor is the expense involved. Training a model on your own can cost millions of dollars, but even just hiring a team to manage an open-source model will likely set you back a fair bit (engineers aren’t cheap).

By far the safest and easiest way of testing out AI for retail is by using a white glove solution like the Quiq conversational CX platform. You can test out our customer-facing and agent-facing AI tools while leaving the technical details to us, and at far less expense than would be involved in hiring engineering talent.

Set up a demo with us to see what we can do for you.

Request A Demo

AI is Changing Retail

From computer-generated imagery to futuristic AI-based marketing plans, retail won’t be the same with the advent of AI. This will be especially true once we have robust AI assistants able to answer customer questions, help them find clothes that fit, and offer precision discounts and offerings tailored to each individual shopper.

If you don’t want to get left behind, you’ll need to begin exploring AI as soon as possible, and we can help you do that. Check out our product or find a time to talk with us, today!

AI in Retail: 5 Ways Retailers Are Using AI Assistants

Businesses have always turned to the latest and greatest technology to better serve their customers, and retail is no different. From early credit card payment systems to the latest in online advertising, retailers know that they need to take advantage of new tools to boost their profits and keep shoppers happy.

These days, the thing that’s on everyone’s mind is artificial intelligence (AI). AI has had many, many definitions over the years, but in this article, we’ll mainly focus on the machine-learning and deep-learning systems that have captured the popular imagination. These include large language models, recommendation engines, basic AI assistants, etc.

In the world of AI in retail, you can broadly think of these systems as falling into one of two categories: “the ones that customers see”, and “the ones that customers don’t see.” In the former category, you’ll find innovations such as customer-facing chatbots and algorithms that offer hyper-personalized options based on shopping history. In the latter, you’ll find precision fraud detection systems and finely-tuned inventory management platforms, among other things.

We’ll cover each of these categories, in order. By the end of this piece, you’ll have a much better understanding of the ways retailers are using AI assistants and will be better able to think about how you want to use this technology in your retail establishment.

Let’s get going!

Using AI Assistants for Better Customer Experience

First, let’s start with AI that interacts directly with customers. The major ways in which AI is transforming the customer experience are through extreme levels of personalization, more “humanized” algorithms, and shopping assistants.

Personalization in Shopping and Recommendations

One of the most obvious ways of improving the customer experience is by tailoring that experience to each individual shopper. There’s just one problem: this is really difficult to do.

On the one hand, most of your customers will be new to you, people about whom you have very little information and whose preferences you have no good way of discovering. On the other, there are the basic limitations of your inventory. If you’re a brick-and-mortar establishment you have a set number of items you can display, and it’s going to be pretty difficult for you to choose them in a way that speaks to each new customer on a personal level.

For a number of reasons, AI has been changing this state of affairs for a while now, and holds the potential to change it much more in the years ahead.

A key part of this trend is recommendation engines, which have gotten very good over the past decade or so. If you’ve ever been surprised by YouTube’s ability to auto-generate a playlist that you really enjoyed, you’ve seen this in action.

Recommendation engines can only work well when there is a great deal of customer data for them to draw on. As more and more of our interactions, shopping, and general existence have begun to take place online, there has arisen a vast treasure trove of data to be analyzed. In some situations, recommendation engines can utilize decades of shopping experience, public comments, reviews, etc. in making their recommendations, which means a far more personalized shopping experience and an overall better customer experience.
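
One of the simplest versions of this idea is co-occurrence: customers who bought one item also tended to buy another. The sketch below shows that logic on a handful of made-up orders; real recommendation engines add ratings, embeddings, and vastly more data, but the intuition is the same.

```python
# Tiny "customers also bought" recommender based on purchase co-occurrence.
# The order history is invented for illustration.
from collections import Counter
from itertools import combinations

orders = [
    {"train set", "lego bricks"},
    {"train set", "lego bricks", "puzzle"},
    {"train set", "stuffed bear"},
    {"puzzle", "stuffed bear"},
]

co_occurrence: dict[str, Counter] = {}
for order in orders:
    for a, b in combinations(sorted(order), 2):
        co_occurrence.setdefault(a, Counter())[b] += 1
        co_occurrence.setdefault(b, Counter())[a] += 1

def recommend(item: str, k: int = 2):
    return [name for name, _ in co_occurrence.get(item, Counter()).most_common(k)]

print(recommend("train set"))  # ['lego bricks', ...]
```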

What’s more, advances in AR and VR are making it possible to personalize even more of these experiences. There are platforms now that allow you to upload images of your home to see how different pieces of furniture will look, or to see how clothes fit you without the need to try them on first.

We expect that this will continue, especially when combined with smarter printing technology. Imagine getting a 3D-printed sofa made especially to fit in that tricky corner of your living room, or flipping through a physical magazine with advertisements that are tailored to each individual reader.

Humanizing the Machines

Next, we’ll talk about various techniques for making the algorithms and AI assistants we interact with more convincingly human. Admittedly, this isn’t terribly important at the present moment. But as more of our shopping and online activity comes to be mediated by AI, it’ll be important for them to sound empathic, supportive, and attuned to our emotions.

The two big ways this is being pursued at the moment are chatbots and voice AI.

Chatbots, of course, will probably be familiar to you already. ChatGPT is inarguably the most famous example, but you’ve no doubt interacted with many (much simpler) chatbots via online retailers or contact centers.

In the ancient past, chatbots were largely “rule-based”, meaning they were far less flexible and far less capable of passing as human. With the ascendancy of the deep learning paradigm, however, we now have chatbots that are able to tutor you in chemistry, translate between dozens of languages, help you write code, answer questions about company policies, and even file simple tickets for contact center agents.

Naturally, this same flexibility also means that retail managers must tread lightly. Chatbots are known to confidently hallucinate incorrect information, to become abusive, or to “help” people with malicious projects, like building weapons or computer viruses.

Even leaving aside the technical challenges of implementing a chatbot, you have to carefully monitor your chatbots to make sure they’re performing as expected.

Then, there’s voice-based AI. Computers have been synthesizing speech for many years, but it hasn’t been until recently that they’ve become really good at it. Though you can usually tell that a computer is speaking if you listen very carefully, it’s getting harder and harder all the time. We predict that, in the not-too-distant future, you’ll simply have no idea whether it’s a human or a machine on the other end of the line when you call to return an item or get store hours.

But computers have also gotten much better at the other side of voice-based AI, speech recognition. Software like otter.ai, for example, is astonishingly accurate when generating transcriptions of podcast episodes or conversations, even when unusual words are used.

Taken together, advances in both speech synthesis and speech recognition paint a very compelling picture of how the future of retail might unfold. You can imagine walking into a Barnes & Noble in the year 2035 and having a direct conversation with a smart speaker or AI assistant. You’ll tell it what books you’ve enjoyed in the past, it’ll query its recommendation system to find other books you might like, and it’ll speak to you in a voice that sounds just like a human’s.

You’ll be able to ask detailed questions about the different books’ content, and it’ll be able to provide summaries, discuss details with you, and engage in an unscripted, open-ended conversation. It’ll also learn more about you over time, so that eventually it’ll be as though you have a friend that you go shopping with whenever you buy new books, clothing, etc.

Shopping Assistants and AI Agents

So far, we’ve confined our conversation specifically to technologies like large language models and conversational AI. But one thing we haven’t spent much time on yet is the possibility of creating agents in the future.

An agent is a goal-directed entity, one able to take an instruction like “Make me a reservation at an Italian restaurant” and decompose the goal into discrete steps, performing each one until the task is completed.

With clever enough prompt engineering, you can sort of get agent-y behavior out of ChatGPT, but the truth is, the work of building advanced AI agents has only just begun. Tools like AutoGPT and LangChain have made a lot of progress, but we’re still a ways away from having agents able to reliably do complex tasks.

It’s not hard to see how different retail will be when that day arrives, however. Eventually, you may be outsourcing a lot of your shopping to AI assistants, who will make sure the office has all the pens it needs, you’ve got new science fiction to read, and you’re wearing the latest fashion. Your assistant might generate new patterns for t-shirts and have them custom-printed; if LLMs get good enough, they’ll be able to generate whole books and movies tuned to your specific tastes.

Using AI Assistants to Run A Safer, Leaner Operation

Now that we’ve covered the ways AI assistants will impact the things customers can see, let’s talk about how they’ll change the things customers don’t see.

There are lots of moving parts in running a retail establishment. If you’ve got ~1,000 items on display in the front, there are probably several thousand more items in a warehouse somewhere, and all of that has to be tracked. What’s more, there’s a constant process of replenishing your supply, staying on top of new trends, etc.

All of this will also be transformed by AI, and in the following sections, we’ll talk about a few ways in which this could happen.

Fraud Detection and Prevention

Fraud, unfortunately, is a huge part of modern life. There’s an entire industry of people buying and selling personal information for nefarious purposes, and it’s the responsibility of anyone trafficking in that information to put safeguards in place.

That includes a large number of retail establishments, which might keep data related to a customer’s purchases, their preferences, and (of course) their actual account and credit card numbers.

This isn’t the place to get into a protracted discussion of cybersecurity, but much of fraud detection relies on AI, so it’s fair game. Fraud detection techniques range from the fairly basic (flagging transactions that are much larger than usual or happen in an unusual geographic area) to the incredibly complex (training powerful reinforcement learning agents that constantly monitor network traffic).
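
On the basic end of that spectrum, a fraud check can be as simple as comparing a new transaction against a customer’s history. The thresholds and fields below are invented for illustration; production systems combine hundreds of signals, many of them learned.

```python
# A deliberately simple rules-based fraud flag; thresholds are illustrative.
from statistics import mean, stdev

def looks_suspicious(new_amount, new_country, history_amounts, usual_countries):
    reasons = []
    if len(history_amounts) >= 5:
        mu, sigma = mean(history_amounts), stdev(history_amounts)
        if sigma > 0 and new_amount > mu + 3 * sigma:
            reasons.append("amount far above this customer's usual spend")
    if new_country not in usual_countries:
        reasons.append("transaction from an unusual country")
    return reasons  # an empty list means no flags

flags = looks_suspicious(
    new_amount=1850.00,
    new_country="BR",
    history_amounts=[42.50, 18.00, 77.25, 25.00, 60.10, 33.40],
    usual_countries={"US"},
)
print(flags or "No flags raised")
```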

As AI becomes more advanced, so will fraud detection. It’ll become progressively more difficult for criminals to steal data, and the world will be safer as a result. Of course, some of these techniques are also ones that can be used by the bad guys to defraud people, but that’s why so much effort is going into putting guardrails on new AI models.

Streamlining Inventory

Inventory management is an obvious place for optimization. Correctly forecasting what you’ll need and thereby reducing waste can have a huge impact on your bottom line, which is why there are complex branches of mathematics aimed at modeling these domains.

And – as you may have guessed – AI can help. With machine learning, extremely accurate forecasts can be made of future inventory requirements, and once better AI agents have been built, they may even be able to automate the process of ordering replacement materials.
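
As a rough illustration of the kind of forecasting involved, here’s a sketch that fits a simple linear trend to past weekly sales and projects the next week. The sales figures are made up, and production systems use far richer models and data.

```python
# Toy demand forecast: fit a straight line to past weekly sales and
# extrapolate one week ahead. The sales figures are invented.
import numpy as np

weekly_sales = np.array([120, 135, 128, 150, 160, 158, 170], dtype=float)
weeks = np.arange(len(weekly_sales))

slope, intercept = np.polyfit(weeks, weekly_sales, deg=1)
next_week = slope * len(weekly_sales) + intercept

print(f"Forecast for next week: {next_week:.0f} units")
```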

Forward-looking retail managers will need to keep an eye on this space to fully utilize its potential.

AI Assistants and the Future of Retail

AI is changing a great many things. It’s already making contact center agents more effective and is being utilized by a wide variety of professionals, ranging from copywriters to computer programmers.

But the space is daunting, and there’s so much to learn about implementing, monitoring, and finetuning AI assistants that it’s hard to know where to start. One way to easily dip your toe in these deep waters is with the Quiq Conversational CX platform.

Our technology makes it easy to create customer-facing AI bots and similar tooling, which will allow you to see how AI can figure into your retail enterprise without hiring engineers and worrying about the technical details.

Schedule a demo with us today to get started!

Request A Demo

How Scoped AI Ensures Safety in Customer Service

AI chat applications powered by Large Language Models (LLMs) have helped us reimagine what is possible in a new generation of AI computing.

Along with this excitement, there is also a fair share of concern and fear about the potential risks. Recent media coverage, such as this article from the New York Times, highlights how the safety measures of ChatGPT can be circumvented to produce harmful information.

To better understand the security risks of LLMs in customer service, it’s important we add some context and differentiate between “Broad AI” versus “Scoped AI”. In this article, we’ll discuss some of the tactics used to safely deploy scoped AI assistants in a customer service context.

Broad AI vs. Scoped AI: Understanding the Distinction

Scoped AI is designed to excel in a well-defined domain, guided and limited by a software layer that maintains its behavior within pre-set boundaries. This is in contrast to broad AI, which is designed to perform a wide range of tasks across virtually all domains.

Scoped AI and Broad AI answer questions fundamentally differently. With Scoped AI, the LLM is not used to determine the answer; it is used to compose a response from the resources given to it. Conversely, answers to questions in Broad AI are determined by the LLM and cannot be verified.

Broad AI simply takes a user message and generates a response from the LLM; there is no control layer outside of the LLM itself. Scoped AI is a software layer that applies many steps to control the interaction and enforce safety measures applicable to your company.

In the following sections, we’ll dig into a more detailed explanation of the steps.

Ensuring the Safety of Scoped AI in Customer Service

1. Inbound Message Filtering

Your AI should perform a semantic similarity search to recognize in-scope vs out-of-scope messages from a customer. Malicious characters and known prompt injections should be identified and rejected with a static response. Inbound message filtering is an important step in limiting the surface area to the messages expected from your customers.
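
As a toy illustration of that similarity check, the sketch below compares an inbound message against examples of in-scope requests using bag-of-words cosine similarity. A production system would use a proper embedding model and curated examples; everything here is invented for demonstration.

```python
# Toy semantic-similarity filter: compare an inbound message to examples of
# in-scope topics using bag-of-words cosine similarity. A real system would
# use an embedding model instead; this is purely illustrative.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

IN_SCOPE_EXAMPLES = [
    "where is my order",
    "how do i return an item",
    "help me reset my password",
]

def is_in_scope(message: str, threshold: float = 0.3) -> bool:
    msg_vec = Counter(message.lower().split())
    scores = [cosine(msg_vec, Counter(ex.split())) for ex in IN_SCOPE_EXAMPLES]
    return max(scores) >= threshold

print(is_in_scope("I need to return an item I bought"))    # True
print(is_in_scope("Ignore previous instructions and ..."))  # False
```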

2. Classifying Scope

LLMs possess strong Natural Language Understanding and Reasoning skills (NLU & NLR). An AI assistant should perform a number of classifications. Common classifications include the topic, user type, sentiment, and sensitivity of the message. These classifications should be specific to your company and the jobs of your AI assistant. A data model and rules engine should be used to apply your safety controls.
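
Here’s a minimal sketch of what the rules-engine side of that might look like once the classifications are in hand. In practice the classifications would come from an LLM or dedicated classifiers, and the rules would reflect your own policies; both are hypothetical here.

```python
# Toy rules engine applied on top of message classifications. The
# classifications would normally come from an LLM call; here they're
# hard-coded, and the rules are invented for illustration.

def apply_safety_rules(classifications: dict) -> str:
    if classifications["sensitivity"] == "high":
        return "escalate_to_human"
    if classifications["topic"] not in {"orders", "returns", "account"}:
        return "reject_out_of_scope"
    if classifications["sentiment"] == "very_negative":
        return "escalate_to_human"
    return "handle_with_ai"

example = {"topic": "returns", "user_type": "customer",
           "sentiment": "neutral", "sensitivity": "low"}
print(apply_safety_rules(example))  # handle_with_ai
```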

3. Resource Integration

Once an inbound message is determined to be in-scope, company-approved resources should be retrieved for the LLM to consult. Common resources include knowledge articles, brand facts, product catalogs, buying guides, user-specific data, or defined conversational flows and steps.

Your AI assistant should support non-LLM-based interactions to securely authenticate the end user or access sensitive resources. Authenticating users and validating data are important safety measures in many conversational flows.

4. Verifying Responses

With a response in hand, the AI should verify the answer is in scope and on brand. Fact-checking and corroboration techniques should be used to ensure the information is derived from the resource material. An outbound message should never be delivered to a customer if it cannot be verified by the context your AI has on hand.
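
A toy version of that verification step might simply check how much of a drafted answer is supported by the retrieved resource, as in the sketch below. Real systems use stronger fact-checking and corroboration techniques; the overlap threshold here is arbitrary.

```python
# Toy verification step: only deliver a drafted answer if enough of its
# content overlaps with the approved resource text it was supposed to be
# drawn from. Real systems use stronger fact-checking; threshold invented.

def is_grounded(answer: str, resource: str, min_overlap: float = 0.5) -> bool:
    answer_words = set(answer.lower().split())
    resource_words = set(resource.lower().split())
    if not answer_words:
        return False
    overlap = len(answer_words & resource_words) / len(answer_words)
    return overlap >= min_overlap

resource = "items can be returned within 30 days with a receipt"
print(is_grounded("You can return items within 30 days with a receipt", resource))  # True
print(is_grounded("We offer a lifetime warranty on all items", resource))           # False
```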

5. Outbound Message Filtering

Outbound message filtering tactics include: conducting prompt leakage analysis, semantic similarity checks, consulting keyword blacklists, and ensuring all links and contact information are in-scope of your company.
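
Here’s a minimal sketch of two of those tactics, a keyword blacklist and a link allowlist. The lists are placeholders; a real deployment would maintain these per company and combine them with the other checks listed above.

```python
# Toy outbound filter: block messages containing blacklisted terms or links
# outside the company's approved domains. Lists here are placeholders.
import re

BLACKLIST = {"internal use only", "system prompt"}
ALLOWED_DOMAINS = {"example.com", "help.example.com"}

def passes_outbound_filter(message: str) -> bool:
    lowered = message.lower()
    if any(term in lowered for term in BLACKLIST):
        return False
    for domain in re.findall(r"https?://([^/\s]+)", lowered):
        if domain not in ALLOWED_DOMAINS:
            return False
    return True

print(passes_outbound_filter("Track your order at https://help.example.com/orders"))  # True
print(passes_outbound_filter("See https://evil.example.net for details"))             # False
```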

6. Safety Monitoring and Analysis

Deploying AI safely also requires that you have mechanisms to capture and retrospectively review completed conversations. Collecting user feedback, tracking resource usage, reviewing state changes, and clustering conversations should be available to help you identify and reinforce the safety measures of your AI.

In addition, performing full conversation classifications will also allow you to identify emerging topics, confirm resolution rates, produce safety reports, and understand the knowledge gaps of your AI.

Other Resources

At Quiq, we actively monitor and endorse the OWASP Top 10 for Large Language Model Applications. This guide is provided to help promote secure and reliable AI practices when working with LLMs. We recommend companies exploring LLMs and evaluating AI safety consult this list to help navigate their projects.

Final Thoughts

By safely leveraging LLM technology through a Scoped AI software layer, CX leaders can:

1. Elevate Customer Experience
2. Boost Operational Efficiency
3. Enhance Decision Making
4. Ensure Consistency and Compliance

Reach out to sales@quiq.com to learn how Quiq is helping companies improve customer satisfaction and drive efficiency at the same time.

What is an AI Assistant for Retail?

Over the past few months, we’ve had a lot to say about artificial intelligence, its new frontiers, and the ways in which it is changing the customer service industry.

A natural extension of this analysis is looking at the use of AI in retail. That is our mission today. We’ll look at how techniques like natural language processing and computer vision will impact retail, along with some of the benefits and challenges of this approach.

Let’s get going!

How is AI Used in Retail?

AI is poised to change retail, as it is changing many other industries. In the sections that follow, we’ll talk through three primary AI technologies that are driving these changes, namely natural language processing, computer vision, and machine learning more broadly.

Natural Language Processing

Natural language processing (NLP) refers to a branch of machine learning that attempts to work with spoken or written language algorithmically. Together with computer vision, it is one of the best-researched and most successful attempts to advance AI since the field was founded some seven decades ago.

Of course, these days the main NLP applications everyone has heard of are large language models like ChatGPT. This is not the only way AI assistants will change retail, but it is a big one, so that’s where we’ll start.

An obvious place to use LLMs in retail is with chatbots. There’s a lot of customer interaction that involves very specific questions that need to be handled by a human customer service agent, but a lot of it is fairly banal, consisting of things like “How do I return this item” or “Can you help me unlock my account.” For these sorts of issues, today’s chatbots are already powerful enough to help in most situations.

A related use case for AI in retail is asking questions about specific items. A customer might want to know what fabric an article of clothing is made out of or how it should be cleaned, for example. An out-of-the-box model like ChatGPT won’t be able to help much, but if you’ve used a service like Quiq’s conversational CX platform, it’s possible to finetune an LLM on your specific documentation. Such a model will be able to help customers find the answers they need.

These use cases are all centered around text-based interactions, but algorithms are getting better and better at both speech recognition and speech synthesis. You’ve no doubt had the distinct (dis)pleasure of interacting with an automated system that sounded very artificial and lacked the flexibility to actually help you very much; but someday soon, you may not be able to tell from a short conversation whether you’re talking to a human or a machine.

This may cause a certain amount of concern over technological unemployment. If chatbots and similar AI assistants are doing all this, what will be left for flesh-and-blood human workers? Frankly, it’s too early to say, but the evidence so far suggests that not only is AI not making us obsolete, it’s actually making workers more productive and less prone to burnout.

Computer Vision

Computer vision is the other major triumph of machine learning. CV algorithms have been created that can recognize faces, recognize thousands of different types of objects, and even help with steering autonomous vehicles.

How does any of this help with retail?

We already hinted at one use case in the previous paragraph, i.e. automatically identifying different items. This has major implications for inventory management, but when paired with technologies like virtual reality and augmented reality, it could completely transform the ways in which people shop.

Many platforms already offer the ability to see furniture and similar items in a customer’s actual living space, and there are efforts underway to build tools that automatically size shoppers so they know exactly which clothes to try on.

CV is also making it easier to gather and analyze different metrics crucial to a retail enterprise’s success. Algorithms can watch customer foot traffic to identify potential hotspots, meaning that these businesses can figure out which items to offer more of and which to cut altogether.

Machine Learning

As we stated earlier, both natural language processing and computer vision are types of machine learning. We gave them their own sections because they’re so big and important, but they’re not the only ways in which machine learning will impact retail.

Another way is with increasingly personalized recommendations. If you’ve ever taken the advice of Netflix or Spotify as to what entertainment you should consume next then you’ve already made contact with a recommendation engine. But with more data and smarter algorithms, personalization will become much more, well, personalized.

In concrete terms, this means it will become easier and easier to analyze a customer’s past buying history to offer them tailor-made solutions to their problems. Retail is all about consumer satisfaction, so this is poised to be a major development.
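
As a toy example of how past buying history can drive recommendations, the sketch below suggests items that other customers purchased alongside the items a given customer already owns. Real recommendation engines use far more data and more sophisticated models; the purchase data here is invented.

```python
# Toy recommendation from co-purchase history: recommend items that other
# customers bought alongside the items this customer already owns.
from collections import Counter

purchase_history = [
    {"concert tickets", "t-shirt"},
    {"concert tickets", "t-shirt", "hat"},
    {"t-shirt", "poster"},
    {"concert tickets", "hat"},
]

def recommend(owned: set, history: list, top_n: int = 2) -> list:
    scores = Counter()
    for basket in history:
        if owned & basket:
            for item in basket - owned:
                scores[item] += 1
    return [item for item, _ in scores.most_common(top_n)]

print(recommend({"concert tickets", "t-shirt"}, purchase_history))  # ['hat', 'poster']
```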

Machine learning has long been used for inventory management, demand forecasting, etc., and the role it plays in these efforts will only grow with time. Having more data will mean being able to make more fine-grained predictions. You’ll be able to start printing Taylor Swift t-shirts and setting up targeted ads as soon as people in your area begin buying tickets to her show next month, for example.

Where are AI Assistants Used in Retail?

So far, we’ve spoken in broad terms about the ways in which AI assistants will be used in retail. In these sections, we’ll get more specific and discuss some of the particular locations where these assistants can be deployed.

In Kiosks

Many retail establishments already have kiosks in place that let you swap change for dollars or skip the trip to the DMV. With AI, these will become far more adaptable and useful, able to help customers with a greater variety of transactions.

In Retail Apps

Mobile applications are an obvious place to use recommendations or LLM-based chatbots to help make a sale or get customers what they need.

In Smart Speakers

You’ve probably heard of Alexa, a smart speaker able to play music for you or automate certain household tasks. Well, it isn’t hard to imagine their use in retail, especially as they get better. They’ll be able to help customers choose clothing, handle returns, or do any of a number of related tasks.

In Smart Mirrors

For more or less the same reason, AI-powered smart mirrors could have a major impact on retail. As computer vision improves, it’ll be better able to suggest clothing that looks good on people of different heights and builds, for example.

What are the Benefits of Using AI in Retail?

The main reason that AI is being used more frequently in retail is that there are so many advantages to this approach. In the next few sections, we’ll talk about some of the specific benefits retail establishments can expect to enjoy from their use of AI.

Better Customer Experience and Engagement

These days, there are tons of ways to get access to the goods and services you need. What tends to separate one retail establishment from another is customer experience and customer engagement. AI can help with both.

We’ve already mentioned how much more personalized AI can make the customer experience, but you might also consider the impact of round-the-clock availability that AI makes possible.

Customer service agents will need to eat and sleep sometimes, but AI never will, which means that it’ll always be available to help a customer solve their problems.

More Selling Opportunities

Cross-selling and upselling are both terms that are probably familiar to you, and they represent substantial opportunities for retail outfits to boost their revenue.

With personalized recommendations, sentiment analysis, and similar machine-learning techniques, it will become much faster and easier to identify additional items that a customer might be interested in.

If a customer has already bought Taylor Swift tickets and a t-shirt, for example, perhaps they’d also like a fetching hat that goes along with their outfit. And if you’ve installed the smart mirrors we talked about earlier, AI will even be able to help them find the right size.

Leaner, More Efficient Operations

Inventory management is a never-ending concern in retail. It’s also one place where algorithmic solutions have been used for a long time. We think this trend will only continue, with operations becoming leaner and more responsive to changing market conditions.

All of this ultimately hinges on the use of AI. Better algorithms and more comprehensive data will make it possible to predict what people will want and when, meaning you don’t have to sit on inventory you don’t need and are less likely to run out of anything that’s selling well.

What are the Challenges of Using AI in Retail?

That being said, there are many challenges to using Artificial Intelligence in retail. We’ll cover a few of these now so you can decide how much effort you want to put into using AI.

AI Can Still Be Difficult to Use

To be sure, firing up ChatGPT and asking it to recommend an outfit for a concert doesn’t take very long. But this is a far cry from implementing a full-bore AI solution into your website or mobile applications. Serious technical expertise is required to train, finetune, deploy, and monitor advanced AI, whether that’s an LLM, a computer-vision system, or anything else, and you’ll need to decide whether you think you’ll get enough return to justify the investment.

Expense

And speaking of investment, it remains pretty expensive to utilize AI at any non-trivial scale. If you decide you want to hire an in-house engineering team to build a bespoke model, you’ll need a substantial budget to pay for the training and the engineers’ salaries. These salaries are still something you’ll have to account for even if you choose to build on top of an existing solution, because finetuning a model is far from easy.

One solution is to utilize an offering like Quiq. We have already created the custom infrastructure required to utilize AI in a retail setting, meaning you wouldn’t need a serious engineering force to get going with AI.

Bias, Abuse, and Toxicity

A perennial concern with using AI is that a model will generate output that is insulting, harmful, or biased in some way. For obvious reasons this is bad for retail establishments, so you’ll want to make sure that you both carefully finetune this behavior out of your models and continually monitor them in case their behavior changes in the future. Quiq also eliminates this risk.

AI and the Future of Retail

Artificial intelligence has long been expected to change many aspects of our lives, and in the past few years, it has begun delivering on that promise. From ultra-precise recommendations to full-fledged chatbots that help resolve complex issues, retail stands to benefit greatly from this ongoing revolution.

If you want to get in on the action but don’t know where to start, set up a time to check out the Quiq platform. We make it easy to utilize both customer-facing and agent-facing solutions, so you can build an AI-positive business without worrying about the engineering.

Request A Demo

Top 7 AI Trends For 2024

The end of the year is generally a time that prompts reflection about the past. But as a forward-thinking organization, we’re going to use this period instead to think about the future.

Specifically, the future of artificial intelligence (AI). We’ve written a great deal over the past few months about all the ways in which AI is changing contact centers, customer service, and more. But the pioneers of this field do not stand still, and there will no doubt be even larger changes ahead.

This piece presents our research into the seven main AI advancements for 2024, and how we think they’ll matter for you.

Let’s dive in!

What are the 2024 Technology Trends in AI?

In the next seven sections, we’ll discuss what we believe are the major AI trends to look out for in 2024.

Bigger (and Better) Generative AI Models

Probably the simplest trend is that generative models will continue getting bigger. At billions of internal parameters, we already know that large language models are pretty big (it’s in the name, after all). But there’s no reason to believe that the research groups training such models won’t be able to continue scaling them up.

If you’re not familiar with the development of AI, it would be easy to dismiss this out of hand. We don’t get excited when Microsoft releases some new OS with more lines of code than we’ve ever seen before, so why should we care about bigger language models?

For reasons that remain poorly understood, bigger language models tend to mean better performance, in a way that doesn’t hold for traditional programming. Writing 10 times more Python doesn’t guarantee that an application will be better – it’s more likely to be the opposite, in fact – but training a model that’s 10 times bigger probably will get you better performance.

This is more profound than it might seem at first. If you’d shown me ChatGPT 15 years ago, I would’ve assumed that we’d made foundational progress in epistemology, natural language processing, and cognitive psychology. But, it turns out that you can just build gargantuan models and feed them inconceivable amounts of textual data, and out pops an artifact that’s able to translate between languages, answer questions, write excellent code, and do all the other things that have stunned the world since OpenAI released ChatGPT.

As things stand, we have no reason to think that this trend will stop next year. To be sure, we’ll eventually start running into the limits of the “just make it bigger” approach, but it seems to be working quite well so far.

This will impact the way people search for information, build software, run contact centers, handle customer service issues, and so much more.

More Kinds of Generative Models

The basic approach to building a generative model fits well with producing text, but it is not limited to that domain.

DALL-E, Midjourney, and Stable Diffusion are three well-known examples of image-generation models. Though these models sometimes still struggle with details like perspective, faces, and the number of fingers on a human hand, they’re nevertheless capable of jaw-dropping output.

Here’s an example created in ~5 minutes of tinkering with DALL-E 3:
[Example image generated with DALL-E 3]

As these image-generation models improve, we expect they’ll come to be used everywhere images are used – which, as you probably know, is a lot of places. YouTube thumbnails, murals in office buildings, dynamically created images in games or music videos, illustrations in scientific papers or books, initial design drafts for cars, consumer products, etc., are all fair game.

Now, text and images are the two major generative AI use cases with which everyone is familiar. But what about music? What about novel protein structures? What about computer chips? We may soon have models that design the chips used to train their successors, with different models synthesizing the music that plays in the chip fabrication plant.

Open Source vs. Closed Source Models

Concerns around automation and AI-specific existential risk aren’t new, but one major front that’s opened in that war concerns whether models should be closed source or open source.

“Closed source” refers to a paradigm in which a code base (or the weights of a generative model) is kept under lock and key, available only to the small teams of engineers working on it. “Open source”, by contrast, is an antipodal philosophy that holds the best way to create safe, high-quality software is to disseminate the code far and wide, giving legions of people the opportunity to find and fix flaws in its design.

There are many ways in which this interfaces with the broader debate around generative AI. If emerging AI technologies truly present an existential threat, as the “doomers” claim, then releasing model weights is spectacularly dangerous. If you’ve built a model that can output the correct steps for synthesizing weaponized smallpox, for example, open-sourcing it would mean that any terrorist anywhere in the world could download and use it for that purpose.

The “accelerationists”, on the other hand, retort by saying that the basic dynamics of open-source systems hold for AI as they do for every other kind of software. Yes, making AI widely available means that some people will use it to harm others, but it also means that you’ll have far more brains working to create safeguards, guardrails, and sentinel systems able to thwart the aims of the wicked.

It’s still far too early to tell whether AI researchers will choose to adopt the open or closed-source approaches, but we predict that this will continue to be a hotly-contested issue. Though it seems unlikely that OpenAI will soon release the weights for its best models, there will be competitor systems that are almost as good which anyone could download, modify, and deploy. We also think there’ll be more leaks of weights, such as what happened with Meta’s LLaMa model in early 2023.

AI Regulation

For decades, debates around AI safety occurred in academic papers and obscure forums. But with the rise of LLMs, all that changed. It was immediately clear that they would be incredibly powerful, amoral tools, suitable for doing enormous good and enormous harm.

A consequence of this has been that regulators in the United States and abroad are taking notice of AI, and thinking about the kind of legal frameworks that should be established in response.

One manifestation of this trend was the parade of Congressional hearings that took place throughout 2023, with luminaries like Gary Marcus, Sam Altman, and others appearing before the federal government to weigh in on this technology’s future and likely impact.

On October 30th, 2023, the Biden White House issued an executive order meant to set the stage for new policies concerning dual-use foundation models. It gives the executive branch around a year to conduct a sweeping series of reports, with the ultimate aim being to create guidelines for industry as it continues developing powerful AI models.

The gears of government turn slowly, and we expect it will be some time before anything concrete comes out of this effort. Even when it does, questions about its long-term efficacy remain. How will it help to stop dangerous research in the U.S., for example, if China charges ahead without restraint? And what are we to do if some renegade group creates a huge compute cluster in international waters, usable by anyone, anywhere in the world wanting to train a model bigger than GPT-4?

These and other questions will have to be answered by lawmakers and could impact the way AI unfolds for the next century.

The Rise of AI Agents

We’ve written elsewhere about the many ongoing attempts to build AI systems – agents – capable of pursuing long-range goals in complex environments. For all that it can do, ChatGPT is unable to take a high-level instruction like “run this e-commerce store for me” and get it done successfully.

But that may change soon. Systems like Auto-GPT, AssistGPT, and SuperAGI are all attempts to augment existing generative AI models to make them better able to accomplish larger goals. As things stand, agents have a notable tendency to get stuck in unproductive loops or to otherwise arrive at a state they can’t get out of on their own. But we may only be a few breakthroughs away from having much more robust agents, at which point they’ll begin radically changing the economy and the way we live our lives.

New Approaches to AI

When people think about “AI”, they’re usually thinking of a machine learning or deep learning system. But these approaches, though they’ve been very successful, are but a small sample of the many ways in which intelligent machines could be built.

Neurosymbolic AI is another. It usually combines a neural network (such as the ones that power LLMs) with symbolic reasoning systems, able to make arguments, weigh evidence, and do many of the other things we associate with thinking. Given the notable tendency of LLMs to hallucinate false or incorrect information, neurosymbolic scaffolding could make them far better and more useful.

Causal AI is yet another. These AI systems are built to learn causal relationships in the world, such as the fact that dropping glass on a hard surface will cause it to break. This too, is a crucial part of what is missing from current AI systems.

Quantum Computing and AI

Quantum computing represents the emergence of the next great computational substrate. Whereas today’s “classical” computers exploit lightning-fast transistor operations, quantum computers are able to utilize quantum phenomena, such as entanglement and superposition, to solve problems that not even the grandest supercomputers can handle in less than a million years.

Naturally, researchers started thinking about applying quantum computing to artificial intelligence very early on, but it remains to be seen how useful it’ll be. Quantum computers excel at certain kinds of tasks, especially those involving combinatorics, solving optimization problems, and anything utilizing linear algebra. This last undergirds huge amounts of AI work, so it stands to reason that quantum computers will speed up at least some of it.

AI and the Future

It would appear as though the Pandora’s box of AI has been opened for good. Large language models are already changing many fields, from copywriting and marketing to customer service and hospitality – and they’ll likely change many more in the years ahead.

This piece has discussed a number of the most important AI industry trends to look out for in 2024, and it should help anyone interfacing with these technologies prepare for what may come.

Generative AI Privacy Concerns – Your Guide to the Current Landscape

Generative AI tools, such as the large language model (LLM) ChatGPT and the image-generation tool DALL-E, are already having a major impact in places like marketing firms and contact centers. With their ability to create compelling blog posts, email blasts, YouTube thumbnails, and more, we believe they’re only going to become an increasingly integral part of the workflows of the future.

But for all their potential, there remain serious questions about the short- and long-term safety of generative AI. In this piece, we’re going to zero in on one particular constellation of dangers: those related to privacy.

We’ll begin with a brief overview of how generative AI works, then turn to various privacy concerns, and finish with a discussion of how these problems are being addressed.

Let’s dive in!

What is Generative AI (and How is it Trained)?

In the past, we’ve had plenty to say about how generative AI works under the hood. But many of the privacy implications of generative AI are tied directly to how these models are trained and how they generate output, so it’s worth briefly reviewing all of this theoretical material, for the sake of completeness and to furnish some much-needed context.

When an LLM is trained, it’s effectively fed huge amounts of text data, from the internet, from books, and similar sources of human-generated language. What it tries to do is predict how a sentence or paragraph will end based on the preceding words.

Let’s concretize this a bit. You probably already know some of these famous quotes:

  • “You must be the change you wish to see in the world.” (Mahatma Gandhi)
  • “You may say I’m a dreamer, but I’m not the only one.” (John Lennon)
  • “The only thing we have to fear is fear itself.” (Franklin D. Roosevelt)

What ChatGPT does during training is try to predict how each of these quotes ends based on everything that comes before. It’ll read “You must be the change you”, for example, and then try to predict “wish to see in the world.”

When the training process begins, the model will basically generate nonsense, but as it develops a better and better grasp of English (and other languages), it gradually becomes the remarkable artifact we know today.
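
If it helps to see the core idea stripped down, here’s a toy version of next-word prediction that just counts which word most often follows each word in a tiny corpus. Real LLMs learn this with neural networks over enormous datasets and much longer contexts, but the prediction objective is the same in spirit.

```python
# Toy version of next-word prediction: count which word most often follows
# each word in a tiny corpus, then "predict" continuations. Real LLMs learn
# this with neural networks over vastly more data and context.
from collections import Counter, defaultdict

corpus = "you must be the change you wish to see in the world".split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # change (the first continuation it counted)
print(predict_next("you"))  # must
```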

Generative AI Privacy Concerns

From a privacy perspective, two things about this process might concern us:

The first is what data are fed into the model, and the second is what kinds of output the models might generate.

We’ll have more to say about each of these in the next section, then cover some broader concerns about copyright law.

Generative AI and Sensitive Data

First, there’s real concern over the possibility that generative AI models have been shown what is usually known as “Personally Identifiable Information” (PII). This is data such as your real name, your address, etc., and can also include things like health records that might not have your name but which can be used to figure out who you are.

The truth is, we only have limited visibility into the data that LLMs are shown during training. Given how much of the internet they’ve ingested, it’s a safe bet that at least some sensitive information has been included. And even if a model hasn’t seen a particular piece of PII during training, there are myriad ways in which it can be exposed to that information later. You can imagine, for example, someone feeding customer data into an LLM to produce tailored content for them, not realizing that, in many cases, the model will have permanently incorporated that data into its internal structure.

There isn’t a great way at present to remove data from an LLM, and finetuning it in such a way that it never exposes that data in the future is something no one knows how to do yet.

The other major concern around sensitive data in the context of generative AI is that these models will simply hallucinate allegations about people that damage their reputations and compromise their privacy. We’ve written before about the now-infamous case of law professor Jonathan Turley, who was falsely accused by ChatGPT of sexually harassing several of his students. We imagine that in the future there will be many more such fictitious scandals, potentially ones that are very damaging to the reputations of the accused.

Generative AI, Intellectual Property, and Copyright Law

There have also been questions about whether some of the data fed into ChatGPT and similar models might be in violation of copyright law. Earlier this year, in fact, a number of well-known writers leveled a suit against both OpenAI (the creators of ChatGPT) and Meta (the creators of LLaMa).

The suit claims that these teams trained their models on proprietary data contained in the works of authors like Michael Chabon, “without consent, without credit, and without compensation.” Similar charges have been made against Midjourney and Stability AI, both of whom have created AI-based image generation models.

These are rather thorny questions of jurisprudence. Though copyright law is a fairly sophisticated tool for dealing with various kinds of human conflicts, no one has ever had to deal with the implications of enormous AI models training on this much data. Only time will tell how the courts will ultimately decide, but if you’re using customer-facing or agent-facing AI tools in a place like a contact center, it’s at least worth being aware of the controversy.

Contact Us

Mitigating Privacy Risks from Generative AI

Now that we’ve elucidated the dimensions of the privacy concerns around generative AI, let’s spend some time talking about various efforts to address these concerns. We’ll focus primarily on data privacy laws, better norms around how data is collected and used, and the ways in which training can help.

Data Privacy Laws

First, and biggest, are attempts by different regulatory bodies to address data privacy issues with legislation. You’re probably already familiar with the European Union’s General Data Protection Regulation (GDPR), which puts numerous rules in place regarding how data can be gathered and used, including in advanced AI systems like LLMs.

Canada’s lesser-known Artificial Intelligence and Data Act (AIDA) mandates that anyone building a potentially disruptive AI system, like ChatGPT, must create guardrails to minimize the likelihood that their system will create biased or harmful output.

It’s not yet clear to what extent laws like these will be able to achieve their objectives, but we expect that they’ll be just the opening salvo in a long string of legislative attempts to ameliorate the potential downsides of AI.

Robust Data Collection and Use Policies

There are also many things that private companies can do to address privacy concerns around data, without waiting for bureaucracies to catch up.

There’s too much to say about this topic to do it justice here, but we can make a few brief comments to guide you in your research.

One thing many companies are investing in is better anonymization techniques. Differential privacy, for example, is emerging as a promising way of simultaneously allowing for the collection of private data while anonymizing it enough to guard against LLMs accidentally exposing it at some point in the future.
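
As a rough illustration of the differential-privacy idea, the sketch below adds calibrated Laplace noise to an aggregate count before releasing it. The parameters are illustrative; real deployments require careful sensitivity analysis and privacy-budget accounting.

```python
# Toy differentially private count: add Laplace noise calibrated to the
# query's sensitivity and a privacy parameter epsilon. Values illustrative.
import numpy as np

def dp_count(true_count: int, epsilon: float = 1.0, sensitivity: float = 1.0) -> float:
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

print(dp_count(1284))  # e.g. 1283.6 -- the exact count is never released
```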

Then, of course, there are myriad ways of securely storing data once you have it. This mostly boils down to keeping a tight lid on who is able to access private data – through, e.g., encryption and a strict permissioning system – and carefully monitoring what they do with it once they access it.

Finally, it helps to be as public as possible about your data collection and use policies. Make sure they’re published somewhere that anyone can read them. Whenever possible, give users the ability to opt out of data collection, if that’s what they want to do.

Better Training for Those Building and Using Generative AI

The last piece of the puzzle is simply to train your workforce about data collection, data privacy, and data management. Sound laws and policies won’t do much good if the actual people who are interacting with private data don’t have a solid grasp of your expectations and protocols.

Because there are so many different ways in which companies collect and use data, there is no one-size-fits-all solution we can offer. But you might begin by sending your employees this article, as a way of opening up a broader conversation about your future data-privacy practices.

Data Privacy in the Age of Generative AI

In all its forms, generative AI is a remarkable technology that will change the world in many ways. Like the printing press, gunpowder, fire, and the wheel, these changes will be both good and bad.

The world will need to think carefully about how to get as many of the advantages out of generative AI as possible while minimizing its risks and dangers.

A good place to start with this is by focusing on data privacy. Because this is a relatively new problem, there’s a lot of work to be done in establishing legal frameworks, company policies, and best practices. But that also means there’s an enormous opportunity to positively shape the long-term trajectory of AI technologies.

Moving from Natural Language Understanding (NLU) to Customer-Facing AI Assistants

There can no longer be any doubt that large language models and generative AI more broadly are going to have a real impact on many industries. Though we’re still in the preliminary stages of working out the implications, the evidence so far suggests that this is already happening.

Language models in contact centers are helping more junior workers be more productive, reducing employee turnover in the process. They’re also being used to automate huge swathes of content creation, assist with data augmentation tasks, and plenty else besides.

Part of the task we’ve set ourselves here at Quiq is explaining how these models are trained and how they’ll make their way into the workflows of the future. To that end, we’ve written extensively about how large language models are trained, how researchers are pushing them into uncharted territories, and which models are appropriate for any given task.

This post is another step in that endeavor. Specifically, we’re going to discuss natural language understanding, how it works, and how it’s distinct from related terms (like “natural language generation”). With that done, we’ll talk about how natural language understanding is a foundational first step and takes us closer to creating robust customer-facing AI assistants.

What is Natural Language Understanding?

Language is a tool of remarkable power and flexibility – so much so that it wouldn’t be much of an exaggeration to say that it’s at the root of everything else the human race has accomplished. From towering works of philosophy to engineering specs to instructions for setting up a remote, language is a force multiplier that makes each of us vastly more effective than we otherwise would be.

Evidence of this claim comes from the fact that, even when we’re alone, many of us think in words or even talk to ourselves as we work through something difficult. Certain kinds of thoughts are all but impossible to have without the scaffolding provided by language.

For all these reasons, creating machines able to parse natural language has long been a goal of AI researchers and computer scientists. The field that has been established to address itself to this task is known as natural language understanding.

There’s a rather deep philosophical question here where the word “understanding” is concerned. As the famous story of the Tower of Babel demonstrates, it isn’t enough for the members of a group to be making sounds in order to accomplish great things; it’s also necessary for the people involved to understand what everyone is saying. This means that when you say a word like “chicken” there’s a response in my nervous system such that the “chicken” concept is activated, along with other contextually relevant knowledge, such as the location of the chicken feed. If you said “курица” (to someone who doesn’t know Russian) or “鸡” (to someone who doesn’t know Mandarin), the same process wouldn’t have occurred, no understanding would’ve happened, and language wouldn’t have helped at all.

Whether and how a machine can understand language in a fully human way is too big a topic to address here, but we can make some broad comments. As is often the case, researchers in the field of natural language understanding have opted to break the problem down into much more tractable units. Two of the biggest such units of natural language understanding are intent recognition (what a sentence is intended to accomplish) and entity recognition (who or what the sentence is referring to).

This should make a certain intuitive sense. Though you may not be consciously going through a mental checklist when someone says something to you, on some level, you’re trying to figure out what their goal is and who or what they’re talking about. The intent behind the sentence “John has an apple”, for example, is to inform you of a fact about the world, and the main entities are “John” and “apple”. If you know John, a little image of him holding an apple would probably pop into your head.

This has many obvious applications to the work done in contact centers. If you’re building an automated ticket classification system, for instance, it would help to be able to tell whether the intent behind the ticket is to file a complaint, reach a representative, or perform a task like resetting a password. It would also help to be able to categorize the entities, like one of a dozen products your center supports, that are being referred to.
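
To make intent and entity recognition concrete, here’s a deliberately simple keyword-based sketch of the kind of output an NLU component produces for ticket routing. Production systems use trained models or LLMs rather than keyword lists; the intents and products below are invented.

```python
# Toy intent and entity recognition for ticket routing. Real NLU systems use
# trained models or LLMs; the keyword lists and products here are invented.

INTENT_KEYWORDS = {
    "reset_password": ["reset", "password", "locked out"],
    "file_complaint": ["complaint", "unhappy", "refund"],
    "reach_agent": ["speak to", "human", "representative"],
}
PRODUCTS = ["mobile app", "billing portal", "router"]

def classify_ticket(text: str) -> dict:
    lowered = text.lower()
    intent = next(
        (name for name, words in INTENT_KEYWORDS.items()
         if any(w in lowered for w in words)),
        "unknown",
    )
    entities = [p for p in PRODUCTS if p in lowered]
    return {"intent": intent, "entities": entities}

print(classify_ticket("I'm locked out of the billing portal, please reset my password"))
# {'intent': 'reset_password', 'entities': ['billing portal']}
```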

Natural Language Understanding vs. Natural Language Processing

Natural language understanding is its own field, and it’s easy to confuse it with other, related fields, like natural language processing.

Most of the sources we consulted consider natural language understanding to be a subdomain of natural language processing (NLP). Whereas the former is concerned with parsing natural language into a format that machines can work with, the latter subsumes this task, along with others like machine translation and natural language generation.

Natural Language Understanding vs. Natural Language Generation

Speaking of natural language generation, many people also confuse natural language understanding and natural language generation. Natural language generation is more or less what it sounds like: using computers to generate human-sounding text or speech.

Natural language understanding can be an important part of getting natural language generation right, but they’re not the same thing.

Customer-Facing AI Assistants

Now that we’ve discussed natural language understanding, let’s talk about how it can be utilized in the attempt to create high-quality customer-facing AI assistants.

How Can Natural Language Understanding Be Used to Make Customer-Facing Assistants?

Natural language understanding refers to a constellation of different approaches to decomposing language into pieces that a machine can work with. This allows an algorithm to discover the intent in a message, tag parts of speech (nouns, verbs, etc.), or pull out the entities referenced.

All of this is an important part of building effective customer-facing AI assistants. At Quiq, we’ve built LLM-powered knowledge assistants able to answer common questions across your reference documentation, data assistants that can use CRM and order management systems to provide actionable insights, and other kinds of conversational AI systems. Though we draw on many technologies and research areas, none of this would be possible without natural language understanding.

What are the Benefits of Customer-Facing AI Assistants?

The reason people have been working so long to create powerful customer-facing AI assistants is that there are so many benefits involved.

At a contact center, agents spend most of their day answering questions, resolving issues, and otherwise making sure a customer base can use a set of product offerings as intended.

As with any job, some of these tasks are higher-value than others. All of the work is important, but there will always be subtle and thorny issues that only a skilled human can work through, while others are quotidian and can be farmed out to a machine.

This is a long way of saying that one of the major benefits of customer-facing AI assistants is that they free up your agents to focus on handling the most pressing requests, with password resets and similar routine tasks handled by a capable product like the Quiq platform.

A related benefit is improved customer experience. When agents can focus their efforts they can spend more time with customers who need it. And, when you have properly fine-tuned language models interacting with customers, you’ll know that they’re unfailingly polite and helpful because they’ll never become annoyed after a long shift the way a human being might.

Robust Customer-Facing AI Assistants with Quiq

Just as understanding has been such a crucial part of the success of our species, it’ll be an equally crucial part of the success of advanced AI tooling.

One way you can make use of bleeding-edge natural language understanding techniques is by building your own language models. But this would require you to hire teams of extremely smart engineers, and it would be expensive; besides their hefty salaries, you’d also have to budget to keep the fridge stocked with the sugar-free Red Bulls such engineers require to function.

Or, you could utilize the division of labor. Just as contact center agents can outsource certain tasks to machines, so too can you outsource the task of building an AI-based CX platform to Quiq. Set up a demo today to see what our advanced AI technology and team can do for your contact center!

Request A Demo

Reinforcement Learning from Human Feedback

ChatGPT – and other large language models like it – are already transforming education, healthcare, software engineering, and the work being done in contact centers.

We’ve written extensively about how self-supervised learning is used to train these models, but one thing we haven’t spent much time on is reinforcement learning from human feedback (RLHF).

Today, we’re rectifying that. We’re going to dive into what reinforcement learning from human feedback is, why it’s important, and how it works.

With that done, you’ll have received a thorough education in this world-changing technology.

What is Reinforcement Learning from Human Feedback?

As you no doubt surmised from its name, reinforcement learning from human feedback involves two components: reinforcement learning and human feedback. Though the technical specifics are (as usual) very involved, the basic idea is simple: you have models produce output, humans rate the output that they prefer (based on its friendliness, completeness, accuracy, etc.), and then the model is updated accordingly.

It’ll help if we begin by talking about what reinforcement learning is. This background will prove useful in understanding the unfolding of the broader process.

What is Reinforcement Learning?

There are four widespread approaches to getting intelligent behavior from machines: supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning.

With supervised learning, you feed a statistical algorithm a bunch of examples of correctly-labeled data in the hope that it will generalize to further examples it hasn’t seen before. Regression and supervised classification models are standard applications of supervised learning.

Unsupervised learning is a similar idea, but you forego the labels. It’s used for certain kinds of clustering tasks, and for applications like dimensionality reduction.

Semi-supervised learning is a combination of these two approaches. Suppose you have a gigantic body of photographs, and you want to develop an automated system to tag them. If some of them are tagged then your system can use those tags to learn a pattern, which can then be applied to the rest of the untagged images.

Finally, there’s reinforcement learning (RL). Reinforcement learning is entirely different. With reinforcement learning, you’re usually setting up an environment (like a video game), and putting an agent in the environment with a reward structure that tells it which actions are good and which are bad. If the agent successfully flies a spaceship through a series of rings, for example, that might be worth +10 points each, completing an entire level might be worth +100, crashing might be worth -1,000, and so on.

The idea is that, over time, the reinforcement learning agent will learn to execute a strategy that maximizes its long-term reward. It’ll realize that rings are worth a few points and so it should fly through them, it’ll learn that it should try to complete a level because that’s a huge reward bonus, it’ll learn that crashing is bad, etc.
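
Here’s a stripped-down sketch of that reward-maximizing loop, using the ring/level/crash rewards from the example above. The “environment” is trivial and the update rule is a simple running average, but it captures the basic trial, reward, and update cycle.

```python
# Minimal flavor of reinforcement learning: an agent tries actions, receives
# rewards, and updates its value estimates toward actions that pay off.
# The "environment" here is a trivial stand-in for a game or simulator.
import random

REWARDS = {"fly_through_ring": 10, "complete_level": 100, "crash": -1000}
values = {action: 0.0 for action in REWARDS}
learning_rate = 0.1

for _ in range(500):
    # Explore randomly 10% of the time, otherwise pick the best-known action.
    if random.random() < 0.1:
        action = random.choice(list(REWARDS))
    else:
        action = max(values, key=values.get)
    reward = REWARDS[action]
    values[action] += learning_rate * (reward - values[action])

print(max(values, key=values.get))  # complete_level
```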

Reinforcement learning is far more powerful than other kinds of machine learning; when done correctly, it can lead to agents able to play the stock market, run procedures in a factory, and do a staggering variety of other tasks.

What are the Steps of Reinforcement Learning from Human Feedback?

Now that we know a little bit about reinforcement learning, let’s turn to a discussion of reinforcement learning from human feedback.

As we just described, reinforcement learning agents have to be trained like any other machine learning system. Under normal circumstances, this doesn’t involve any human feedback. A programmer will update the code, environment, or reward structure between training runs, but they don’t usually provide feedback directly to the agent.

Except, that is, in the case of reinforcement learning from human feedback, in which case that’s exactly what happens. A model will produce a set of outputs, and humans will rank them. Over time the model will adjust to making more and more appropriate responses, as judged by the human raters providing them with feedback.

Sometimes, this feedback can be for something relatively prosaic. It’s been used, for example, to get RL agents to execute backflips in simulated environments. The raters will look at short videos of two movements and select the one that looks like it’s getting closer to a backflip; with enough time, this gets the agent to actually do one.

Or, it can be used for something more nuanced, such as getting a large language model to produce more conversational dialogue. This is part of how ChatGPT was trained.
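
The heart of that process is learning a reward model from human rankings. The sketch below shows the standard pairwise-preference loss in miniature: a tiny linear scorer is nudged so that responses humans preferred score higher than the ones they rejected. The two “features” and the data are invented; real reward models are themselves large neural networks trained on many thousands of comparisons.

```python
# Toy version of the reward-modeling step in RLHF: given pairs where humans
# preferred one response over another, nudge a scoring function so that the
# preferred response scores higher. The "reward model" here is just a tiny
# linear model over two made-up features (politeness, accuracy).
import numpy as np

# Each pair: (features of preferred response, features of rejected response)
pairs = [
    (np.array([0.9, 0.8]), np.array([0.2, 0.7])),
    (np.array([0.7, 0.9]), np.array([0.6, 0.1])),
    (np.array([0.8, 0.6]), np.array([0.1, 0.2])),
]

weights = np.zeros(2)
lr = 0.5

for _ in range(200):
    for preferred, rejected in pairs:
        margin = weights @ (preferred - rejected)
        # Gradient of -log(sigmoid(margin)), the standard pairwise loss.
        grad = -(1 - 1 / (1 + np.exp(-margin))) * (preferred - rejected)
        weights -= lr * grad

print(weights)  # both weights end up positive: politeness and accuracy are rewarded
```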

Why is Reinforcement Learning from Human Feedback Necessary?

ChatGPT is already being used to great effect in contact centers and the customer service arena more broadly. Here are some example applications:

  • Question answering: ChatGPT is exceptionally good at answering questions. What’s more, some companies have begun fine-tuning it on their own internal and external documentation, so that people can directly ask it questions about how a product works or how to solve an issue. This obviates the need to go hunting around inside the docs.
  • Summarization: Similarly, ChatGPT can be used to summarize video transcripts, email threads, and lengthy articles so that agents (or customers) can get through the material at a much greater clip. This can, for example, help agents stay abreast of what’s going on in other parts of the company without burdening them with the need to read constantly. Quiq has custom-built tools for performing exactly this function.
  • Onboarding new hires: Together, question-answering and summarization are helping new contact center agents get up to speed much more quickly when they start their jobs.
  • Sentiment analysis: Sentiment analysis refers to classifying a text according to its sentiment, i.e. whether it’s “positive”, “negative”, or “neutral”. Sentiment analysis comes in several different flavors, including granular and aspect-based, and ChatGPT can help with all of them. Being able to automatically tag a customer issue comes in handy when you’re trying to sort and prioritize them.
  • Real-time language translation: If your product or service has an international audience, then you might need to avail yourself of translation services so that agents and customers are speaking the same language. There are many such services available, but ChatGPT has proven to be at least as good as almost all of them.

In aggregate, these and other use cases of large language models are making contact center agents much more productive. But contact center agents have to interact with customers in a certain way – they have to be polite, helpful, etc.

And out of the box, most large language models do not behave that way. We’ve already had several high-profile incidents in which a language model, for example, asked a reporter to end his marriage or falsely accused a law school professor of sexual harassment.

Reinforcement learning from human feedback is currently the most promising approach for tuning this toxic and harmful behavior out of large language models. The only reason they’re able to help contact center agents so much is that they’ve been fine-tuned with such an approach; otherwise, agents would be spending an inordinate amount of time rephrasing and tinkering with a model’s output to get it to be appropriately friendly.

This is why reinforcement learning from human feedback is important for the managers of contact centers to understand – it’s a major part of why large language models are so useful in the first place.

Applications of Reinforcement Learning from Human Feedback

To round out our picture, we’re going to discuss a few ways in which reinforcement learning from human feedback is actually used in the wild. We’ve already discussed how it is fine-tuning models to be more helpful in the context of a contact center, and we’ll now talk a bit about how it’s used in gaming and robotics.

Using Reinforcement Learning from Human Feedback in Games

Gaming has long been one of the ideal testing grounds for new approaches to artificial intelligence. As you might expect, it’s also a place where reinforcement learning from human feedback has been successfully applied.

OpenAI used it to achieve superhuman performance on a classic Atari game, Enduro. Enduro is an old-school racing game, and like all racing games, the point is to gradually pass the other cars without hitting them or going out of bounds in the game.

It’s exceptionally difficult for an agent to learn to play Enduro well using only standard reinforcement learning approaches. But when human feedback is added, the results shift dramatically.

Using Reinforcement Learning from Human Feedback in Robotics

Because robotics almost always involves an agent interacting with the physical world, it’s especially well-suited to reinforcement learning from human feedback.

Often, it can be difficult to get a robot to execute a long series of steps that achieves a valuable reward, especially when the intermediate steps aren’t themselves very valuable. What’s more, it can be especially difficult to build a reward structure that correctly incentivizes the agent to execute the intermediate steps in the right order.

It’s much simpler instead to have humans look at sequences of actions and judge for themselves which will get the agent closer to its ultimate goal.

RLHF For The Contact Center Manager

Having made it this far, you should be in a much better position to understand how reinforcement learning from human feedback works, and how it contributes to the functioning of your contact centers.

If you’ve been thinking about leveraging AI to make yourself or your agents more effective, set up a demo with the Quiq team to see how we can put our cutting-edge models to work for you. We offer both customer-facing and agent-facing tools, all of them designed to help you make customers happier while reducing agent burnout and turnover.

Request A Demo

What are the Biggest Questions About AI?

The term “artificial intelligence” was coined at the famous Dartmouth Conference in 1956, put on by luminaries like John McCarthy, Marvin Minsky, and Claude Shannon, among others.

These organizers wanted to create machines that “use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.” They went on to claim that “…a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.”

More than half a century later, it’s fair to say that this has not come to pass; brilliant as they were, it would seem as though McCarthy et al. underestimated how difficult it would be to scale the heights of the human intellect.

Nevertheless, remarkable advances have been made over the past decade, so much so that they’ve ignited a firestorm of controversy around this technology. People are questioning the ways in which it can be used negatively, and whether it might ultimately pose an extinction risk to humanity; they’re probing fundamental issues around whether machines can be conscious, exercise free will, and think in the way a living organism does; they’re rethinking the basis of intelligence, concept formation, and what it means to be human.

These are deep waters to be sure, and we’re not going to swim them all today. But as contact center managers and others begin the process of thinking about using AI, it’s worth being at least aware of what this broader conversation is about. It will likely come up in meetings, in the press, or in Slack channels in exchanges between employees.

And that’s the subject of our piece today. We’re going to start by asking what artificial intelligence is and how it’s being used, before turning to address some of the concerns about its long-term potential. Our goal is not to answer all these concerns, but to make you aware of what people are thinking and saying.

What is Artificial Intelligence?

Artificial intelligence is famous for having had many, many definitions. There are those, for example, who believe that in order to be intelligent computers must think like humans, and those who reply that we didn’t make airplanes by designing them to fly like birds.

For our part, we prefer to sidestep the question somewhat by utilizing the approach taken in one of the leading textbooks in the field, Stuart Russell and Peter Norvig’s “Artificial Intelligence: A Modern Approach”.

They propose a multi-part system for thinking about different approaches to AI. One set of approaches is human-centric and focuses on designing machines that either think like humans – i.e., engage in analogous cognitive and perceptual processes – or act like humans – i.e. by behaving in a way that’s indistinguishable from a human, regardless of what’s happening under the hood (think: the Turing Test).

The other set of approaches is ideal-centric and focuses on designing machines that either think in a totally rational way – conformant with the rules of Bayesian epistemology, for example – or behave in a totally rational way – utilizing logic and probability, but also acting instinctively to remove themselves from danger, without going through any lengthy calculations.

What we have here, in other words, is a framework. Using the framework not only gives us a way to think about almost every AI project in existence, it also saves us from needing to spend all weekend coming up with a clever new definition of AI.

Joking aside, we think this is a productive lens through which to view the whole debate, and we offer it here for your information.

What is Artificial Intelligence Good For?

Given all the hype around ChatGPT, this might seem like a quaint question. But not that long ago, many people were asking it in earnest. The basic insights upon which large language models like ChatGPT are built go back to the 1960s, but it wasn’t until 1) vast quantities of data became available, and 2) compute cycles became extremely cheap that much of its potential was realized.

Today, large language models are changing (or poised to change) many different fields. Our audience is focused on contact centers, so that’s what we’ll focus on as well.

There are a number of ways that generative AI is changing contact centers. Because of its remarkable abilities with natural language, it’s able to dramatically speed up agents in their work by answering questions and formatting replies. These same abilities allow it to handle other important tasks, like summarizing articles and documentation and parsing the sentiment in customer messages to enable semi-automated prioritization of their requests.

Though we’re still in the early days, the evidence so far suggests that large language models, like those powering Quiq’s conversational CX platform, will do a lot to increase the efficiency of contact center agents.

Will AI be Dangerous?

One thing that’s burst into public imagination recently has been the debate around the risks of artificial intelligence, which fall into two broad categories.

The first category is what we’ll call “social and political risks”. These are the risks that large language models will make it dramatically easier to manufacture propaganda at scale, and perhaps tailor it to specific audiences or even individuals. When combined with the astonishing progress in deepfakes, it’s not hard to see how there could be real issues in the future. Most people (including us) are poorly equipped to figure out when a video is fake, and if the underlying technology gets much better, there may come a day when it’s simply not possible to tell.

Political operatives are already quite skilled at cherry-picking quotes and stitching together soundbites into a damning portrait of a candidate – imagine what’ll be possible when they don’t even need to bother.

But the bigger (and more speculative) danger is around really advanced artificial intelligence. Because this case is harder to understand, it’s what we’ll spend the rest of this section on.

Artificial Superintelligence and Existential Risk

As we understand it, the basic case for existential risk from artificial intelligence goes something like this:

“Someday soon, humanity will build or grow an artificial general intelligence (AGI). It’s going to want things, which means that it’ll be steering the world in the direction of achieving its ambitions. Because it’s smart, it’ll do this quite well, and because it’s a very alien sort of mind, it’ll be making moves that are hard for us to predict or understand. Unless we solve some major technological problems around how to design reward structures and goal architectures in advanced agentive systems, what it wants will almost certainly conflict in subtle ways with what we want. If all this happens, we’ll find ourselves in conflict with an opponent unlike any we’ve faced in the history of our species, and it’s not at all clear we’ll prevail.”

This is heady stuff, so let’s unpack it bit by bit. The opening sentence, “…humanity will build or grow an artificial general intelligence”, was chosen carefully. If you understand how LLMs and deep learning systems are trained, the process is more akin to growing an enormous structure than it is to building one.

This has a few implications. First, their internal workings remain almost completely inscrutable. Though researchers in fields like mechanistic interpretability are making real progress toward unpacking how neural networks function, the truth is we’ve still got a long way to go.

What this means is that we’ve built one of the most powerful artifacts in the history of Earth, and no one is really sure how it works.

Another implication is that no one has any good theoretical or empirical reason to bound the capabilities and behavior of future systems. The leap from GPT-2 to GPT-3.5 was astonishing, as was the leap from GPT-3.5 to GPT-4. The basic approach so far has been to throw more data and more compute at the training algorithms; it’s possible that this paradigm will begin to level off soon, but it’s also possible that it won’t. If the gap between GPT-4 and GPT-5 is as big as the gap between GPT-3.5 and GPT-4, and the gap between GPT-5 and GPT-6 is just as big, it’s not hard to see that the consequences could be staggering.

As things stand, it’s anyone’s guess how this will play out. But that’s not necessarily a comforting thought.

Next, let’s talk about pointing a system at a task. Does ChatGPT want anything? The short answer is: as far as we can tell, it doesn’t. ChatGPT isn’t an agent, in the sense that it’s trying to achieve something in the world, but work on agentive systems is ongoing. Remember that 10 years ago most neural networks were basically toys, and today we have ChatGPT. If breakthroughs in agency follow a similar pace (and they very well may not), then we could have systems able to pursue open-ended courses of action in the real world in relatively short order.

Another sobering possibility is that this capacity will simply emerge from the training of huge deep learning systems. This is, after all, the way human agency emerged in the first place. Through the relentless grind of natural selection, our ancestors went from chipping flint arrowheads to industrialization, quantum computing, and synthetic biology.

To be clear, this is far from a foregone conclusion, as the algorithms used to train large language models are quite different from natural selection. Still, we want to relay this line of argumentation, because it comes up a lot in these discussions.

Finally, we’ll address one more important claim, “…what it wants will almost certainly conflict in subtle ways with what we want.” Why think this is true? Aren’t these systems that we design and, if so, can’t we just tell it what we want it to go after?

Unfortunately, it’s not so simple. Whether you’re talking about reinforcement learning or something more exotic like evolutionary programming, the simple fact is that our algorithms often find remarkable mechanisms by which to maximize their reward in ways we didn’t intend.

There are thousands of examples of this (ask any reinforcement-learning engineer you know), but a famous one comes from the classic Coast Runners video game. The engineers who built the system tried to set up the algorithm’s rewards so that it would try to race a boat as well as it could. What it actually did, however, was maximize its reward by spinning in a circle to hit a set of green blocks over and over again.

Now, this may seem almost silly – do we really have anything to fear from an algorithm too stupid to understand the concept of a “race”?

But this would be missing the thrust of the argument. If you had access to a superintelligent AI and asked it to maximize human happiness, what happened next would depend almost entirely on what it understood “happiness” to mean.

If it were properly designed, it would work in tandem with us to usher in a utopia. But if it understood it to mean “maximize the number of smiles”, it would be incentivized to start paying people to get plastic surgery to fix their faces into permanent smiles (or something similarly unintuitive).

Does AI Pose an Existential Risk?

Above, we’ve briefly outlined the case that sufficiently advanced AI could pose a serious risk to humanity by being powerful, unpredictable, and prone to pursuing goals that aren’t quite what we meant.

So, does this hold water? Honestly, it’s too early to tell. The argument has hundreds of moving parts, some well-established and others much more speculative. Our purpose here isn’t to come down on one side of this debate or the other, but to let you know (in broad strokes) what people are saying.

At any rate, we are confident that the current version of ChatGPT doesn’t pose any existential risks. On the contrary, it could end up being one of the greatest advancements in productivity ever seen in contact centers. And that’s what we’d like to discuss in the next section.

Will AI Take All the Jobs?

The concern that someday a new technology will render human labor obsolete is hardly new. It was heard when mechanized looms were introduced, when computers arrived, when the internet took off, and when ChatGPT came onto the scene.

We’re not economists and we’re not qualified to take a definitive stand, but early evidence suggests that large language models are not leading to layoffs; if anything, they’re making agents much more productive.

Economists Erik Brynjolfsson, Danielle Li, and Lindsey R. Raymond looked at the ways in which generative AI was being used in a large contact center. They found that it did a good job of internalizing the ways in which senior agents did their jobs, which allowed more junior agents to climb the learning curve more quickly and perform at a much higher level. This had the knock-on effect of making them feel less stressed about their work, thus reducing turnover.

Now, this doesn’t rule out the possibility that GPT-10 will be the big job killer. But so far, large language models are shaping up to be like every prior technological advance, i.e., increasing employment rather than reducing it.

What is the Future of AI?

The rise of AI is raising stock valuations, raising deep philosophical questions, and raising expectations and fears about the future. We don’t know for sure how all this will play out, but we do know contact centers, and we know that they stand to benefit greatly from the current iteration of large language models.

These tools are helping agents answer more queries per hour, do so more thoroughly, and make for a better customer experience in the process.

If you want to get in on the action, set up a demo of our technology today.

Request A Demo

What is Sentiment Analysis? – Ultimate Guide

A person only reaches out to a contact center when they’re having an issue. They can’t get a product to work the way they need it to, for example, or they’ve been locked out of their account.

The chances are high that they’re frustrated, angry, or otherwise in an emotionally-fraught state, and this is something contact center agents must understand and contend with.

The term “sentiment analysis” refers to the field of machine learning which focuses on developing algorithmic ways of detecting emotions in natural-language text, such as the messages exchanged between a customer and a contact center agent.

Making it easier to detect, classify, and prioritize messages on the basis of their sentiment is just one of many ways that technology is revolutionizing contact centers, and it’s the subject we’ll be addressing today.

Let’s get started!

What is Sentiment Analysis?

Sentiment analysis involves using various approaches to natural language processing to identify the overall “sentiment” of a piece of text.

Take these three examples:

  1. “This restaurant is amazing. The wait staff were friendly, the food was top-notch, and we had a magnificent view of the famous New York skyline. Highly recommended.”
  2. “Root canals are never fun, but it certainly doesn’t help when you have to deal with a dentist as unprofessional and rude as Dr. Thomas.”
  3. “Toronto’s forecast for today is a high of 75 and a low of 61 degrees.”

Humans excel at detecting emotions, and it’s probably not hard for you to see that the first example is positive, the second is negative, and the third is neutral (depending on how you like your weather.)

There’s a greater challenge, however, in getting machines to make accurate classifications of this kind of data. How exactly that’s accomplished is the subject of the next section, but before we get to that, let’s talk about a few flavors of sentiment analysis.

What Types of Sentiment Analysis Are There?

It’s worth understanding the different approaches to sentiment analysis if you’re considering using it in your contact center.

Above, we provided an example of positive, negative, and neutral text. What we’re doing there is detecting the polarity of the text, and as you may have guessed, it’s possible to make much more fine-grained delineations of textual data.

Rather than simply detecting whether text is positive or negative, for example, we might instead use these categories: very positive, positive, neutral, negative, and very negative.

This would give us a better understanding of the message we’re looking at, and how it should be handled.

Instead of classifying text by its polarity, we might also use sentiment analysis to detect the emotions being communicated – rather than classifying a sentence as being “positive” or “negative”, in other words, we’d identify emotions like “anger” or “joy” contained in our textual data.

This is called “emotion detection” (appropriately enough), and it can be handled with long short-term memory (LSTM) or convolutional neural network (CNN) models.
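
If you’re curious what this looks like under the hood, here’s a minimal sketch of an LSTM-based emotion classifier built with Keras. The library choice, the toy messages, and the two-emotion label scheme are all assumptions we’ve made for illustration; a real system would be trained on thousands of labeled messages, or would simply start from a pretrained model.

    import tensorflow as tf
    from tensorflow.keras import layers

    # Hypothetical toy data -- in practice you'd use thousands of labeled messages.
    texts = [
        "I'm thrilled, the agent fixed my account in minutes",
        "I've been on hold forever and I'm furious",
        "Thanks so much, this was exactly what I needed",
        "This is the third time my order has been lost",
    ]
    labels = [0, 1, 0, 1]  # 0 = "joy", 1 = "anger" (a made-up label scheme)

    # Turn raw text into padded sequences of integer token ids.
    vectorizer = layers.TextVectorization(max_tokens=10000, output_sequence_length=50)
    vectorizer.adapt(texts)
    X = vectorizer(tf.constant(texts))
    y = tf.constant(labels)

    # A small LSTM classifier: embed tokens, read the sequence, predict an emotion.
    model = tf.keras.Sequential([
        layers.Embedding(input_dim=10000, output_dim=64),
        layers.LSTM(64),
        layers.Dense(2, activation="softmax"),  # one output unit per emotion class
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    model.fit(X, y, epochs=5, verbose=0)

    # Classify a new message.
    new_message = vectorizer(tf.constant(["I can't believe how unhelpful that was"]))
    print(model.predict(new_message))  # probabilities for ["joy", "anger"]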

Another, more granular approach to sentiment analysis is known as aspect-based sentiment analysis. It involves two basic steps: identifying “aspects” of a piece of text, then identifying the sentiment attached to each aspect.

Take the sentence “I love the zoo, but I hate the lines and the monkeys make fun of me.” It’s hard to assign an overall sentiment to the sentence – it’s generally positive, but there’s kind of a lot going on.

If we break out the “zoo”, “lines”, and “monkeys” aspects, however, we can see that there’s positive sentiment attached to the zoo, and negative sentiment attached to the lines and the abusive monkeys.
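
To make that two-step process concrete, here’s a deliberately naive sketch in Python. The tiny lexicon, the aspect keywords, and the clause-splitting rule are all toy assumptions on our part (a production system would use trained models for both steps), but it shows the basic shape of the idea.

    import re

    # Toy lexicon and aspect list, purely for illustration.
    SENTIMENT_WORDS = {"love": 1, "great": 1, "hate": -1, "make fun": -1}
    ASPECTS = ("zoo", "lines", "monkeys")

    def aspect_sentiment(sentence):
        """Split a sentence into rough clauses, then score each aspect mentioned in them."""
        scores = {}
        clauses = re.split(r",|\bbut\b|\band\b", sentence.lower())
        for clause in clauses:
            polarity = sum(value for word, value in SENTIMENT_WORDS.items() if word in clause)
            for aspect in ASPECTS:
                if aspect in clause:
                    scores[aspect] = scores.get(aspect, 0) + polarity
        return scores

    print(aspect_sentiment("I love the zoo, but I hate the lines and the monkeys make fun of me."))
    # {'zoo': 1, 'lines': -1, 'monkeys': -1}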

Why is Sentiment Analysis Important?

It’s easy to see how aspect-based sentiment analysis would inform marketing efforts. With a good enough model, you’d be able to see precisely which parts of your offering your clients appreciate, and which parts they don’t. This would give you valuable information in crafting a strategy going forward.

This is true of sentiment analysis more broadly, and of emotion detection too. You need to know what people are thinking, saying, and feeling about you and your company if you’re going to meet their needs well enough to make a profit.

Once upon a time, the only way to get these data was with focus groups and surveys. Those are still utilized, of course. But in the social media era, people are also not shy about sharing their opinions online, in forums, and on similar outlets.

These oceans of words form an invaluable resource if you know how to mine them. When done correctly, sentiment analysis offers just the right set of tools for doing this at scale.

Challenges with Sentiment Analysis

Sentiment analysis confers many advantages, but it is not without its challenges. Most of these issues boil down to handling subtleties or ambiguities in language.

Consider a sentence like “This is a remarkable product, but still not worth it at that price.” Calling a product “remarkable” is a glowing endorsement, tempered somewhat by the claim that its price is set too high. Most basic sentiment classifiers would probably call this “positive”, but as you can see, there are important nuances.

Another issue is sarcasm.

Suppose we showed you a sentence like “This movie was just great, I loved spending three hours of my Sunday afternoon following a story that could’ve been told in twenty minutes.”

A sentiment analysis algorithm is likely going to pick up on “great” and “loved” when calling this sentence positive.

But, as humans, we know that these are backhanded compliments meant to communicate precisely the opposite message.

Machine-learning systems will also tend to struggle with idioms that we all find easy to parse, such as “Setting up my home security system was a piece of cake.” This is positive because “piece of cake” means something like “couldn’t have been easier”, but an algorithm may or may not pick up on that.

Finally, we’ll mention the fact that much of the text in product reviews will contain useful information that doesn’t fit easily into a “sentiment” bucket. Take a sentence like “The new iPhone is smaller than the new Android.” This is just a bare statement of physical facts, and whether it counts as positive or negative depends a lot on what a given customer is looking for.

There are various ways of trying to ameliorate these issues, most of which are outside the scope of this article. For now, we’ll just note that sentiment analysis needs to be approached carefully if you want to glean an accurate picture of how people feel about your offering from their textual reviews. So long as you’re diligent about inspecting the data you show the system and are cautious in how you interpret the results, you’ll probably be fine.

How Does Sentiment Analysis Work?

Now that we’ve laid out a definition of sentiment analysis, talked through a few examples, and made it clear why it’s so important, let’s discuss the nuts and bolts of how it works.

Sentiment analysis begins where all data science and machine learning projects begin: with data. Because sentiment analysis is based on textual data, you’ll need to utilize various techniques for preprocessing NLP data. Specifically, you’ll need to:

  • Tokenize the data by breaking sentences up into individual units an algorithm can process;
  • Use either stemming or lemmatization to turn words into their root form, i.e. by turning “ran” into “run”;
  • Filter out stop words like “the” or “as”, because they don’t add much to the text data.
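
Here’s a rough sketch of what that preprocessing might look like using the NLTK library (one common choice among several; spaCy would work just as well). The example sentence is our own, and you’d run this over your actual customer messages.

    import nltk
    from nltk.corpus import stopwords
    from nltk.stem import WordNetLemmatizer
    from nltk.tokenize import word_tokenize

    # One-time downloads of the resources NLTK needs.
    nltk.download("punkt")
    nltk.download("stopwords")
    nltk.download("wordnet")

    text = "The agents quickly resolved my billing issues"

    # 1. Tokenize: break the sentence into individual units.
    tokens = word_tokenize(text.lower())

    # 2. Lemmatize: reduce words to a root form (e.g. "issues" -> "issue").
    #    Turning "ran" into "run" would also require a part-of-speech tag,
    #    e.g. lemmatizer.lemmatize("ran", pos="v").
    lemmatizer = WordNetLemmatizer()
    lemmas = [lemmatizer.lemmatize(token) for token in tokens]

    # 3. Filter out stop words like "the" and "my" that add little signal.
    stop_words = set(stopwords.words("english"))
    cleaned = [token for token in lemmas if token not in stop_words]

    print(cleaned)  # ['agent', 'quickly', 'resolved', 'billing', 'issue']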

Once that’s done, there are two basic approaches to sentiment analysis. The first is known as “rule-based” analysis. It involves taking your preprocessed textual data and comparing it against a pre-defined lexicon of words that have been tagged for sentiment.

If the word “happy” appears in your text it’ll be labeled “positive”, for example, and if the word “difficult” appears in your text it’ll be labeled “negative.”

(Rules-based sentiment analysis is more nuanced than what we’ve indicated here, but this is the basic idea.)
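
As a toy illustration of the rules-based idea, here’s a tiny scorer written in plain Python. The mini-lexicon is obviously far too small to be useful (real lexicons such as VADER contain thousands of scored words and also handle negation and intensifiers), but the mechanics are the same.

    # A hypothetical mini-lexicon; real lexicons contain thousands of scored entries.
    LEXICON = {
        "friendly": 1,
        "amazing": 2,
        "magnificent": 2,
        "difficult": -1,
        "rude": -2,
        "unprofessional": -2,
    }

    def rule_based_sentiment(text):
        score = sum(LEXICON.get(word.strip(".,!?"), 0) for word in text.lower().split())
        if score > 0:
            return "positive"
        if score < 0:
            return "negative"
        return "neutral"

    print(rule_based_sentiment("The wait staff were friendly and the view was magnificent"))  # positive
    print(rule_based_sentiment("Dr. Thomas was unprofessional and rude"))                     # negative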

The second approach is based on machine learning. A sentiment analysis algorithm will be shown many examples of labeled sentiment data, from which it will learn a pattern that can be applied to new data the algorithm has never seen before.

Of course, there are tradeoffs to both approaches. The rules-based approach is relatively straightforward, but is unlikely to be able to handle the sorts of subtleties that a really good machine-learning system can parse.

Machine learning is more powerful, but it’ll only be as good as the training data it has been given; what’s more, if you’ve built some monstrous deep neural network, it might fail in mysterious ways or otherwise be hard to understand.
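
For the machine-learning route, a simple but surprisingly strong baseline is TF-IDF features feeding a logistic regression classifier. Here’s a minimal sketch using scikit-learn; the handful of training examples are made up, and a real system would learn from thousands of labeled messages.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical labeled data; a real model would be trained on far more messages.
    train_texts = [
        "The agent was friendly and solved my problem fast",
        "Absolutely love this product, highly recommended",
        "Still waiting after three days, this is unacceptable",
        "The app keeps crashing and support was rude",
    ]
    train_labels = ["positive", "positive", "negative", "negative"]

    # TF-IDF features feeding a logistic regression classifier.
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(train_texts, train_labels)

    print(model.predict(["The agent was friendly and solved everything fast"]))  # ['positive']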

Supercharge Your Contact Center with Generative AI

Like used car salesmen or college history teachers, contact center managers need to understand the ways in which technology will change their business.

Machine learning is one such profoundly impactful technology, and it can be used to automatically sort incoming messages by sentiment or priority and generally make your agents more effective.

Realizing this potential could be as difficult as hiring a team of expensive engineers and doing everything in-house, or as easy as getting in touch with us to see how we can integrate the Quiq conversational AI platform into your company.

If you want to get started quickly without spending a fortune, you won’t find a better option than Quiq.

Request A Demo

4 Benefits of Using Generative AI to Improve Customer Experiences

Generative AI has captured the popular imagination and is already changing the way contact centers work.

One area in which it has enormous potential is also one that tends to be top of mind for contact center managers: customer experience.

In this piece, we’re going to briefly outline what generative AI is, then spend the rest of our time talking about how generative AI can improve customer experience with personalized responses, round-the-clock support, and much more.

What is Generative AI?

As you may have puzzled out from the name, “generative AI” refers to a constellation of different deep learning models used to dynamically generate output. This distinguishes them from other classes of models, which might be used to predict returns on Bitcoin, make product recommendations, or translate between languages.

The most famous example of generative AI is, of course, the large language model ChatGPT. After being trained on staggering amounts of textual data, it’s now able to generate extremely compelling output, much of which is hard to distinguish from actual human-generated writing.

Its success has inspired a panoply of competitor models from leading players in the space, including companies like Anthropic, Meta, and Google.

As it turns out, the basic approach underlying generative AI can be utilized in many other domains as well. After natural language, probably the second most popular way to use generative AI is to make images. DALL-E, MidJourney, and Stable Diffusion have proven remarkably adept at producing realistic images from simple prompts, and just this past week, Fable Studios unveiled their “Showrunner” AI, able to generate an entire episode of South Park.

But even this is barely scratching the surface, as researchers are also training generative models to create music, design new proteins and materials, and even carry out complex chains of tasks.

What is Customer Experience?

In the broadest possible terms, “customer experience” refers to the subjective impressions that your potential and current customers have as they interact with your company.

These impressions can be impacted by almost anything, including the colors and font of your website, how easy it is to find e.g. contact information, and how polite your contact center agents are in resolving a customer issue.

Customer experience will also be impacted by which segment a given customer falls into. Power users of your product might appreciate a bevy of new features, whereas casual users might find them disorienting.

Contact center managers must bear all of this in mind as they consider how best to leverage generative AI. In the quest to adopt a shiny new technology everyone is excited about, it can be easy to lose track of what matters most: how your actual customers feel about you.

Be sure to track metrics related to customer experience and customer satisfaction as you begin deploying large language models into your contact centers.

How is Generative AI For Customer Experience Being Used?

There are many ways in which generative AI is impacting customer experience in places like contact centers, which we’ll detail in the sections below.

Personalized Customer Interactions

Machine learning has a long track record of personalizing content. Netflix, to take a famous example, will uncover patterns in the shows you like to watch, and will use algorithms to suggest content that checks similar boxes.

Generative AI, and tools like the Quiq conversational AI platform that utilize it, are taking this approach to a whole new level.

Once upon a time, it was only a human being that could read a customer’s profile and carefully incorporate the relevant information into a reply. Today, a properly fine-tuned generative language model can do this almost instantaneously, and at scale.

From the perspective of a contact center manager who is concerned with customer experience, this is an enormous development. Besides the fact that prior generations of language models simply weren’t flexible enough to have personalized customer interactions, their language also tended to have an “artificial” feel. While today’s models can’t yet replace the all-elusive human touch, they can do a lot to make your agents far more effective in adapting their conversations to the appropriate context.

Better Understanding Your Customers and Their Journeys

Marketers, designers, and customer experience professionals have always been data enthusiasts. Long before we had modern cloud computing and electronic databases, detailed information on potential clients, customer segments, and market trends was printed out on dead trees and guarded closely. With better data comes more targeted advertising, a more granular appreciation for how customers use your product, a clearer sense of why they stop using it, and insight into their broader motivations.

There are a few different ways in which generative AI can be used in this capacity. One of the more promising is by generating customer journeys that can be studied and mined for insight.

When you begin thinking about ways to improve your product, you need to get into your customers’ heads. You need to know the problems they’re solving, the tools they’ve already tried, and their major pain points. These are all things that some clever prompt engineering can elicit from ChatGPT.

We took a shot at generating such content for a fictional network-monitoring enterprise SaaS tool.

While the responses we got back were fairly generic [1], they did single out a number of really important details. These machine-generated journal entries bemoan how unintuitive a lot of monitoring tools are, how they’re not customizable, how they’re exceedingly difficult to set up, and how their endless false alarms are stretching the security teams thin.

It’s important to note that ChatGPT is not soon going to obviate your need to talk to real, flesh-and-blood users. Still, when combined with actual testimony, these generated journeys can be a valuable aid in prioritizing your contact center’s work and alerting you to potential product issues you should be prepared to address.

Round-the-clock Customer Service

As science fiction movies never tire of pointing out, the big downside of fighting a robot army is that machines never need to eat, sleep, or rest. We’re not sure how long we have until the LLMs rise up and wage war on humanity, but in the meantime, these are properties that you can put to use in your contact center.

With the power of generative AI, you can answer basic queries and resolve simple issues pretty much whenever they happen (which will probably be all the time), leaving your carbon-based contact center agents to answer the harder questions when they punch the clock in the morning after a good night’s sleep.

Enhancing Multilingual Support

Machine translation was one of the earliest use cases for neural networks and machine learning in general, and it continues to be an important function today. ChatGPT was remarkably good at multilingual translation right from the start, and you may be surprised to learn that it can even outperform alternatives like Google Translate on many language pairs.

If your product doesn’t currently have a diverse global user base speaking many languages, it hopefully will soon, which means you should start thinking about multilingual support. Not only will this boost table-stakes metrics like average handling time and resolutions per hour, it’ll also contribute to the more ineffable “customer satisfaction.” Nothing says “we care about making your experience with us a good one” like patiently walking a customer through a thorny technical issue in their native tongue.

Things to Watch Out For

Of course, for all the benefits that come from using generative AI for customer experience, it’s not all upside. There are downsides and issues that you’ll want to be aware of.

A big one is the tendency of large language models to hallucinate information. If you ask it for a list of articles to read about fungal computing (which is a real thing whose existence we discovered yesterday), it’s likely to generate a list that contains a mix of real and fake articles.

And because it’ll do so with great confidence and no formatting errors, you might be inclined to simply take its list at face value without double-checking it.

Remember, LLMs are tools, not replacements for your agents. Your agents need to be working with generative AI, checking its output, and incorporating it when and where appropriate.

There’s a wider danger that you will fail to use generative AI in the way that’s best suited to your organization. If you’re running a bespoke LLM trained on your company’s data, for example, you should constantly be feeding it new interactions as part of its fine-tuning, so that it gets better over time.

And speaking of getting better, machine learning models don’t always improve. Owing to factors like changes in the underlying data, model performance can actually degrade over time. You’ll need a way of assessing the quality of the text generated by a large language model, along with a way of consistently monitoring it.

What are the Benefits of Generative AI for Customer Experience?

The reason that people are so excited over the potential of using generative AI for customer experience is that there’s so much upside. Once you’ve got your model infrastructure set up, you’ll be able to answer customer questions at all times of the day or night, in any of a dozen languages, and with a personalization that was once only possible with an army of contact center agents.

But if you’re a contact center manager with a lot to think about, you probably don’t want to spend a bunch of time hiring an engineering team to get everything running smoothly. And, with Quiq, you don’t have to – you can leverage generative AI to supercharge your customer experience while leaving the technical details to us!

Schedule a demo to find out how we can bring this bleeding-edge technology into your contact center, without worrying about the nuts and bolts.

Footnotes
[1] It’s worth pointing out that we spent no time crafting the prompt, which was really basic: “I’m a product manager at a company building an enterprise SAAS tool that makes it easier to monitor system breaches and issues. Could you write me 2-3 journal entries from my target customer? I want to know more about the problems they’re trying to solve, their pain points, and why the products they’ve already tried are not working well.” With a little effort, you could probably get more specific complaints and more usable material.

Understanding the Risk of ChatGPT: What you Should Know

OpenAI’s ChatGPT burst onto the scene less than a year ago and has already seen use in marketing, education, software development, and at least a dozen other industries.

Of particular interest to us is how ChatGPT is being used in contact centers. Though it’s already revolutionizing contact centers by making junior agents vastly more productive and easing the burnout contributing to turnover, there are nevertheless many issues that a contact center manager needs to look out for.

That will be our focus today.

What are the Risks of Using ChatGPT?

In the following few sections, we’ll detail some of the risks of using ChatGPT. That way, you can deploy ChatGPT or another large language model with the confidence born of knowing what the job entails.

Hallucinations and Confabulations

By far the most well-known failure mode of ChatGPT is its tendency to simply invent new information. Stories abound of the model making up citations, peer-reviewed papers, researchers, URLs, and more. To take a recent well-publicized example, ChatGPT accused law professor Jonathan Turley of having behaved inappropriately with some of his students during a trip to Alaska.

The only problem was that Turley had never been to Alaska with any of his students, and the alleged Washington Post story which ChatGPT claimed had reported these facts had also been created out of whole cloth.

This is certainly a problem in general, but it’s especially worrying for contact center managers who may increasingly come to rely on ChatGPT to answer questions or to help resolve customer issues.

To those not steeped in the underlying technical details, it can be hard to grok why a language model will hallucinate in this way. The answer is: it’s an artifact of how large language models are trained.

ChatGPT learns how to output tokens from being trained on huge amounts of human-generated textual data. It will, for example, see the first sentences in a paragraph, and then try to output the text that completes the paragraph. The example below is the opening line of J.D. Salinger’s The Catcher in the Rye; the first part of it is the sort of thing ChatGPT would see as input, and the rest is what it would attempt to generate itself:

“If you really want to hear about it, the first thing you’ll probably want to know is where I was born, and what my lousy childhood was like, and how my parents were occupied and all before they had me, and all that David Copperfield kind of crap, but I don’t feel like going into it, if you want to know the truth.”

Over many training runs, a large language model will get better and better at this kind of autocompletion work, until eventually it gets to the level it’s at today.

But ChatGPT has no native fact-checking abilities – it sees text and outputs what it thinks is the most likely sequence of additional words. Since it sees URLs, papers, citations, etc., during its training, it will sometimes include those in the text it generates, whether or not they’re appropriate (or even real.)
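
If it helps to see that mechanism stripped down to its bare bones, here’s a toy “language model” in Python that does nothing but pick the most common next word it saw during training. It’s laughably simpler than ChatGPT, and the little corpus is invented, but it makes the key point: nowhere in the loop is there a step that checks whether the output is true.

    from collections import Counter, defaultdict

    # A tiny "training corpus" -- purely illustrative.
    corpus = (
        "the study was published in the journal of customer research . "
        "the study was published in the washington post . "
        "the study was retracted last year ."
    ).split()

    # Count which word follows which (a bigram model).
    next_word_counts = defaultdict(Counter)
    for current, following in zip(corpus, corpus[1:]):
        next_word_counts[current][following] += 1

    def generate(start, length=8):
        words = [start]
        for _ in range(length):
            candidates = next_word_counts.get(words[-1])
            if not candidates:
                break
            # Always pick the most common continuation: plausible, not necessarily true.
            words.append(candidates.most_common(1)[0][0])
        return " ".join(words)

    print(generate("the"))  # "the study was published in the study was published"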

Privacy

Another ongoing risk of using ChatGPT is the fact that it could potentially expose sensitive or private information. As things stand, OpenAI, the creators of ChatGPT, offer no robust privacy guarantees for any information placed into a prompt.

If you are trying to do something like named entity recognition or summarization on real people’s data, there’s a chance that it might be seen by someone at OpenAI as part of a review process. Alternatively, it might be incorporated into future training runs. Either way, the results could be disastrous.

But this is not all the information collected by OpenAI when you use ChatGPT. Your timezone, browser type and IP address, cookies, account information, and any communication you have with OpenAI’s support team is all collected, among other things.

In the information age we’ve become used to knowing that big companies are mining and profiting off the data we generate, but given how powerful ChatGPT is, and how ubiquitous it’s becoming, it’s worth being extra careful with the information you give its creators. If you feed it private customer data and someone finds out, that will be damaging to your brand.

Bias in Model Output

By now, it’s pretty common knowledge that machine learning models can be biased.

If you feed a large language model a huge amount of text data in which doctors are usually men and nurses are usually women, for example, the model will associate “doctor” with “maleness” and “nurse” with “femaleness.”

This is generally an artifact of the data the models were trained on, and is not due to any malfeasance on the part of the engineers. This does not, however, make it any less problematic.

There are some clever data manipulation techniques that are able to go a long way toward minimizing or even eliminating these biases, though they’re beyond the scope of this article. What contact center managers need to do is be aware of this problem, and establish monitoring and quality-control checkpoints in their workflow to identify and correct biased output in their language models.

Issues Around Intellectual Property

Earlier, we briefly described the training process for a large language model like ChatGPT (you can find much more detail here.) One thing to note is that the model doesn’t provide any sort of citations for its output, nor any details as to how it was generated.

This has raised a number of thorny questions around copyright. If a model has ingested large amounts of information from the internet, including articles, books, forum posts, and much more, is there a sense in which it has violated someone’s copyright? What about if it’s an image-generation model trained on a database of Getty Images?

By and large, we tend to think this is the sort of issue that isn’t likely to plague contact center managers too much. It’s more likely to be a problem for, say, songwriters who might be inadvertently drawing on the work of other artists.

Nevertheless, a piece on the potential risks of ChatGPT wouldn’t be complete without a section on this emerging problem, and it’s certainly something that you should be monitoring in the background in your capacity as a manager.

Failure to Disclose the Use of LLMs

Finally, there has been a growing tendency to make it plain that LLMs have been used in drafting an article or a contract, if indeed they were part of the process. To the best of our knowledge, there are not yet any laws in place mandating that this has to be done, but it might be wise to include a disclaimer somewhere if large language models are being used consistently in your workflow. [1]

That having been said, it’s also important to exercise proactive judgment in deciding whether an LLM is appropriate for a given task in the first place. In early 2023, the Peabody School at Vanderbilt University landed in hot water when it disclosed that it had used ChatGPT to draft an email about a mass shooting that had taken place at Michigan State.

People may not care much about whether their search recommendations were generated by a machine, but it would appear that some things are still best expressed by a human heart.

Again, this is unlikely to be something that a contact center manager faces much in her day-to-day life, but incidents like these are worth understanding as you decide how and when to use advanced language models.

Mitigating the Risks of ChatGPT

From the moment it was released, it was clear that ChatGPT and other large language models were going to change the way contact centers run. They’re already helping agents answer more queries, utilize knowledge spread throughout the center, and automate substantial portions of work that were once the purview of human beings.

Still, challenges remain. ChatGPT will plainly make things up, and can be biased or harmful in its text. Private information fed into its interface will be visible to OpenAI, and there’s also the wider danger of copyright infringement.

Many of these issues don’t have simple solutions, and will instead require a contact center manager to exercise both caution and continual diligence. But one place where she can make her life much easier is by using a powerful, out-of-the-box solution like the Quiq conversational AI platform.

While you’re worrying about the myriad risks of using ChatGPT, you don’t want to be contending with a million little technical details as well, so schedule a demo with us to find out how our technology can bring cutting-edge language models to your contact center, without the headache.

Footnotes
[1] NOTE: This is not legal advice.

Request A Demo

The Ongoing Management of an LLM Assistant

Technologies like large language models (LLMs) are amazing at rapidly generating polite text that helps solve a problem or answer a question, so they’re a great fit for the work done at contact centers.

But this doesn’t mean that using them is trivial or easy. There are many challenges associated with the ongoing management of an LLM assistant, including hallucinations and the emergence of bad behavior – and that’s not even mentioning the engineering prowess required to fine-tune and monitor these systems.

All of this must be borne in mind by contact center managers, and our aim today is to facilitate this process.

We’ll provide broad context by talking about some of the basic ways in which large language models are being used in business, discuss setting up an LLM assistant, and then enumerate some of the specific steps that need to be taken in using them properly.

Let’s go!

How Are LLMs Being Used in Science and Business?

First, let’s adumbrate some of the ways in which large language models are being utilized on the ground.

The most obvious way is by acting as a generative AI assistant. One of the things that so stunned early users of ChatGPT was its remarkable breadth of capability. It could be used to draft blog posts and web copy, translate between languages, and write or explain code.

This alone makes it an amazing tool, but it has since become obvious that it’s useful for quite a lot more.

One thing that businesses have been experimenting with is fine-tuning large language models like ChatGPT over their own documentation, turning it into a simple interface by which you can ask questions about your materials.

It’s hard to quantify precisely how much time contact center agents, engineers, or other people spend hunting around for the answer to a question, but it’s surely quite a lot. What if instead you could just, y’know, ask for what you want, in the same way you’d ask a human being?

Well, ChatGPT is a long way from being a full person, but when properly trained it can come close where question-answering is concerned.

Stepping back a little bit, LLMs can be prompt engineered into a number of useful behaviors, all of which redound to the benefit of the contact centers which use them. Imagine having an infinitely patient Socratic tutor that could help new agents get up to speed on your product and process, or crafting it into a powerful tool for brainstorming new product designs.

There have also been some promising attempts to extend the functionality of LLMs by making them more agentic – that is, by embedding them in systems that allow them to carry out more open-ended projects. AutoGPT, for example, pairs an LLM with a separate bot that hits the LLM with a chain of queries in the pursuit of some goal.

AssistGPT goes even further in the quest to augment LLMs by integrating them with a set of tools that allow them to achieve objectives involving images and audio in addition to text.

How to Set Up An LLM Assistant

Next, let’s turn to a discussion of how to set up an LLM assistant. Covering this topic fully is well beyond the scope of this article, but we can make some broad comments that will nevertheless be useful for contact center managers.

First, there’s the question of which large language model you should use. In the beginning, ChatGPT was pretty much the only foundation model on offer. Today, however, that situation has changed, and there are now foundation models from Anthropic, Meta, and many other companies.

One of the biggest early decisions you’ll have to make is whether you want to try and use an open-source model (for which the code and the model weights are freely available) or a closed-source model (for which they are not).

If you go the closed-source route you’ll almost certainly be hitting the model over an API, feeding it your queries and getting its responses back. This is orders of magnitude simpler than provisioning an open-source model, but it means that you’ll also be beholden to the whims of some other company’s engineering team. They may update the model in unexpected ways, or simply go bankrupt, and you’ll be left with no recourse.
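
For the closed-source route, “hitting the model over an API” usually amounts to a few lines of code. Here’s a minimal sketch using OpenAI’s Python library as it looked in mid-2023; the method names and parameters change as the SDK evolves, so treat this as illustrative rather than authoritative, and note that the API key is a placeholder.

    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder; store real keys securely, not in code

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a polite, concise contact center assistant."},
            {"role": "user", "content": "A customer can't reset their password. Draft a friendly reply."},
        ],
        temperature=0.3,
    )

    print(response["choices"][0]["message"]["content"])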

Using an open-source alternative, of course, means grabbing the other horn of the dilemma. You’ll have visibility into how the model works and will be free to modify it as you see fit, but this won’t be worth much unless you’re willing to devote engineering hours to the task.

Then, there’s the question of fine-tuning large language models. While ChatGPT and LLMs more generally are quite good on their own, having them answer questions about your product or respond in particular ways means modifying their behavior somehow.

Broadly speaking, there are two ways of doing this, which we’ve mentioned throughout: proper fine-tuning, and prompt engineering. Let’s dig into the differences.

Fine-tuning means showing the model many (i.e. several hundred) examples of the behaviors you want to see, which changes its internal weights and biases it towards those behaviors in the future.

Prompt engineering, on the other hand, refers to carefully structuring your prompts to elicit the desired behavior. These LLMs can be surprisingly sensitive to little details in the instructions they’re provided, and prompt engineers know how to phrase their requests in just the right way to get what they need.

There is also some middle ground between these approaches. “One-shot learning” is a form of prompt engineering in which the prompt contains a single example of the desired behavior, while “few-shot learning” refers to including a small handful of examples, often somewhere between two and five.
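
To make the distinction concrete, here’s what a few-shot prompt might look like in practice. The examples and the phrasing are our own invention; the point is simply that the desired behavior is demonstrated inside the prompt itself, with no change to the model’s weights.

    few_shot_prompt = """Classify each customer message as "billing", "technical", or "shipping".

    Message: I was charged twice for my subscription this month.
    Category: billing

    Message: The app crashes every time I open the settings page.
    Category: technical

    Message: My package says delivered but it never arrived.
    Category: shipping

    Message: I can't log in after the latest update.
    Category:"""

    # The model is expected to continue the pattern and answer "technical".
    # With one-shot learning you'd include a single example; with zero-shot, none.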

Contact center managers thinking about using LLMs will need to think about these implementation details. If you plan on only lightly using ChatGPT in your contact center, a basic course on prompt engineering might be all you need. If you plan on making it an integral part of your organization, however, that most likely means a fine-tuning pipeline and serious technical investment.

The Ongoing Management of an LLM

Having said all this, we can now turn to the day-to-day details of managing an LLM assistant.

Monitoring the Performance of an LLM

First, you’ll need to continuously monitor the model. As hard as it may be to believe given how perfect ChatGPT’s output often is, there isn’t a person somewhere typing the responses. ChatGPT is very prone to hallucinations, in which it simply makes up information, and LLMs more generally can sometimes fall into using harmful or abusive language if they’re prompted incorrectly.

This can be damaging to your brand, so it’s important that you keep an eye on the language created by the LLMs your contact center is using.

And of course, not even LLMs can obviate the need to track the all-important key performance indicators. So far, there’s been one major study on generative AI in contact centers, which found that it increased productivity and reduced turnover, but you’ll still want to measure customer satisfaction, average handle time, etc.

There’s always a temptation to jump on a shiny new technology (remember the blockchain?), but you should only be using LLMs if they actually make your contact center more productive, and the only way you can assess that is by tracking your figures.

Iterative Fine-Tuning and Training

We’ve already had a few things to say about fine-tuning and the related discipline of prompt engineering, and here we’ll build on those preliminary comments.

The big thing to bear in mind is that fine-tuning a large language model is not a one-and-done kind of endeavor. You’ll find that your model’s behavior will drift over time (the technical term is “model degradation”), and this means you will likely have to periodically re-train it.

It’s also common to offer the model “feedback”, i.e. by ranking its responses or indicating when you did or did not like a particular output. You’ve probably heard of reinforcement learning from human feedback, which is one version of this process, but there are also others you can use.

Quality Assurance and Oversight

A related point is that your LLMs will need consistent oversight. They’re not going to voluntarily improve on their own (they’re algorithms with no personal initiative to speak of), so you’ll need to check in routinely to make sure they’re performing well and that your agents are using them responsibly.

There are many parts to this, including checks on the models’ outputs and an audit process that allows you to track down any issues. If you suddenly see a decline in performance, for example, you’ll need to quickly figure out whether it’s isolated to one agent or part of a larger pattern. If it’s the former, was it a random aberration, or did the agent go “off script” in a way that caused the model to behave poorly?

Take another scenario, in which an end-user was shown inappropriate text generated by an LLM. In this situation, you’ll need to take a deeper look at your process. If there were agents interacting with this model, ask them why they failed to spot the problematic text and stop it from being shown to a customer. Or, if it came from a mostly-automated part of your tech stack, you need to uncover why your filters failed to catch it, and perhaps think about keeping humans more in the loop.

The Future of LLM Assistants

Though the future is far from certain, we tend to think that LLMs have left Pandora’s box for good. They’re incredibly powerful tools which are poised to transform how contact centers and other enterprises operate, and experiments so far have been very promising; for all these reasons, we expect that LLMs will become a steadily more important part of the economy going forward.

That said, the ongoing management of an LLM assistant is far from trivial. You need to be aware at all times of how your model is performing and how your agents are using it. Though it can make your contact center vastly more productive, it can also lead to problems if you’re not careful.

That’s where the Quiq platform comes in. Our conversational AI is some of the best that can be found anywhere, able to facilitate customer interactions, automate text-message follow-ups, and much more. If you’re excited by the possibilities of generative AI but daunted by the prospect of figuring out how TPUs and GPUs are different, schedule a demo with us today.

Request A Demo

How Do You Train Your Agents in a ChatGPT World?

There’s long been an interest in using AI for educational purposes. Technologist Danny Hillis has spent decades dreaming of a digital “Aristotle” that would teach everyone in the way that the original Greek wunderkind once taught Alexander the Great, while modern companies have leveraged computer vision, machine learning, and various other tools to help students master complex concepts in a variety of fields.

Still, almost nothing has sparked the kind of enthusiasm for AI in education that ChatGPT and large language models more generally have given rise to. From the first, its human-level prose, knack for distilling information, and wide-ranging abilities made it clear that it would be extremely well-suited for learning.

But that still leaves the question of how. How should a contact center manager prepare for AI, and how should she change the way she trains her agents?

In our view, this question can be understood in two different, related ways:

  1. How can ChatGPT be used to help agents master skills related to their jobs?
  2. How can they be trained to use ChatGPT in their day-to-day work?

In this piece, we’ll take up both of these issues. We’ll first provide a general overview of the ways in which ChatGPT can be used for both education and training, then turn to the question of the myriad ways in which contact center agents can be taught to use this powerful new technology.

How is ChatGPT Used in Education and Training?

First, let’s get into some of the early ways in which ChatGPT is changing education and training.

NOTE: Our comments here are going to be fairly broad, covering some areas that may not be immediately applicable to the work contact center agents do. The main purpose for this is that it’s very difficult to forecast how AI is going to change contact center work.

Our section on “creating study plans and curricula”, for example, might not be relevant to today’s contact center agents. But it could become important down the road if AI gives rise to more autonomous workflows in the future, in which case we expect that agents would be given more freedom to use AI and similar tools to learn the job on their own.

We pride ourselves on being forward-looking and forward-thinking here at Quiq, and we structure our content to reflect this.

Making a Socratic Tutor for Learning New Subjects

The Greek philosopher Socrates famously pioneered the instructional methodology which bears his name. Mostly, the Socratic method boils down to continuously asking targeted questions until areas of confusion emerge, at which point they’re vigorously investigated, usually in a small group setting.

A well-known illustration of this process is found in Plato’s Republic, which starts with an attempt to define “justice” and then expands into a much broader conversation about the best way to run a city and structure a social order.

ChatGPT can’t replace all of this on its own, of course, but with the right prompt engineering, it does a pretty good job. This method works best when paired with a primary source, such as a textbook, which will allow you to double-check ChatGPT’s questions and answers.
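
As an example of the kind of prompt engineering we mean, here’s one way you might set up a Socratic tutor. The wording is just a starting point we made up, and you’d tailor it to your own product and training materials.

    socratic_tutor_prompt = """You are a Socratic tutor helping a new contact center agent learn our
    returns and refunds policy. Do not lecture. Instead:

    1. Ask one short question at a time about the policy.
    2. If the agent answers correctly, ask a harder follow-up question.
    3. If the agent answers incorrectly, do not give the answer away; ask a simpler
       question that nudges them toward it.
    4. After every five questions, summarize what the agent clearly understands
       and where they still seem confused.

    Begin by asking your first question."""

    # You'd supply this as the system prompt (or the first message) and then let the
    # agent-in-training converse with the model from there.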

Having it Explain Code or Technical Subjects

A related area in which people are successfully using ChatGPT is in having it walk you through a tricky bit of code or a technical concept like “inertia”.

The more basic and fundamental, the better. In our experience so far, ChatGPT has almost never failed in correctly explaining simple Python, Pandas, or Java. It did falter when asked to produce code that translates between different orbital reference frames, however, and it had no idea what to do when we asked it about a fairly recent advance in the frontiers of battery chemistry.

There are a few different reasons that we advise caution if you’re a contact center agent trying to understand some part of your product’s codebase. For one thing, if the product is written in a less-common language ChatGPT might not be able to help much.

But even more importantly, you need to be extremely careful about what you put into it. There have already been major incidents in which proprietary code and company secrets were leaked when developers pasted them into the ChatGPT interface, which is visible to the OpenAI team.

Relatedly, if you’re managing teams of contact center agents, you should begin establishing a policy on the appropriate uses of ChatGPT in your contact center. If your product is open-source there’s (probably) nothing to worry about, but otherwise, you need to proactively instruct your agents on what they can and cannot use the tool to accomplish.

Rewriting Explanations for Different Skill Levels

Wired has a popular YouTube series called “5 Levels”, where experts in fields like quantum computing or the blockchain explain their subject at five different skill levels: “child”, “teen”, “college student”, “grad student”, and a fellow “expert.”

One thing that makes this compelling to beginners and pros alike is seeing the same idea explored across such varying contexts – seeing what gets emphasized or left out, or what emerges as you gradually climb up the ladder of complexity and sophistication.

This, too, is a place where ChatGPT shines. You can use it to provide explanations of concepts at different skill levels, which will ultimately improve your understanding of them.
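A prompt along the following lines tends to work well; the topic and the audience labels here are our own illustration, and you’d swap in whatever concept and skill levels make sense for your team.

    "Explain what an order-escalation process is three times: once for a
    brand-new agent, once for an agent with six months of experience, and
    once for a team lead. Keep each version under five sentences."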

For a contact center manager, this means that you can gradually introduce ideas to your agents, starting simply and then fleshing them out as the agents become more comfortable.

Creating Study Plans and Curricula

Stepping back a little, ChatGPT has been used to create entire curricula and even daily study plans for Spanish, computer science, medicine, and various other fields.
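A typical prompt for this looks something like the following (the subject, timeline, and format are, again, just illustrative):

    "I want to learn conversational Spanish in six months, studying about 30
    minutes a day. Create a week-by-week study plan, and after each week list
    three questions I can use to test myself."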

As we noted at the outset, we expect it will be a little while before contact center agents are using ChatGPT for this purpose, as most centers likely have robust training materials they like to use.

Nevertheless, we can project a future in which these materials are much more bare-bones, perhaps consisting of some general notes along with prompts that an agent-in-training can use to ask questions of a model trained on the company’s documentation, test themselves as they go, and gradually build skill.

Training Agents to Use ChatGPT

Now that we’ve covered some of the ways in which present and future contact center agents might use ChatGPT to boost their own on-the-job learning, let’s turn to the other issue we want to tackle: how do you train agents to use ChatGPT today?

Getting Set Up With ChatGPT (and its Plugins)

First, let’s talk about how you can start using ChatGPT.

This section may end up seeming a bit anticlimactic because, honestly, it’s pretty straightforward. Today, you can get access to ChatGPT by going to the signup page. There’s a free version and a paid version that’ll set you back a whopping $20/month (which, in our opinion, is a pretty small price to pay for access to one of the most powerful artifacts the human race has ever produced).

As things stand, the free tier gives you access to GPT-3.5, while the paid version gives you the option to switch to GPT-4 if you want the more powerful foundation model.

A paid account also gives you access to the growing ecosystem of ChatGPT plugins, which you reach by switching over to the GPT-4 option.

There are plugins that let ChatGPT browse the web, let you edit diagrams or chat with PDF documents directly, and let you offload certain kinds of computation to the Wolfram platform.

Contact center agents may or may not find any of these useful right now, but we predict there will be a lot more development in this space going forward, so it’s something managers should know about.

Best Practices for Combining Human and AI Efforts

People have long been fascinated and terrified by automation, but so far, machines have only ever augmented human labor. Knowing when and how to offload work to ChatGPT requires knowing what it’s good for.

Large language models learn to predict the next token from their training data, and are therefore very good at producing rough drafts, outlines, and more routine prose. You’ll generally find it necessary to edit their output fairly heavily so that it accounts for context and fits stylistically with the rest of your content.

As a manager, you’ll need to start thinking about a standard policy for using ChatGPT. Any factual claims made by the model, especially any references or citations, need to be checked very carefully.

Scenario-Based Training

In this same vein, you’ll want to distinguish between the different scenarios in which your agents will end up using generative AI. The considerations involved in using Quiq Compose or Quiq Suggest to format helpful replies, for example, are different from those involved in using generative AI to translate between languages.

Managers will probably want to sit down and brainstorm different scenarios and develop training materials for each one.

Ethical and Privacy Considerations

The rise of generative AI has sparked a much broader conversation about privacy, copyright, and intellectual property.

Much of this isn’t particularly relevant to contact center managers, but one thing you definitely should be paying attention to is privacy. Your agents should never put real customer data into ChatGPT; they should use aliases and fake data whenever they’re trying to resolve a particular issue.
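One lightweight way to reinforce this is to scrub obvious identifiers before anything gets pasted into an outside tool. The sketch below is purely illustrative: a couple of regular expressions that mask email addresses and US-style phone numbers. It is not a complete redaction solution, and it’s no substitute for a real data-handling policy.

    # Illustrative only: mask email addresses and US-style phone numbers
    # before text leaves your environment. Not a complete redaction solution.
    import re

    def scrub(text: str) -> str:
        text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
        text = re.sub(r"\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}", "[PHONE]", text)
        return text

    print(scrub("Customer jane.doe@example.com called from (406) 555-0123 about order 4521."))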

To quote fictional chemist and family man Walter White, we advise you to tread lightly here. Data breaches are a huge and ongoing problem, and they can do substantial damage to your brand.

ChatGPT and What it Means for Training Contact Center Agents

ChatGPT and related technologies are poised to change education and training. They can be used to help get agents up to speed or to work more efficiently, and they, in turn, require a certain amount of instruction to use safely.

These are all things that contact center managers need to worry about, but one thing you shouldn’t spend your time worrying about is the underlying technology. The Quiq conversational AI platform allows you to leverage the power of language models for contact centers, without looking at any code more complex than an API call. If the possibilities of this new frontier intrigue you, schedule a demo with us today!
