Getting the Most Out of Your Customer Insights with AI

The phrase “Knowledge is power” is usually believed to have originated with 16th- and 17th-century English philosopher Francis Bacon, in his Meditationes Sacræ. Because many people recognize something profoundly right about this sentiment, it has become received wisdom in the centuries since.

Now, data isn’t exactly the same thing as knowledge, but it is tremendously powerful. Armed with enough of the right kind of data, contact center managers can make better decisions about how to deploy resources, resolve customer issues, and run their business.

As is usually the case, the data contact center managers are looking for will be unique to their field. This article will discuss these data, why they matter, and how AI can transform how you gather, analyze, and act on them.

Let’s get going!

What are Customer Insights in Contact Centers?

As a contact center, your primary focus is on helping people work through issues related to a software product or something similar. But you might find yourself wondering who these people are, what parts of the customer experience they’re stumbling over, which issues are being escalated to human agents and which are resolved by bots, etc.

If you knew these things, you would be able to notice patterns and start proactively fixing problems before they even arise. This is what customer insights is all about, and it can allow you to fine-tune your procedures, write clearer technical documentation, figure out the best place to use generative AI in your contact center, and much more.

What are the Major Types of Customer Insights?

Before we turn to the specifics of customer insights, let’s cover the major kinds there are. This will provide you with an overarching framework for thinking about this topic and where different approaches might fit in.

Speech and Text Data

Customer service and customer experience both tend to be language-heavy fields. When an agent works with a customer over the phone or via chat, a lot of natural language is generated, and that language can be analyzed. You might use a technique like sentiment analysis, for example, to gauge how frustrated customers are when they contact an agent. This will allow you to form a fuller picture of the people you’re helping, and discover ways of doing so more effectively.
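
To make this concrete, here’s a minimal sketch of what a sentiment pass over chat messages might look like, using the open-source Hugging Face transformers library. The example messages are invented, and a real deployment would run over your own transcripts.

```python
# A minimal sketch of scoring customer messages for sentiment with the
# Hugging Face "transformers" library. The messages below are invented
# purely for illustration.
from transformers import pipeline

# Downloads a small general-purpose sentiment model the first time it runs.
sentiment = pipeline("sentiment-analysis")

messages = [
    "I've been on hold for forty minutes and still can't log in.",
    "Thanks, that fixed it right away!",
]

for text, result in zip(messages, sentiment(messages)):
    # Each result is a dict like {"label": "NEGATIVE", "score": 0.98}.
    print(f"{result['label']:>8}  {result['score']:.2f}  {text}")
```

Tracking the share of negative messages per week, per product, or per agent is one simple way to turn this raw output into an insight.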

Data on Customer Satisfaction

Contact centers exist to make customers happy as they try to use a product, and for this reason, it’s common practice to send out surveys when a customer interaction is done. When done correctly, the information contained in these surveys is incredibly valuable, and can let you know whether or not you’re improving over time, whether a specific approach to training or a new large language model is helping or hurting customer satisfaction, and more.

Predictive Analytics

Predictive analytics is a huge field, but it mostly boils down to using machine learning or something similar to predict the future based on what’s happened in the past. You might try to forecast average handle time (AHT) based on the time of the year, on the premise that the time of year an issue arises has something to do with how long it will take to resolve.

To do this effectively you would need a fair amount of AHT data, along with the corresponding data about when the complaints were raised, and then you could fit a linear regression model on these two data streams. If you find that AHT reliably climbs during certain periods, you can have more agents on hand when required.
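
Here’s a toy sketch of that fit-and-predict loop using scikit-learn; the numbers are made up, and a real analysis would use your own AHT history.

```python
# A toy sketch of the AHT forecasting idea above, using scikit-learn.
# The data points are invented purely for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

# One row per week: the week of the year and the average handle time
# (in minutes) observed that week.
week_of_year = np.array([[1], [10], [20], [30], [40], [50]])
avg_handle_time = np.array([7.2, 6.8, 6.5, 8.9, 9.4, 8.1])

model = LinearRegression().fit(week_of_year, avg_handle_time)

# Predict AHT for a future week so staffing can be planned ahead of time.
print(model.predict(np.array([[45]])))
```

In practice, seasonality usually isn’t a straight line, so you might encode the time of year differently or reach for a richer model, but the basic workflow of fitting on historical data and predicting forward stays the same.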

Data on Agent Performance

Like employees in any other kind of business, agents perform at different levels. Junior agents will likely take much longer to work through a thorny customer issue than more senior ones, of course, and the same could be said for agents with an extensive technical background versus those without the knowledge this background confers. Or, the same agent might excel at certain kinds of tasks but perform much worse on others.

Regardless, by gathering these data on how agents are performing, you, as the manager, can figure out where weaknesses lie across all your teams. With this information, you’ll be able to strategize about how to address those weaknesses with coaching, additional education, a refresh of the standard operating procedures, or what have you.

Channel Analytics

These days, there are usually multiple ways for a customer to get in touch with your contact center, and they all have different dynamics. Sending a long email isn’t the same thing as talking on the phone, and both are distinct from reaching out on social media or talking to a bot. If you have analytics on specific channels, how customers use them, and what their experience was like, you can make decisions about what channels to prioritize.

What’s more, customers will often have interacted with your brand in the past through one or more of these channels. If you’ve been tracking those interactions, you can incorporate this context to personalize responses when they reach out to resolve an issue in the future, which can help boost customer satisfaction.

What Specific Metrics are Tracked for Customer Insights?

Now that we have a handle on what kinds of customer insights there are, let’s talk about specific metrics that come up in contact centers!

First Contact Resolution (FCR)

First contact resolution is the fraction of issues a contact center is able to resolve on the first try, i.e. the first time the customer reaches out; for this reason, it’s sometimes also known as Right First Time (RFT). Note that first contact resolution can apply to any channel, whereas first call resolution applies only when the customer contacts you over the phone. They have the same acronym but refer to two different metrics.

Average Handle Time (AHT)

The average handle time is one of the more important metrics contact centers track, and it refers to the mean length of time an agent spends on a task. This isn’t just how long the agent spends talking to a customer; it encompasses any follow-up work that happens afterward as well.

Customer Satisfaction (CSAT)

The customer satisfaction score attempts to gauge how customers feel about your product and service. It’s common practice to collect this information from many customers and then average it to get a broader picture of how your customers feel. The CSAT can give you a sense of whether customers are getting happier over time, whether certain products, issues, or agents make them happier than others, etc.

Call Abandon Rate (CAR)

The call abandon rate is the fraction of callers who hang up before their question has been answered, often while waiting to reach an agent. It can be affected by many things, including how long customers have to wait on hold, whether they like the “hold” music you play, and similar sorts of factors. You should be aware that CAR doesn’t account for missed calls, lost calls, or dropped calls.
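
To show how simple these calculations are once the data is in hand, here’s a hedged sketch of computing all four metrics from a list of interaction records. The field names are hypothetical; adapt them to however your platform exports data.

```python
# A sketch of computing FCR, AHT, CSAT, and CAR from interaction records.
# The field names and values are hypothetical placeholders.
interactions = [
    {"first_contact_resolution": True,  "handle_minutes": 6.5,  "csat": 5, "abandoned": False},
    {"first_contact_resolution": False, "handle_minutes": 12.0, "csat": 2, "abandoned": False},
    {"first_contact_resolution": True,  "handle_minutes": 4.2,  "csat": 4, "abandoned": True},
]

total = len(interactions)
fcr = sum(i["first_contact_resolution"] for i in interactions) / total
aht = sum(i["handle_minutes"] for i in interactions) / total
csat = sum(i["csat"] for i in interactions) / total
car = sum(i["abandoned"] for i in interactions) / total

print(f"FCR {fcr:.0%} | AHT {aht:.1f} min | CSAT {csat:.1f}/5 | CAR {car:.0%}")
```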

***

Data-driven contact centers track a lot of metrics, and these are just a sample. Nevertheless, they should convey a sense of what kinds of numbers a manager might want to examine.

How Can AI Help with Customer Insights?

And now we come to the main event: a discussion of how artificial intelligence can help contact center managers gather and better utilize customer insights.

Natural Language Processing and Sentiment Analysis

An obvious place to begin is with natural language processing (NLP), which refers to a subfield in machine learning that uses various algorithms to parse (or generate) language.

There are many ways in which NLP can aid in finding customer insights. We’ve already mentioned sentiment analysis, which detects the overall emotional tenor of a piece of language. If you track sentiment over time, you’ll be able to see if you’re delivering more or less customer satisfaction.

You could even get slightly more sophisticated and pair sentiment analysis with something like named entity recognition, which extracts information about entities from language. This would allow you to know, for example, not only that a given customer is upset but also that the name of a particular product kept coming up.
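
Here’s a rough sketch of that pairing, using spaCy for entity extraction and transformers for sentiment. The model names and the example message are assumptions, and spaCy’s small English model would need to be downloaded separately.

```python
# A rough sketch pairing sentiment analysis with named entity recognition.
# Assumes the spaCy model "en_core_web_sm" has already been downloaded.
import spacy
from transformers import pipeline

nlp = spacy.load("en_core_web_sm")          # small English pipeline with NER
sentiment = pipeline("sentiment-analysis")  # general-purpose sentiment model

text = "I'm really frustrated, the Acme Sync app keeps logging me out."

mood = sentiment(text)[0]
entities = [(ent.text, ent.label_) for ent in nlp(text).ents]

print(mood["label"], round(mood["score"], 2))  # e.g. NEGATIVE 0.99
print(entities)                                # whatever entities spaCy spots
```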

Classifying Different Kinds of Communication

For various reasons, contact centers keep transcripts and recordings of all the interactions they have with a customer. This means that they have access to a vast amount of textual information, but since it’s unstructured and messy it’s hard to know what to do with it.

Using any of several different ML-based classification techniques, a contact center manager could begin to tame this complexity. Suppose, for example, she wanted to have a high-level overview of why people are reaching out for support. With a good classification pipeline, she could start automating the process of sorting communications into different categories, like “help logging in” or “canceling a subscription”.

With enough of this kind of information, she could start to spot trends and make decisions on that basis.
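
As a minimal sketch of what such a pipeline could look like, here’s a scikit-learn classifier trained on a handful of invented messages and labels; a real pipeline would be trained on thousands of labeled conversations (or lean on a zero-shot language model instead).

```python
# A minimal sketch of a text-classification pipeline with scikit-learn.
# The training messages and category labels are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "I can't log in to my account",
    "Please reset my password",
    "How do I cancel my subscription?",
    "I want to stop my monthly plan",
]
train_labels = ["login_help", "login_help", "cancel_subscription", "cancel_subscription"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_texts, train_labels)

# New, unlabeled messages get sorted into one of the categories above.
print(clf.predict(["my password isn't working"]))
```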

Statistical Analysis and A/B Testing

Finally, we’ll turn to statistical analysis. Above, we talked a lot about natural language processing and similar endeavors, but more than likely when people say “customer insights” they mean something like “statistical analysis”.

This is a huge field, so we’re going to illustrate its importance with an example focusing on churn. If you have a subscription-based business, you’ll have some customers who eventually leave, and this is known as “churn”. Churn analysis has sprung up to apply data science to understanding these customer decisions, in the hopes that you can resolve any underlying issues and positively impact the bottom line.

What kinds of questions would be addressed by churn analysis? Things like what kinds of customers are canceling (i.e. are they young or old, do they belong to a particular demographic, etc.), figuring out their reasons for doing so, using that to predict which similar customers might be in danger of churning soon, and thinking analytically about how to reduce churn.

And how does AI help? There now exist any number of AI tools that substantially automate the process of gathering and cleaning the relevant data, applying standard tests, and making simple charts, all of which makes your job of extracting customer insights much easier.
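
For a flavor of what churn prediction looks like under the hood, here’s a toy logistic-regression sketch; the features and numbers are invented, and a real model would be trained on your own customer records.

```python
# A toy sketch of churn prediction with scikit-learn. The features and
# labels are invented; real work would use your own customer data.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: months as a customer, support tickets filed in the last quarter.
X = np.array([[24, 0], [3, 5], [18, 1], [2, 7], [30, 2], [5, 6]])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 means the customer eventually churned

model = LogisticRegression().fit(X, y)

# Estimate churn risk for a newer customer who has filed several tickets.
print(model.predict_proba([[4, 4]])[0, 1])
```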

What AI Tools Can Be Used for Customer Insights?

By now you’re probably eager to try using AI for customer insights, but before you do that, let’s spend some time talking about what you’d look for in a customer insights tool.

Performant and Reliable

Ideally, you want something that you can depend upon and that won’t drive you crazy with performance issues. A good customer insights tool will have many optimizations under the hood that make crunching numbers easy, and shouldn’t require you to have a computer science degree to set up.

Straightforward Integration Process

Modern contact centers work across a wide variety of channels, including emails, chat, social media, phone calls, and even more. Whatever AI-powered customer insights platform you go with should be able to seamlessly integrate with all of them.

Simple to Use

Finally, your preferred solution should be relatively easy to use. Quiq Insights, for example, makes it a breeze to create customizable funnels, do advanced filtering, see the surrounding context for different conversations, and much more.

Getting the Most Out of AI-Powered Customer Insights

Data is extremely important to the success or failure of modern businesses, and it’s getting more important all the time. Contact centers have long been forward-looking and eager to adopt new technologies, and they must remain so in our brave new data-powered world.

If you’d like a demo of Quiq Insights, reach out to see how we can help you streamline your operation while boosting customer satisfaction!


Security and Compliance in Next-Gen Contact Centers

Along with almost everyone else, we’ve been singing the praises of large language models like ChatGPT for a while now. We’ve noted how they can be used in retail, how they’re already supercharging contact center agents, and have even put out some content on how researchers are pushing the frontiers of what this technology is capable of.

But none of this is to say that generative AI doesn’t come with serious concerns for security and compliance. In this article, we’ll do a deep dive into these issues. We’ll first provide some context on how advanced AI is being deployed in contact centers, before turning our attention to subjects like data leaks, lack of transparency, and overreliance. Finally, we’ll close with a treatment of the best practices contact center managers can use to alleviate these problems.

What is a “Next-Gen” Contact Center?

First, what are some ways in which a next-generation contact center might actually be using AI? Understanding this will be valuable background for the rest of the discussion about security and compliance, because knowing what generative AI is doing is a crucial first step in protecting ourselves from its potential downsides.

Businesses like contact centers tend to engage in a lot of textual communication, such as when resolving customer issues or responding to inquiries. Due to their proficiency in understanding and generating natural language, LLMs are an obvious tool to reach for when trying to automate or streamline these tasks; for this reason, they have become increasingly popular in enhancing productivity within contact centers.

To give specific examples, there are several key areas where contact center managers can effectively utilize LLMs:

Responding to Customer Queries – High-quality documentation is crucial, yet there will always be customers needing assistance with specific problems. While LLMs like ChatGPT may not have all the answers, they can address many common inquiries, particularly when they’ve been fine-tuned on your company’s documentation.

Facilitating New Employee Training – Similarly, a language model can significantly streamline the onboarding process for new staff members. As they familiarize themselves with your technology and procedures, they may run into points of confusion, where AI can provide quick and relevant information.

Condensing Information – While it may be possible to keep abreast of everyone’s activities on a small team, this becomes much more challenging as the team grows. Generative AI can assist by summarizing emails, articles, support tickets, or Slack threads, allowing team members to stay informed without spending every moment of the day reading.

Sorting and Prioritizing Issues – Not all customer inquiries or issues carry the same level of urgency or importance. Efficiently categorizing and prioritizing these for contact center agents is another area where a language model can be highly beneficial. This is especially so when it’s integrated into a broader machine-learning framework, such as one that’s designed to adroitly handle classification tasks.

Language Translation – If your business has a global reach, you’re eventually going to encounter non-English-speaking users. While tools like Google Translate are effective, a well-trained language model such as ChatGPT can often provide superior translation services, enhancing communication with a diverse customer base.
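
As one concrete illustration of the translation use case, here’s a hedged sketch using the OpenAI Python SDK; the model name is an assumption, and the same pattern works with whichever LLM vendor or platform you actually use.

```python
# A sketch of LLM-powered translation via the OpenAI Python SDK.
# The model name is an assumption; substitute the model you actually use.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

customer_message = "¿Pueden ayudarme a restablecer mi contraseña?"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=[
        {"role": "system", "content": "Translate the customer's message into English."},
        {"role": "user", "content": customer_message},
    ],
)

print(response.choices[0].message.content)
```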

What are the Security and Compliance Concerns for AI?

The preceding section provided valuable context on the ways generative AI is powering the future of contact centers. With that in mind, let’s turn to a specific treatment of the security and compliance concerns this technology brings with it.

Data Leaks and PII

First, it’s no secret that language models are trained on truly enormous amounts of data. And with that, there’s a growing worry about potentially exposing “Personally Identifiable Information” (PII) to generative AI models. PII encompasses details like your actual name and residential address, as well as sensitive information like health records. It’s important to note that even if these records don’t directly mention your name, they could still be used to deduce your identity.

While our understanding of the exact data seen by language models during their training remains incomplete, it’s reasonable to assume they’ve encountered some sensitive data, considering how much of that kind of data exists on the internet. What’s more, even if a specific piece of PII hasn’t been directly shown to an LLM, there are numerous ways it might still come across such data. Someone might input customer data into an LLM to generate customized content, for instance, not recognizing that the model often permanently integrates this information into its framework.

Currently, there’s no effective method for removing data from a trained language model, and no fine-tuning technique has yet been found that ensures the model will never reveal that data again.

Over-Reliance on Models

Are you familiar with the term “ultracrepidarianism”? It’s a fancy SAT word for the habit of consistently giving advice or expressing opinions on things one simply has no expertise in.

A similar sort of situation can arise when people rely too much on language models, or use them for tasks that they’re not well-suited for. These models, for example, are well known to hallucinate (i.e. completely invent plausible-sounding information that is false). If you were to ask ChatGPT for a list of 10 scientific publications related to a particular scientific discipline, you could well end up with nine real papers and one that’s fabricated outright.
From a compliance and security perspective, this matters because you should have qualified humans fact-checking a model’s output – especially if it’s technical or scientific.

To concretize this a bit, imagine you’ve fine-tuned a model on your technical documentation and used it to produce a series of steps that a customer can use to debug your software. This is precisely the sort of thing that should be fact-checked by one of your agents before being sent.

Not Enough Transparency

Large language models are essentially gigantic statistical artifacts that result from feeding an algorithm huge amounts of textual data and having it learn to predict how sentences will end based on the words that came before.

The good news is that this works much better than most of us thought it would. The bad news is that the resulting structure is almost completely inscrutable. While a machine learning engineer might be able to give you a high-level explanation of how the training process works or how a language model generates an output, no one in the world really has a good handle on the details of what these models are doing on the inside. That’s why there’s so much effort being poured into various approaches to interpretability and explainability.

As AI has become more ubiquitous, numerous industries have drawn fire for their reliance on technologies they simply don’t understand. It’s not a good look if a bank loan officer can only shrug and say “The machine told me to” when asked why one loan applicant was approved and another wasn’t.

Depending on exactly how you’re using generative AI, this may not be a huge concern for you. But it’s worth knowing that if you are using language models to make recommendations or as part of a decision process, someone, somewhere may eventually ask you to explain what’s going on. And it’d be wise for you to have an answer ready beforehand.

Compliance Standards Contact Center Managers Should be Familiar With

To wrap this section up, we’ll briefly cover some of the more common compliance standards that might impact how you run your contact center. This material is only a sketch, and should not be taken to be any kind of comprehensive breakdown.

The General Data Protection Regulation (GDPR) – The famous GDPR is a set of regulations put out by the European Union that establishes guidelines around how data must be handled. This applies to any business that interacts with data from a citizen of the EU, not just to companies physically located on the European continent.

The California Consumer Privacy Act (CCPA) – In a bid to give individuals more sovereignty over what happens to their personal data, California created the CCPA. It stipulates that companies have to be clearer about how they gather data, that they have to include privacy disclosures, and that Californians must be given the choice to opt out of data collection.

SOC 2 – SOC 2 is a set of standards created by the American Institute of Certified Public Accountants that stresses confidentiality, privacy, and security with respect to how consumer data is handled and processed.

Consumer Duty – Contact centers operating in the U.K. should know about The Financial Conduct Authority’s new “Consumer Duty” regulations. The regulations’ key themes are that firms must act in good faith when dealing with customers, prevent any foreseeable harm to them, and do whatever they can to further the customer’s pursuit of their own financial goals. Lawmakers are still figuring out how generative AI will fit into this framework, but it’s something affected parties need to monitor.

Best Practices for Security and Compliance when Using AI

Now that we’ve discussed the myriad security and compliance concerns facing contact centers that use generative AI, we’ll close by offering advice on how you can deploy this amazing technology without running afoul of rules and regulations.

Have Consistent Policies Around Using AI

First, you should have a clear and robust framework that addresses who can use generative AI, under what circumstances, and for what purposes. This way, your agents know the rules, and your contact center managers know what they need to monitor and look out for.

As part of crafting this framework, you must carefully study the rules and regulations that apply to you, and you have to ensure that they are reflected in your procedures.

Train Your Employees to Use AI Responsibly

Generative AI might seem like magic, but it’s not. It doesn’t function on its own; it has to be steered by a human being. But since it’s so new, you can’t treat it like something everyone will already know how to use, like a keyboard or Microsoft Word. Your employees should understand the policy that you’ve created around AI’s use, should understand which situations require human fact-checking, and should be aware of the basic failure modes, such as hallucination.

Be Sure to Encrypt Your Data

If you’re worried about PII or data leakages, a simple safeguard is to encrypt your data and to anonymize any personal details before they ever reach a generative AI tool. If you anonymize data correctly, there’s little concern that a model will accidentally disclose something it’s not supposed to down the line.
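
As a very rough illustration of the anonymization half of that advice, here’s a sketch that masks a couple of obvious PII patterns before text is sent anywhere; a real deployment would use a vetted redaction library and proper encryption rather than hand-rolled regexes.

```python
# A rough sketch of masking obvious PII before text is sent to an AI tool.
# The regexes are illustrative only; real systems need far more coverage.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def anonymize(text: str) -> str:
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

print(anonymize("Reach me at jane.doe@example.com or 406-555-0137."))
# -> Reach me at [EMAIL] or [PHONE].
```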

Roll Your Own Model (Or Use a Vendor You Trust)

The best way to ensure that you have total control over the model pipeline – including the data it’s trained on and how it’s fine-tuned – is to simply build your own. That being said, many teams will simply not be able to afford to hire the kinds of engineers who are equal to this task. In that case, you should utilize a model built by a third party with a sterling reputation and many examples of prior success, like the Quiq platform.

Engage in Regular Auditing

As we mentioned earlier, AI isn’t magic – it can sometimes perform in unexpected ways, and its performance can also simply degrade over time. You need to establish a practice of regularly auditing any models you have in production to make sure they’re still behaving appropriately. If they’re not, you may need to do another training run, examine the data they’re being fed, or try to fine-tune them.

Futureproofing Your Contact Center Security

The next generation of contact centers is almost certainly going to be one that makes heavy use of generative AI. There are just too many advantages, from lower average handling time to reduced burnout and turnover, to forego it.

But doing this correctly is a major task, and if you want to skip the engineering hassle and headache, give the Quiq conversational AI platform a try! We have the expertise required to help you integrate a robust, powerful generative AI tool into your contact center, without the need to write a hundred thousand lines of code.


LLM-Powered AI Assistants for Hotels – Ultimate Guide

New technologies have always been disruptive, supercharging the firms that embrace them and requiring the others to adapt or be left behind.

With the rise of new approaches to AI, such as large language models, we can see this dynamic playing out again. One place where AI assistants could have a major impact is in the hotel industry.

In this piece, we’ll explore the various ways AI assistants can be used in hotels, and what that means for the hoteliers that keep these establishments running.

Let’s get going!

What is an AI Assistant?

The term “AI assistant” refers to any use of an algorithmic or machine-learning system to automate a part of your workflow. A relatively simple example would be the autocomplete found in almost all text-editing software today, while a much more complex example might involve stringing together multiple chain-of-thought prompts into an agent capable of managing your calendar.

There are a few major types of AI assistants. Near and dear to our hearts, of course, are chatbots that function in places like contact centers. These can be agent-facing or customer-facing, and can help with answering common questions, drafting replies to customer inquiries, and automatically translating between many different natural languages.

Chatbots (and large language models more generally) can also be augmented to produce speech, giving rise to so-called “voice assistants”. These tend to work like other kinds of chatbots but have the added ability to actually vocalize their text, creating a much more authentic customer experience.

In a famous 2018 demo, Google Duplex was able to complete a phone call to a local hair salon to make a reservation. One remarkable thing about the AI assistant was how human it sounded – its speech even included “uh”s and “mm-hmm”s that made it almost indistinguishable from an actual person, at least over the phone and for short interactions.

Then, there are 3D avatars. These digital entities are crafted to look as human as possible, and are perfect for basic presentations, websites, games, and similar applications. Graphics technology has gotten astonishingly good over the past few decades, and this, in conjunction with the emergence of technologies like virtual reality and the metaverse, means that 3D avatars could play a major role in the contact centers of the future.

One thing to think about if you’re considering using AI assistants in a hotel or hospitality service is how specialized you want them to be. Although there is a significant effort underway to build general-purpose assistants that are able to do most of what a human assistant does, it remains true that your agents will do better if they’re fine-tuned on a particular domain. For the time being, you may want to focus on building an AI assistant that is targeted at providing excellent email replies, for example, or answering detailed questions about your product or service.

That being said, we recommend you check the Quiq blog often for updates on AI assistants; when there’s a breakthrough, we’ll deliver actionable news as soon as possible.

How Will AI Assistants Change Hotels?

Though the audience we speak to is largely comprised of people working in or managing contact centers, the truth is that there are many overlaps with those in the hospitality space. Since these are both customer-service and customer-oriented domains, insights around AI assistants almost always transfer over.

With that in mind, let’s dive in now to talk about how AI is poised to transform the way hotels function!

AI for Hotel Operations

Like most jobs, operating a hotel involves many tasks that require innovative thinking and improvisation, and many others that are repetitive, rote, and quotidian. Booking a guest, checking them in, making small changes to their itinerary, and so forth are in the latter category, and are precisely the sorts of things that AI assistants can help with.

In an earlier example, we saw that chatbots were already able to handle appointment booking five years ago, so it requires no great leap in imagination to see how slightly more powerful systems would be able to do this on a grander scale. If it soon becomes possible to offload much of the day-to-day of getting guests into their rooms to the machines, that will free up a great deal of human time and attention to go towards more valuable work.

It’s possible, of course, that this will lead to a dramatic reduction in the workforce needed to keep hotels running, but so far, the evidence points the other way; when large language models have been used in contact centers, the result has been more productivity (especially among junior agents), less burnout, and reduced turnover. We can’t say definitively that this will apply in hotel operations, but we also don’t see any reason to think that it wouldn’t.

AI for Managing Hotel Revenues

Another place that AI assistants can change hotels is in forecasting and boosting revenues. We think this will function mainly by making it possible to do far more fine-grained analyses of consumption patterns, inventory needs, etc.

Everyone knows that there are particular times of the year when vacation bookings surge, and others in which there are a relatively small number of bookings. But with the power of big data and sophisticated AI assistants, analysts will be able to do a much better job of predicting surges and declines. This means prices for rooms or other accommodations will be more fluid and dynamic, changing in near real-time in response to changes in demand and the personal preferences of guests. The ultimate effect will be an increase in revenue for hotels.

AI in Marketing and Customer Service

A similar line of argument holds for using AI assistants in marketing and customer service. Just as both hotels and guests are better served when we can build models that allow for predicting future bookings, everyone is better served when it becomes possible to create more bespoke, targeted marketing.

By utilizing data sources like past vacations, Google searches, and browser history, AI assistants will be able to meet potential clients where they’re at, offering them packages tailored to exactly what they want and need. This will not only mean increased revenue for the hotel, but far more satisfaction for the customers (who, after all, might have gotten an offer that they themselves didn’t realize they were looking for.)

If we were trying to find a common theme between this section and the last one, we might settle on “flexibility”. AI assistants will make it possible to flexibly adjust prices (raising them during peak demand and lowering them when bookings level off), flexibly tailor advertising to serve different kinds of customers, and flexibly respond to complaints, changes, etc.

Smart Buildings in Hotels

One particularly exciting area of research in AI centers around so-called “smart buildings”. By now, most of us have seen relatively “smart” thermostats that are able to learn your daily patterns and do things like turn the temperature up when you leave to save on the cooling bill while turning it down to your preferred setting as you’re heading home from work.

These are certainly worthwhile, but they barely even scratch the surface of what will be possible in the future. Imagine a room where every device is part of an internet-of-things, all wired up over a network to communicate with each other and gather data about how to serve your needs.

Your refrigerator would know when you’re running low on a few essentials and automatically place an order, a smart stove might be able to take verbal commands (“cook this chicken to 180 degrees, then power down and wait”) to make sure dinner is ready on time, a smoothie machine might be able to take in data about your blood glucose levels and make you a pre-workout drink specifically tailored to your requirements on that day, and so on.

Pretty much all of this would carry over to the hotel industry as well. As is usually the case, there are real privacy concerns here, but assuming those challenges can be met, hotel guests may one day enjoy a level of service that is simply not possible with a staff comprised only of human beings.

Virtual Tours and Guest Experience

Earlier, we mentioned virtual reality in the context of 3D avatars that will enhance customer experience, but it can also be used to provide virtual tours. We’re already seeing applications of this technology in places like real estate, but there’s no reason at all that it couldn’t also be used to entice potential guests to visit different vacation spots.

When combined with flexible and intelligent AI assistants, this too could boost hotel revenues and better meet customer needs.

Using AI Assistants in Hotels

As part of the service industry, hoteliers work constantly to best meet their customers’ needs and, for this reason, they would do well to keep an eye on emerging technologies. Though many advances will have little to do with their core mission, others, like those related to AI assistants, will absolutely help them forecast future demands, provide personalized service, and automate routine parts of their daily jobs.

If all of this sounds fascinating to you, consider checking out the Quiq conversational CX platform. Our sophisticated offering utilizes large language models to help with tasks like question answering, following up with customers, and perfecting your marketing.

Schedule a demo with us to see how we can bring your hotel into the future!


Explainability vs. Interpretability in Machine Learning Models

In recent months, we’ve produced a tremendous amount of content about generative AI – from high-level primers on what large language models are and how they work, to discussions of how they’re transforming contact centers, to deep dives on the cutting edge of generative technologies.

This amounts to thousands of words, much of it describing how models like ChatGPT were trained by having them, for example, iteratively predict what the final sentence of a paragraph will be given the previous sentences.

But for all that, there’s still a tremendous amount of uncertainty about the inner workings of advanced machine-learning systems. Even the people who build them generally don’t understand how particular functions emerge or what a particular circuit is doing.

It would be more accurate to describe these systems as having been grown, like an inconceivably complex garden. And just as you might have questions if your tomatoes started spitting out math proofs, it’s natural to wonder why generative models are behaving in the way that they are.

These questions are only going to become more important as these technologies are further integrated into contact centers, schools, law firms, medical clinics, and the economy in general.

If we use machine learning to decide who gets a loan or who is likely to have committed a crime, or to have open-ended conversations with our customers, it really matters that we know how all this works.

The two big approaches to this task are explainability and interpretability.

Comparing Explainability and Interpretability

Under normal conditions, this section would come at the very end of the article, after we’d gone through definitions of both these terms and illustrated how they work with copious examples.

We’re electing to include it at the beginning for a reason; while the machine-learning community does broadly agree on what these two terms mean, there’s a lot of confusion about which bucket different techniques go into.

Below, for example, we’ll discuss Shapley Additive Explanations (SHAP). Some sources file this as an approach to explainability, while others consider it a way of making a model more interpretable.

A major contributing factor to this overlap is the simple fact that the two concepts are very closely related. Once you can explain a fact you can probably interpret it, and a big part of interpretation is explanation.

Below, we’ve tried our best to make sense of these important research areas, and have tried to lay everything out in a way that will help you understand what’s going on.

With that caveat out of the way, let’s define explainability and interpretability.

Broadly, explainability means analyzing the behavior of a model to understand why a given course of action was taken. If you want to know why data point “a” was sorted into one category while data point “b” was sorted into another, you’d probably turn to one of the explainability techniques described below.

Interpretability means making features of a model, such as its weights or coefficients, comprehensible to humans. Linear regression models, for example, calculate sums of weighted input features, and interpretability would help you understand what exactly that means.
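
To see what that looks like in practice, here’s a small sketch of fitting a linear regression and reading its coefficients directly; the housing-style numbers are made up for illustration.

```python
# A small sketch of an interpretable model: the coefficients of a fitted
# linear regression can be read off directly. The data are invented.
import numpy as np
from sklearn.linear_model import LinearRegression

# Columns: square footage, number of bedrooms. Target: price in $1,000s.
X = np.array([[1400, 3], [1600, 3], [1700, 4], [1100, 2]])
y = np.array([250, 280, 310, 190])

model = LinearRegression().fit(X, y)

# Each coefficient is the model's predicted change in price for a one-unit
# change in that feature, holding the other feature fixed.
print(model.coef_, model.intercept_)
```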

Here’s an analogy that might help: you probably know at least a little about how a train works. Understanding that it needs fuel to move, has to have tracks constructed a certain way to avoid crashing, and needs brakes in order to stop would all contribute to the interpretability of the train system.

But knowing which kind of fuel it requires and for what reason, why the tracks must be made out of a certain kind of material, and how exactly pulling a brake switch actually gets the train to stop are all facets of the explainability of the train system.

What is Explainability in Machine Learning?

In machine learning, explainability refers to any set of techniques that allow you to reason about the nuts and bolts of the underlying model. If you can at least vaguely follow how data are processed and how they impact the final model output, the system is explainable to that degree.

Before we turn to the techniques utilized in machine learning explainability, let’s talk at a philosophical level about the different types of explanations you might be looking for.

Different Types of Explanations

There are many approaches you might take to explain an opaque machine-learning model. Here are a few:

  • Explanations by text: One of the simplest ways of explaining a model is by reasoning about it with natural language. The better sorts of natural-language explanations will, of course, draw on some of the explainability techniques described below. You can also try to talk about a system logically, by, e.g., describing it as calculating logical AND, OR, and NOT operations.
  • Explanations by visualization: For many kinds of models, visualization will help tremendously in increasing explainability. Support vector machines, for example, use a decision boundary to sort data points and this boundary can sometimes be visualized. For extremely complex datasets this may not be appropriate, but it’s usually worth at least trying.
  • Local explanations: There are whole classes of explanation techniques, like LIME, that operate by illustrating how a black-box model works in some particular region. In other words, rather than trying to parse the whole structure of a neural network, we zoom in on one part of it and say “This is what it’s doing right here.”

Approaches to Explainability in Machine Learning

Now that we’ve discussed the varieties of explanation, let’s get into the nitty-gritty of how explainability in machine learning works. There are a number of different explainability techniques, but we’re going to focus on two of the biggest: SHAP and LIME.

Shapley Additive Explanations (SHAP) are derived from game theory and are a commonly-used way of making models more explainable. The basic idea is that you’re trying to parcel out “credit” for the model’s outputs among its input features. In game theory, potential players can choose to enter a game, or not, and this is the first idea that is ported over to SHAP.

SHAP “values” are generally calculated by looking at how a model’s output changes based on different combinations of features. If a model has, say, 10 input features, you could look at the output of four of them, then see how that changes when you add a fifth.

By running this procedure for many different feature sets, you can understand how any given feature contributes to the model’s overall predictions.
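
Here’s a hedged sketch of what that looks like with the open-source shap package on a small synthetic dataset; the model and data are stand-ins for whatever you’ve actually trained.

```python
# A sketch of computing SHAP values with the "shap" package for a tree
# model trained on synthetic data. Everything here is illustrative.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=200)

model = RandomForestRegressor(n_estimators=50).fit(X, y)

# shap.Explainer picks an appropriate algorithm for the model (a tree
# explainer here) and returns one additive contribution per feature
# for every prediction it explains.
explainer = shap.Explainer(model)
shap_values = explainer(X[:5])
print(shap_values.values.shape)  # (5, 4): five predictions, four features
```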

Local Interpretable Model-Agnostic Explanation (LIME) is based on the idea that our best bet in understanding a complex model is to first narrow our focus to one part of it, then study a simpler model that captures its local behavior.

Let’s work through an example. Imagine that you’ve taken an enormous amount of housing data and fit a complex random forest model that’s able to predict the price of a house based on features like how old it is, how close it is to neighbors, etc.

LIME lets you figure out what the random forest is doing in a particular region, so you’d start by selecting one row of the data frame, which would contain both the input features for a house and its price. Then, you would “perturb” this sample, which means that for each of its features and its price, you’d sample from a distribution around that data point to create a new, perturbed dataset.

You would feed this perturbed dataset into your random forest model and get a new set of perturbed predictions. On this perturbed dataset (the perturbed features paired with the model’s predictions for them), you’d then train a simple model, like a linear regression.

Linear regression is almost never as flexible and powerful as a random forest, but it does have one advantage: it comes with a bunch of coefficients that are fairly easy to interpret.

This LIME approach won’t tell you what the model is doing everywhere, but it will give you an idea of how the model is behaving in one particular place. If you do a few LIME runs, you can form a picture of how the model is functioning overall.
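
Here’s a rough sketch of that procedure with the open-source lime package on synthetic housing-style data; the feature names and model are placeholders for your own.

```python
# A rough sketch of LIME on tabular data. The dataset, feature names, and
# model are synthetic placeholders for illustration.
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(500, 3))
y = 100 * X[:, 0] + 20 * X[:, 1] + rng.normal(scale=1.0, size=500)

model = RandomForestRegressor(n_estimators=50).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=["age", "distance_to_neighbors", "square_footage"],
    mode="regression",
)

# Explain the model's behavior in the neighborhood of one particular house.
explanation = explainer.explain_instance(X[0], model.predict, num_features=3)
print(explanation.as_list())  # the local, linear approximation of the forest
```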

What is Interpretability in Machine Learning?

In machine learning, interpretability refers to a set of approaches that shed light on a model’s internal workings.

SHAP, LIME, and other explainability techniques can also be used for interpretability work. Rather than go over territory we’ve already covered, we’re going to spend this section focusing on an exciting new field of interpretability, called “mechanistic” interpretability.

Mechanistic Interpretability: A New Frontier

Mechanistic interpretability is defined as “the study of reverse-engineering neural networks”. Rather than examining subsets of input features to see how they impact a model’s output (as we do with SHAP) or training a more interpretable local model (as we do with LIME), mechanistic interpretability involves going directly for the goal of understanding what a trained neural network is really, truly doing.

It’s a very young field that so far has only tackled networks like GPT-2 – no one has yet figured out how GPT-4 functions – but already its results are remarkable. It will allow us to discover the actual algorithms being learned by large language models, which will give us a way to check them for bias and deceit, understand what they’re really capable of, and how to make them even better.

Why are Interpretability and Explainability Important?

Interpretability and explainability are both very important areas of ongoing research. Not so long ago (less than twenty years), neural networks were interesting systems that weren’t able to do a whole lot.

Today, they are feeding us recommendations for news and entertainment, driving cars, trading stocks, generating reams of content, and making decisions that affect people’s lives, forever.

This technology is having a huge and growing impact, and it’s no longer enough for us to have a fuzzy, high-level idea of what they’re doing.

We now know that they work, and with techniques like SHAP, LIME, mechanistic interpretability, etc., we can start to figure out why they work.

Final Thoughts on Interpretability vs. Explainability

In contact centers and elsewhere, large language models are changing the game. But though their power is evident, they remain a predominantly empirical triumph.

The inner workings of large language models remain a mystery, one that has only recently begun to be unraveled through techniques like the ones we’ve discussed in this article.

Though it’s probably asking too much to expect contact center managers to become experts in machine learning interpretability or explainability, hopefully, this information will help you make good decisions about how you want to utilize generative AI.

And speaking of good decisions, if you do decide to move forward with deploying a large language model in your contact center, consider doing it through one of the most trusted names in conversational AI. In recent weeks, the Quiq platform has added several tools aimed at making your agents more efficient and your customers happier.

Set up a demo today to see how we can help you!
