9 Top Customer Service Challenges — and How to Overcome Them

It’s a shame that customer service doesn’t always get the respect and attention it deserves because it’s among the most important ingredients in any business’s success. There’s no better marketing than an enthusiastic user base, so every organization should strive to excel at making customers happy.

Alas, this is easier said than done. When someone comes to you with a problem, they can be angry, stubborn, mercurial, and—let’s be honest—extremely frustrating. Some of this just comes with the territory, but some stems from the fact that many customer service professionals simply don’t have a clear, high-level view of customer service challenges or how to overcome them.

That’s what we’re going to remedy in this post. Let’s jump right in!

What Are the Top Customer Service Challenges?

After years of running a generative AI platform for contact centers and interacting with leaders in this space, we have discovered that the top customer service challenges are:

  1. Understanding Customer Expectations
  2. Exceeding Customer Expectations
  3. Dealing with Unreasonable Customer Demands
  4. Improving Your Internal Operations
  5. Not Offering a Preferred Communication Channel
  6. Not Offering Real-Time Options
  7. Handling Angry Customers
  8. Dealing With a Service Outage Crisis
  9. Retaining, Hiring, and Training Service Professionals

In the sections below, we’ll break each of these down and offer strategies for addressing them.

1. Understanding Customer Expectations

No matter how specialized a business is, it will inevitably cater to a wide variety of customers. Every customer has different desires, expectations, and needs regarding a product or service, which means you need to put real effort into meeting them where they are.

One of the best ways to foster this understanding is to remain in consistent contact with your customers. Deciding which communication channels to offer customers depends a great deal on the kinds of customers you’re serving. That said, in our experience, text messaging is a universally successful method of communication because it mimics how people communicate in their personal lives. The same goes for web chat and WhatsApp.

Beyond this, setting the right expectations upfront is another good way to address common customer service challenges. For example, if you are not available 24/7, only provide support via email, or don’t have dedicated account managers, you should make that clear right at the beginning.

Nothing will make a customer angrier than thinking they can text you only to realize that’s not an option in the middle of a crisis.

2. Exceeding Customer Expectations

Once you understand what your customers want and need, the next step is to go above and beyond to make them happy. Everyone wants to stand out in a fiercely competitive market, and going the extra mile is a great way to do that. One of the major customer service challenges is knowing how to do this proactively, but there are many ways you can succeed without a huge amount of effort.

Consider a few examples, such as:

  • Treating the customer as you would a friend in your personal life, i.e. by apologizing for any negative experiences and empathizing with how they feel;
  • Offering a credit or discount for a future purchase;
  • Sending them a card referencing their experience and thanking them for being a loyal customer.

The key is making sure they feel seen and heard. If you do this consistently, you’ll exceed your customers’ expectations, and the chances of them becoming active promoters of your company will increase dramatically.

3. Dealing with Unreasonable Demands

Of course, sometimes a customer has expectations that simply can’t be met, and this, too, counts as one of the serious customer service challenges. Customer service professionals often find themselves in situations where someone wants a discount that can’t be given, a feature that can’t be built, or a bespoke customization that can’t be done, and they wonder what they should do.

The only thing to do in this situation is to gently let the customer down, using respectful and diplomatic language. Something like, “We’re really sorry we’re not able to fulfill your request, but we’d be happy to help you choose an option that we currently have available” should do the trick.

4. Improving Your Internal Operations

Customer service teams face constant pressure to improve efficiency, maintain high CSAT scores, drive revenue, and keep the cost of serving customers low. This matters a lot; slow response times and being bounced from one department to another are two of the more common complaints contact centers get from irate customers, and both are fixable with appropriate changes to your procedures.

Improving contact center performance is among the thorniest customer service challenges, but there’s no reason to give up hope!

One thing you can do is gather and utilize better data regarding your internal workflows. Data has been called “the new oil,” and with good reason—when used correctly, it’s unbelievably powerful.

What might this look like?

Well, you are probably already tracking metrics like first contact resolution (FCR) and average handle time (AHT), but this is easier when you have a unified, comprehensive dashboard that gives you quick insight into what’s happening across your organization.
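To make this concrete, here’s a toy sketch of how you might compute FCR and AHT from a batch of resolved tickets. The field names are illustrative, not taken from any particular platform:

```python
# Hypothetical ticket records; in practice these would come from your
# contact center platform's reporting API.
tickets = [
    {"id": 1, "contacts": 1, "handle_minutes": 6.5},
    {"id": 2, "contacts": 3, "handle_minutes": 14.0},
    {"id": 3, "contacts": 1, "handle_minutes": 4.2},
    {"id": 4, "contacts": 2, "handle_minutes": 9.3},
]

# First contact resolution: share of tickets closed in a single contact.
fcr = sum(1 for t in tickets if t["contacts"] == 1) / len(tickets)

# Average handle time: mean minutes spent per ticket.
aht = sum(t["handle_minutes"] for t in tickets) / len(tickets)

print(f"FCR: {fcr:.0%}, AHT: {aht:.1f} min")  # FCR: 50%, AHT: 8.5 min
```

A dashboard is essentially these few lines applied continuously and broken out by team, channel, and time window.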

You might also consider leveraging the power of generative AI, which has led to AI assistants that can boost agent performance in a variety of different tasks. You have to tread lightly here because too much bad automation will also drive customers away. But when you use technology like large language models according to best practices, you can get more done and make your customers happier while still reducing the burden on your agents.

5. Not Offering a Preferred Communication Channel

Contact centers often deal with customer service challenges stemming from new technologies. One way this manifests is the need to cultivate new channels in line with changing patterns in the way we all communicate.

You can probably see where this is going – something like 96% of Americans have some kind of cell phone, and if you’ve looked up from your own phone recently, you’ve probably noticed everyone else glued to theirs.

It isn’t just that customers now want to be able to text you instead of calling or emailing; the ubiquity of cell phones has changed their basic expectations. They now take it for granted that your agents will be available round the clock, that they can chat with an agent asynchronously as they go about other tasks, etc.

We can’t tell you whether it’s worth investing in multiple communication channels for your industry. But based on our research, we can tell you that having multiple channels—and text messaging in particular—is something most people want and expect.

6. Not Offering Real-Time Options

When customers reach out asking for help, their problems likely feel unique to them. But since you have so much more context, you’re aware that a very high percentage of inquiries fall into a few common buckets, like “Where is my order?”, “How do I handle a return?”, “My item arrived damaged, how can I exchange it for a new one?”, etc.

These and similar inquiries can easily be resolved instantly using AI, leaving customers and agents happier and more productive.

7. Handling Angry Customers

A common story in the customer service world involves an interaction going south and a customer getting angry.

Gracefully handling angry customers is one of those perennial customer service challenges; the very first merchants had to deal with angry customers, and our robot descendants will be dealing with angry customers long after the sun has burned out.

Whenever you find yourself dealing with a customer who has become irate, there are two main things you have to do:

  1. Empathize with them
  2. Do not lose your cool

It can be hard to remember, but the customer isn’t frustrated with you; they’re frustrated with the company and its products. If you keep your responses calm and rooted in the facts of the situation, you’ll always be moving toward a solution.

8. Dealing With a Service Outage Crisis

Sometimes, our technology fails us. The wifi isn’t working on the airplane, a cell phone tower is down following a lightning storm, or that printer from Office Space jams so often it starts to drive people insane.

As a customer service professional, you might find yourself facing the wrath of your customers if your service is down. Unfortunately, in a situation like this, there’s not much you can do except honestly convey to your customers that your team is putting all their effort into getting things back on track. You should go into these conversations expecting frustrated customers, but make sure you avoid the temptation to overpromise.

Talk with your tech team and give customers a realistic timeline; don’t assure them it’ll be back in three hours if you have no way to back that up. Though Elon Musk seems to get away with it, the worst thing the rest of us can do is repeatedly promise unrealistic timelines and miss the mark.

9. Retaining, Hiring, and Training Service Professionals

You may have seen this famous Maya Angelou quote, which succinctly captures what the customer service business is all about:

“I’ve learned that people will forget what you said, people will forget what you did, but people will never forget how you made them feel.”

Learning how to comfort or reassure a person is high on the list of customer service challenges, and it’s something that should certainly be covered in your training for new agents.

But training is also important because it eases the strain on agents and reduces turnover. Customer service professionals stay with one company for a median of less than a year, and every time someone leaves, that means finding a replacement, training them, and hoping they don’t head for the exits before your investment has paid off.

Keeping your agents happy will save you more money than you imagine, so invest in a proper training program. Ensure they know what’s expected of them, how to ask for help when needed, and how to handle challenging customers.

Final Thoughts on the Top Customer Service Challenges

Customer service challenges abound, but with the right approach, there’s no reason you shouldn’t be able to meet them head-on!

Check out our report for a more detailed treatment of three major customer service challenges and how to resolve them. Between the report and this post, you should be armed with enough information to identify your own internal challenges, fix them, and rise to new heights.

5 Tips for Coaching Your Contact Center Agents to Work with AI

Generative AI has enormous potential to change the work done at places like contact centers. For this reason, we’ve spent a lot of energy covering it, from deep dives into the nuts and bolts of large language models to detailed advice for managers considering adopting it.

Here, we will provide tips on using AI tools to coach, manage, and improve your agents.

How Will AI Make My Agents More Productive?

Contact centers can be stressful places to work, but much of that stems from a paucity of good training and feedback. If an agent doesn’t feel confident in assuming their responsibilities or doesn’t know how to handle a tricky situation, that will cause stress.

Tip #1: Make Collaboration Easier

With the right AI tools for coaching agents, you can get state-of-the-art collaboration tools that allow agents to invite their managers or colleagues to silently appear in the background of a challenging issue. The customer never knows there’s a team operating on their behalf, but the agent won’t feel as overwhelmed. These same tools also let managers dynamically monitor all their agents’ ongoing conversations, intervening directly if a situation gets out of hand.

Agents can learn from these experiences and improve their performance over time.

Tip #2: Use Data-Driven Management

Speaking of improvement, a good AI platform will have resources that help managers get the most out of their agents in a rigorous, data-driven way. Of course, you’re probably already monitoring contact center metrics, such as CSAT and FCR scores, but this barely scratches the surface.

What you really need is a granular look into agent interactions and their long-term trends. This will let you answer questions like “Am I overstaffed?” and “Who are my top performers?” This is the only way to run a tight ship and keep all the pieces moving effectively.

Tip #3: Use AI To Supercharge Your Agents

As its name implies, generative AI excels at generating text, and there are several ways this can improve your contact center’s performance.

To start, these systems can sometimes answer simple questions directly, which reduces the demands on your team. Even when that’s not the case, however, they can help agents draft replies, or clean up already-drafted replies to correct errors in spelling and grammar. This, too, reduces their stress, but it also contributes to customers having a smooth, consistent, high-quality experience.

Tip #4: Use AI to Power Your Workflows

A related (but distinct) point concerns how AI can be used to structure the broader work your agents are engaged in.

Let’s illustrate using sentiment analysis, which makes it possible to assess the emotional state of a person doing something like filing a complaint. This can form part of a pipeline that sorts and routes tickets based on their priority, and it can also detect when an issue needs to be escalated to a skilled human professional.
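As a rough illustration, here’s what sentiment-based routing might look like. The keyword scorer below is a toy stand-in for a real sentiment model, and the thresholds are made up:

```python
# Toy sentiment scorer standing in for a real model; the keyword list
# and thresholds are purely illustrative.
NEGATIVE = {"angry", "broken", "terrible", "refund", "worst"}

def sentiment_score(text: str) -> float:
    """Crude score in [-1, 0]: negative words push the score down."""
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!?") in NEGATIVE)
    return -min(1.0, hits / 3)

def route(ticket: str) -> str:
    """Route by sentiment: very negative tickets escalate to a human."""
    score = sentiment_score(ticket)
    if score <= -0.6:
        return "escalate_to_agent"
    elif score < 0:
        return "priority_queue"
    return "standard_queue"

print(route("This is the worst service, I am angry and want a refund!"))
# escalate_to_agent
```

In a production pipeline, the scorer would be a trained model and the routing targets would be your actual queues, but the shape of the logic is the same.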

Tip #5: Train Your Agents to Use AI Effectively

It’s easy to get excited about what AI can do to increase your efficiency, but you mustn’t lose sight of the fact that it’s a complex tool your team needs to be trained to use. Otherwise, it’s just going to be one more source of stress.

You need to have policies around the situations in which it’s appropriate to use AI and the situations in which it’s not. These policies should address how agents should deal with phenomena like “hallucination,” in which a language model will fabricate information.

They should also contain procedures for monitoring the performance of the model over time. Because these models are stochastic, they can generate surprising output, and their behavior can change.

You need to know what your model is doing so you can intervene appropriately.

Wrapping Up

Hopefully, you’re more optimistic about what AI can do for your contact center, and this has helped you understand how to make the most out of it.

If there’s anything else you’d like to go over, you’re always welcome to request a demo of the Quiq platform. Since we focus on contact centers, we take customer service pretty seriously ourselves, and we’d love to give you the context you need to make the best possible decision!

Leveraging Agent Insights to Boost Efficiency and Performance

In the ever-evolving customer service landscape, the role of contact center agents cannot be overstated. As the frontline representatives of a company, their performance directly impacts the quality of customer experience, influencing customer loyalty and brand reputation.

However, the traditional approach to managing agent performance – relying on periodic reviews and supervisor observations – has given way to a more sophisticated, data-driven strategy. For this reason, it is becoming ever more important to manage agent performance with a method that leverages the rich data generated by agent interactions to enhance service delivery, agent satisfaction, and operational efficiency.

This article delves into this approach. We’ll begin by examining its benefits from three critical perspectives – the customer, the agent, and the contact center manager – before turning to a more granular breakdown of how you can leverage it in your contact center.

Why is it Important to Manage Agent Performance with Insights?

Let’s start by justifying this project. While very few people today would doubt the need to track some data related to what agents are doing all day, it’s still worth saying a few words about why it really is a crucial part of running a contact center.

To do this, we’ll focus on how three groups are impacted when agent performance is managed through insights: customers, the agents themselves, and contact center managers.

It’s Good for the Customers

The primary beneficiary of improved agent performance is the customer. By analyzing agent metrics, contact centers can tailor their service strategies to better meet customer needs and preferences. This data-driven approach allows for identifying common issues, customer pain points, and trends in customer behavior, enabling more personalized and effective interactions.

As agents become more adept at addressing customer needs swiftly and accurately, customer satisfaction levels rise. This enhances the individual customer’s experience and boosts the overall perception of the brand, fostering loyalty and encouraging positive word-of-mouth.

It’s Good for the Agents

Agents stand to gain immensely from a management strategy focused on data-driven insights. Firstly, performance feedback based on concrete metrics rather than subjective assessments leads to a fairer, more transparent work environment.

Agents receive specific, actionable feedback that helps them understand their strengths and which areas need improvement. This can be incredibly motivating and can drive them to begin proactively bolstering their skills.

Furthermore, insights from performance data can inform targeted training and development programs. For instance, if data indicates that an agent excels in handling certain inquiries but struggles with others, their manager can provide personalized training to bridge this gap. This helps agents grow professionally and increases their job satisfaction as they become more competent and confident in their roles.

It’s Good for Contact Center Managers

For those in charge of overseeing contact centers, managing agents through insights into their performance offers a powerful tool for cultivating operational excellence. It enables a more strategic approach to workforce management, where decisions are informed by data rather than gut feeling.

Managers can identify high performers and understand the behaviors that lead to success, allowing them to replicate these practices across the team. Intriguingly, this same mechanism is also at play in the efficiency boost seen by contact centers that adopt generative AI. When such centers train a model on the interactions of their best agents, the knowledge in those agents’ heads can be incorporated into the algorithm and utilized by much more junior agents.

The insights-driven approach also aids in resource allocation. By understanding the strengths and weaknesses of their team, managers can assign agents to the tasks they are most suited for, optimizing the center’s overall performance.

Additionally, insights into agent performance can highlight systemic issues or training gaps, providing managers with the opportunity to make structural changes that enhance efficiency and effectiveness.

Moreover, using agent insights for performance management supports a culture of continuous improvement. It encourages a feedback loop where agents are continually assessed, supported, and developed, driving the entire team towards higher performance standards. This improves the customer experience and contributes to a positive working environment where agents feel valued and empowered.

In summary, managing performance by tracking agent metrics is a holistic strategy that enhances the customer experience, supports agent development, and empowers managers to make informed decisions.

It fosters a culture of transparency, accountability, and continuous improvement, leading to operational excellence and elevated service standards in the contact center.

How to Use Agent Insights to Manage Performance

Now that we know what all the fuss is about, let’s turn to addressing our main question: how to use agent insights to correct, fine-tune, and optimize agent performance. This discussion will center specifically around Quiq’s Agent Insights tool, which is a best-in-class analytics offering that makes it easy to figure out what your agents are doing, where they could improve, and how that ultimately impacts the customers you serve.

Managing Agent Availability

To begin with, you need a way of understanding when your agents are free to handle an issue and when they’re preoccupied with other work. The three basic statuses an agent can have are “available,” “current conversations” (i.e. only working on the current batch of conversations), and “unavailable.” All three of these can be seen through Agent Insights, which allows you to select from over 50 different metrics, customizing and saving different views as you see fit.

The underlying metrics that go into understanding this dimension of agent performance are, of course, time-based. In essence, you want to evaluate the ratios between four quantities: the time the agent is available, the time the agent is online, the time the agent spends in a conversation, and the time an agent is unavailable.

As you’re no doubt aware, you don’t necessarily want to maximize the amount of time an agent spends in conversations, as this can quickly lead to burnout. Rather, you want to use these insights into agent performance to strike the best, most productive balance possible.

Managing Agent Workload

A related phenomenon you want to understand is the kind of workload your agents are operating under. The five metrics that underpin this are:

  1. Availability
  2. Number of completions per hour your agents are managing
  3. Overall utilization (i.e. the percentage of an agent’s available conversation limit they have filled in a given period)
  4. Average workload
  5. The amount of time agents spend waiting for a customer response

All of this can be seen in Agent Insights. This view allows you to do many things to home in on specific parts of your operation. You can sort by the amount of time your agents spend waiting for a reply from a customer, for example, or segment agents by attributes like role. If you’re seeing high waiting and low utilization, that means you are overstaffed and should probably have fewer agents.

If you’re seeing high waiting and high utilization, by contrast, you should make sure your agents know inactive conversations should be marked appropriately.
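These two rules of thumb can be sketched in a few lines of code. The thresholds below are purely illustrative; you’d tune them against your own data:

```python
def staffing_signal(avg_wait_minutes: float, utilization: float) -> str:
    """Map waiting time and utilization to a staffing signal.

    Thresholds are illustrative only -- calibrate them against your
    own contact center's historical data.
    """
    high_wait = avg_wait_minutes > 5.0
    high_util = utilization > 0.85  # fraction of conversation limit filled

    if high_wait and not high_util:
        return "likely overstaffed"
    if high_wait and high_util:
        return "check that inactive conversations are being marked"
    return "balance looks reasonable"

print(staffing_signal(avg_wait_minutes=8.0, utilization=0.40))
# likely overstaffed
```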

As with the previous section, you’re not necessarily looking to minimize availability or maximize completions per hour. You want to make sure that agents are working at a comfortable pace, and that they have time between issues to reflect on how they’re doing and think about whether they want to change anything in their approach.

But with proper data-driven insights, you can do much more to ensure your agents have the space they need to function optimally.

Managing Agent Efficiency

Speaking of functioning optimally, the last thing we want to examine is agent efficiency. By using Agent Insights, we can answer questions such as how well new agents are adjusting to their roles, how well your teams are working together, and how you can boost each agent’s output (without working them too hard).

The field of contact center analytics is large, but in the context of agent efficiency, you’ll want to examine metrics like completion rate, completions per hour, reopen rate, missed response rate, missed invitation rate, and any feedback customers have left after interacting with your agents.

This will give you an unprecedented peek into the moment-by-moment actions agents are taking, and furnish you with the hard facts you need to help them streamline their procedures. Imagine, for example, you’re seeing a lot of keyboard usage. This means the agent is probably not operating as efficiently as they could be, and you might be able to boost their numbers by training them to utilize Quiq’s Snippets tool.

Or, perhaps you’re seeing a remarkably high rate of clipboard usage. In that case, you’d want to look over the clipboard messages your agents are using and consider turning them into snippets, where they’d be available to everyone.
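Several of the efficiency rates mentioned above are simple ratios over your interaction log. Here’s a toy sketch, with illustrative field names:

```python
# Hypothetical interaction log; field names are illustrative, not from
# any specific platform's reporting schema.
conversations = [
    {"completed": True,  "reopened": False, "missed_response": False},
    {"completed": True,  "reopened": True,  "missed_response": False},
    {"completed": False, "reopened": False, "missed_response": True},
    {"completed": True,  "reopened": False, "missed_response": False},
]

total = len(conversations)
completion_rate = sum(c["completed"] for c in conversations) / total
reopen_rate = sum(c["reopened"] for c in conversations) / total
missed_response_rate = sum(c["missed_response"] for c in conversations) / total

print(f"completion {completion_rate:.0%}, reopen {reopen_rate:.0%}, "
      f"missed response {missed_response_rate:.0%}")
```

An analytics tool computes these same ratios for you, segmented by agent and time period, which is what makes the comparisons actionable.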

The Modern Approach to Managing Agents

Embracing agent insights for performance management marks a transformative step towards achieving operational excellence in contact centers. This data-driven approach not only elevates the customer service experience but also fosters a culture of continuous improvement and empowerment among agents.

By leveraging tools like Quiq’s Agent Insights, managers can unlock a comprehensive understanding of agent availability, workload, and efficiency, enabling informed decisions that benefit both the customer and the service team.

If you’re intrigued by the possibilities, contact us to schedule a demo today!

Request A Demo

6 Questions to Ask Generative AI Vendors You’re Evaluating

With all the power exhibited by today’s large language models, many businesses are scrambling to leverage them in their offerings. Enterprises in a wide variety of domains – from contact centers to teams focused on writing custom software – are adding AI-backed functionality to make their users more productive and the customer experience better.

But, in the rush to avoid being the only organization not using the hot new technology, it’s easy to overlook certain basic sanity checks you must perform when choosing a vendor. Today, we’re going to fix that. This piece will focus on several of the broad categories of questions you should be asking potential generative AI providers as you evaluate all your options.

This knowledge will give you the best chance of finding a vendor that meets your requirements, will help you with integration, and will ultimately allow you to better serve your customers.

These Are the Questions You Should Ask Your Generative AI Vendor

Training large language models is difficult. Besides the fact that it requires an incredible amount of computing power, it involves hundreds of small engineering optimizations along the way. This is part of the reason why language model vendors differ so much from one another.

Some have a longer context window, others write better code but struggle with subtle language-based tasks, etc. All of this needs to be factored into your final decision because it will impact how well your vendor performs for your particular use case.

In the sections that follow, we’ll walk you through some of the questions you should raise with each vendor. Most of these questions are designed to help you get a handle on how easy a given offering will be to use in your situation, and what integrating it will look like.

1. What Sort of Customer Service Do You Offer?

We’re contact center and customer support people, so we understand better than anyone how important it is to make sure users know what our product is, what it can do, and how we can help them if they run into issues.

As you speak with different generative AI vendors, you’ll want to probe them about their own customer support, and what steps they’ll take to help you utilize their platform effectively.

For this, just start with the basics by figuring out the availability of their support teams – what hours they operate in, whether they can accommodate teams working in multiple time zones, and whether there is an option for 24/7 support if a critical problem arises.

Then, you can begin drilling into specifics. One thing you’ll want to know about is the channels their support team operates through. They might set up a private Slack channel with you so you can access their engineers directly, for example, or they might prefer to work through email, a ticketing system, or a chat interface. When you’re discussing this topic, try to find out whether you’ll have a dedicated account manager to work with.

You’ll also want some context on the issue resolution process. If you have a lingering problem that’s not being resolved, how do you go about escalating it, and what’s the team’s response time for issues in general?

Finally, it’s important that the vendors have some kind of feedback mechanism. Just as you no doubt have a way for clients to let you know if they’re dissatisfied with an agent or a process, the vendor you choose should offer a way for you to let them know how they’re doing so they can improve. This not only tells you they care about getting better; it also indicates that they have a way of figuring out how to do so.

2. Does Your Team Offer Help with Setting up the Platform?

A related subject is the extent to which a given generative AI vendor will help you set up their platform in your environment. A good way to begin is by asking what kinds of training materials and resources they offer.

Many vendors are promoting their platforms by putting out a ton of educational content, all of which your internal engineers can use to get up to speed on what those platforms can do and how they function.

This is the kind of thing that is easy to overlook, but you should pay careful attention to it. Choosing a generative AI vendor that has excellent documentation, plenty of worked-out examples, etc. could end up saving you a tremendous amount of time, energy, and money down the line.

Then, you can get clarity on whether the vendor has a dedicated team devoted to helping customers like you get set up. These roles are usually found under titles like “solutions architect”, so be sure to ask whether you’ll be assigned such a person and the extent to which you can expect their help. Some platforms will go to the moon and back to make sure you have everything you need, while others will simply advise you if you get stuck somewhere.

Which path makes the most sense depends on your circumstances. If you have a lot of engineers you may not need more than a little advice here and there, but if you don’t, you’ll likely need more handholding (but will probably also have to pay extra for that). Keep all this in mind as you’re deciding.

3. What Kinds of Integrations Do You Support?

Now, it’s time to get into more technical details about the integrations they support. When you buy a subscription to a generative AI vendor, you are effectively buying a set of capabilities. But those capabilities are much more valuable if you know they’ll plug in seamlessly with your existing software, and they’re even more valuable if you know they’ll plug into software you plan on building later on. You’ve probably been working on a roadmap, and now’s the time to get it out.

It’s worth checking to see whether the vendor can support many different kinds of language models. This involves a nuance in what the word “vendor” means, so let’s unpack it a little bit. Some generative AI vendors are offering you a model, so they’re probably not going to support another company’s model.

OpenAI and Anthropic are examples of model vendors, so if you work with them you’re buying their model and will not be able to easily incorporate someone else’s model.

Other vendors, by contrast, are offering you a service, and in many cases that service could theoretically be powered by many different models.

Quiq’s Conversational CX Platform, for example, supports OpenAI’s GPT models, and we have plans to expand the scope of our integrations to encompass even more models in the future.

Another thing you’re going to want to check on is whether the vendor makes it easy to integrate vector databases into your workflow. Vector embeddings are numerical representations that are remarkably good at capturing subtle relationships in large datasets; they’re becoming an ever-more-important part of machine learning, as evinced by the fact that there are now a multitude of different vector databases on offer.

The chances are pretty good that you’ll eventually want to leverage a vector database to store or search over customer interactions, and you’ll want a vendor that makes this easy.
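To make this concrete, here's a minimal sketch of the idea behind vector search over customer interactions: each text is mapped to a vector, and queries are matched by similarity. The `embed` function below is a toy stand-in for a real embedding model, and the in-memory list stands in for a real vector database; both are purely illustrative.

```python
import math

def embed(text):
    # Toy stand-in for a real embedding model: map a string to a small
    # vector of character-class frequencies. A production system would
    # call an embedding model or API instead.
    vec = [0.0, 0.0, 0.0]
    for ch in text.lower():
        if ch.isalpha():
            vec[0] += 1
        elif ch.isdigit():
            vec[1] += 1
        elif ch.isspace():
            vec[2] += 1
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    # Both vectors are unit-normalized, so the dot product is the
    # cosine similarity.
    return sum(x * y for x, y in zip(a, b))

# "Index" past customer interactions as (text, vector) pairs.
interactions = [
    "my order 12345 never arrived",
    "how do I reset my password",
    "the checkout page keeps crashing",
]
index = [(text, embed(text)) for text in interactions]

def search(query, k=1):
    # Return the k stored interactions most similar to the query.
    scored = [(cosine(embed(query), vec), text) for text, vec in index]
    scored.sort(reverse=True)
    return [text for _, text in scored[:k]]
```

A real vector database replaces the linear scan with an approximate nearest-neighbor index, but the embed-then-compare shape of the workflow is the same.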

Finally, see if the vendor has any case studies you can look at. Quiq has published a case study on how our language services were utilized by LOOP, a car insurance company, to make a far superior customer-service chatbot. The result was that customers were able to get much more personalization in their answers and were able to resolve their problems fully half of the time, without help. This led to a corresponding 55% reduction in tickets, and a customer satisfaction rating of 75% (!) when interacting with the Quiq-powered AI assistant.

See if the vendors you’re looking at have anything similar you can examine. This is especially helpful if the case studies are focused on companies that are similar to yours.

4. How Do Prompt Engineering and Fine-Tuning Work for Your Model?

For many tasks, large language models work perfectly fine on their own, without much special effort. But there are two methods you should know about to really get the most out of them: prompt engineering and fine-tuning.

As you know, prompts are the basic method for interacting with language models. You’ll give a model a prompt like “What is generative AI?”, and it’ll generate a response. Well, it turns out that models are really sensitive to the wording and structure of prompts, and prompt engineers are those who explore the best way to craft prompts to get useful output from a model.

It’s worth asking potential vendors about this because they handle prompts differently. Quiq’s AI Studio encourages atomic prompting, where a single prompt has a clear purpose and intended completion, and we support running prompts in parallel and sequentially. You can’t assume everyone will do this, however, so be sure to check.

Then, there’s fine-tuning, which refers to training a model on a bespoke dataset such that its output is heavily geared towards the patterns found in that dataset. It’s becoming more common to fine-tune a foundational model for specific use cases, especially when those use cases involve a lot of specialized vocabulary such as is found in medicine or law.

Setting up a fine-tuning pipeline can be cumbersome or relatively straightforward depending on the vendor, so see what each vendor offers in this regard. It’s also worth asking whether they offer technical support for this aspect of working with the models.
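To give a feel for what a fine-tuning pipeline consumes, here's a sketch of one common data layout: one JSON record per line, each pairing a prompt with the desired completion. The exact schema varies by vendor and model, so treat this as illustrative rather than any particular vendor's format.

```python
import json

# Example fine-tuning records for a customer-service use case.
examples = [
    {"prompt": "What is your return window?",
     "completion": "Items can be returned within 30 days of delivery."},
    {"prompt": "Do you ship internationally?",
     "completion": "Yes, we ship to over 40 countries."},
]

def to_jsonl(records):
    # Serialize records in the JSON-lines layout many pipelines expect.
    return "\n".join(json.dumps(r) for r in records)

jsonl = to_jsonl(examples)
```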

5. Can Your Models Support Reasoning and Acting?

One of the current frontiers in generative AI is building more robust, “agentic” models that can execute strings of tasks on their way to completing a broader goal. This goes by a few different names, but one that has been popping up in the research literature is “ReAct”, which stands for “reasoning and acting”.

You can get ReAct functionality out of existing language models through chain-of-thought prompting, or by using systems like AutoGPT; to help you concretize this a bit, let’s walk through how ReAct works in Quiq.

With Quiq’s AI Studio, a conversational data model is used to classify and store both custom and standard data elements, and these data elements can be set within and across “user turns”. A single user turn is the time between when a user offers an input to the time at which the AI responds and waits for the next user input.

Our AI can set and reason about the state of the data model, applying rules to take the next best action. Common actions include things like fetching data, running another prompt, delivering a message, or offering to escalate to a human.
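A hypothetical sketch of such a reason-and-act loop might look like the following; the state fields, rules, and action names are invented for illustration and are not Quiq's actual data model.

```python
def next_action(state):
    # Apply simple rules to the conversational data model to pick the
    # next best action.
    if state.get("needs_escalation"):
        return "escalate_to_human"
    if "order_id" in state and "order_status" not in state:
        return "fetch_order_status"
    if "order_status" in state:
        return "deliver_message"
    return "ask_clarifying_question"

def act(state, action):
    # Execute non-terminal actions, updating the data model.
    if action == "fetch_order_status":
        state["order_status"] = "shipped"  # stand-in for a real lookup
    return state

# One simulated user turn: reason, act, repeat until we can respond.
state = {"order_id": "A-1001"}
trace = []
while True:
    action = next_action(state)
    trace.append(action)
    if action in ("deliver_message", "escalate_to_human",
                  "ask_clarifying_question"):
        break
    state = act(state, action)
```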

Though these efforts are still early, this is absolutely the direction the field is taking. If you want to be prepared for what’s coming without the need to overhaul your generative AI stack later on, ask about how different vendors support ReAct.

6. What’s your Pricing Structure Like?

Finally, you’ll need to talk to vendors about how their prices work, including any available details on licensing types, subscriptions, and costs associated with the integration, use, and maintenance of their solution.

To take one example, Quiq’s licensing is based on usage. We establish a usage pool wherein our customers pre-pay Quiq for a 12-month contract; then, as the customer uses our software money is deducted from that pool. We also have an annual AI Assistant Maintenance fee along with a one-time implementation fee.

Vendors can vary considerably in how their prices work, so if you don’t want to overpay then make sure you have a clear understanding of their approach.

Picking the Right Generative AI Vendor

Language models and related technologies are taking the world by storm, transforming many industries, including customer service and contact center management.

Making use of these systems means choosing a good vendor, and that requires you to understand each vendor’s model, how those models integrate with other tools, and what you’re ultimately going to end up paying.

If you want to see how Quiq stacks up and what we can do for you, schedule a demo with us today!

Request A Demo

Your Guide to Trust and Transparency in the Age of AI

Over the last few years, AI has really come into its own. ChatGPT and similar large language models have made serious inroads in a wide variety of natural language tasks, generative approaches have been tested in domains like music and protein discovery, researchers have leveraged techniques like chain-of-thought prompting to extend the capabilities of the underlying models, and much else besides.

People working in domains like customer service, content marketing, and software engineering are mostly convinced that this technology will significantly impact their day-to-day lives, but many questions remain.

Given the fact that these models are enormous artifacts whose inner workings are poorly understood, one of the main questions centers around trust and transparency. In this article, we’re going to address these questions head-on. We’ll discuss why transparency is important when advanced algorithms are making ever more impactful decisions, and turn our attention to how you can build a more transparent business.

Why is Transparency Important?

First, let’s take a look at why transparency is important in the first place. The next few sections will focus on the trust issues that stem from AI becoming a ubiquitous technology that few understand at a deep level.

AI is Becoming More Integrated

AI has been advancing steadily for decades, and this has led to a concomitant increase in its use. It’s now commonplace for us to pick entertainment based on algorithmic recommendations, for our loan and job applications to pass through AI filters, and for more and more professionals to turn to ChatGPT before Google when trying to answer a question.

We personally know of multiple software engineers who claim to feel as though they’re at a significant disadvantage if their preferred LLM is offline for even a few hours.

Even if you knew nothing about AI except the fact that it seems to be everywhere now, that should be sufficient incentive to want more context on how it makes decisions and how those decisions are impacting the world.

AI is Poorly Understood

But, it turns out there is another compelling reason to care about transparency in AI: almost no one knows how LLMs and neural networks more broadly can do what they do.

To be sure, very simple techniques like decision trees and linear regression models pose little analytical challenge, and we’ve written a great deal over the past year about how language models are trained. But if you were to ask for a detailed account of how ChatGPT manages to create a poem with a consistent rhyme scheme, we couldn’t tell you.

And – as far as we know – neither could anyone else.

This is troubling; as we noted above, AI has become steadily more integrated into our private and public lives, and that trend will surely only accelerate now that we’ve seen what generative AI can do. But if we don’t have a granular understanding of the inner workings of advanced machine-learning systems, how can we hope to train bias out of them, double-check their decisions, or fine-tune them to behave productively and safely?

These precise concerns are what have given rise to the field of explainable AI. Mathematical techniques like LIME and SHAP can offer some intuition for why a given algorithm generated the output it did, but they accomplish this by crudely approximating the algorithm instead of actually explaining it. Mechanistic interpretability is the only approach we know of that confronts the task directly, but it has only just gotten started.

This leaves us in the discomfiting situation of relying on technologies that almost no one groks deeply, including the people creating them.

People Have Many Questions About AI

Finally, people have a lot of questions about AI, where it’s heading, and what its ultimate consequences will be. These questions can be laid out on a spectrum, with one end corresponding to relatively prosaic concerns about technological unemployment and deepfakes influencing elections, and the other corresponding to more exotic fears around superintelligent agents actively fighting with human beings for control of the planet’s future.

Obviously, we’re not going to sort all this out today. But as a contact center manager who cares about building trust and transparency, it would behoove you to understand something about these questions and have at least cursory answers prepared for them.

How do I Increase Transparency and Trust when Using AI Systems?

Now that you know why you should take trust and transparency around AI seriously, let’s talk about ways you can foster these traits in your contact center. The following sections will offer advice on crafting policies around AI use, communicating the role AI will play in your contact center, and more.

Get Clear on How You’ll Use AI

The journey to transparency begins with having a clear idea of what you’ll be using AI to accomplish. This will look different for different kinds of organizations – a contact center, for example, will probably want to use generative AI to answer questions and boost the efficiency of its agents, while a hotel might instead attempt to automate the check-in process with an AI assistant.

Each use case has different requirements and different approaches that are better suited for addressing it; crafting an AI strategy in advance will go a long way toward helping you figure out how you should allocate resources and prioritize different tasks.

Once you do that, you should then create documentation and a communication policy to support this effort. The documentation will make sure that current and future agents know how to use the AI platform you decide to work with, and it should address the strengths and weaknesses of AI, as well as information about when its answers should be fact-checked. It should also be kept up-to-date, reflecting any changes you make along the way.

The communication policy will help you know what to say if someone (like a customer) asks you what role AI plays in your organization.

Know Your Data

Another important thing you should keep in mind is what kind of data your model has been trained on, and how it was gathered. Remember that LLMs consume huge amounts of textual data and then learn patterns in that data they can use to create their responses. If that data contains biased information – if it tends to describe women as “nurses” and men as “doctors”, for example – that will likely end up being reflected in its final output. Reinforcement learning from human feedback and other approaches to fine-tuning can go part of the way to addressing this problem, but the best thing to do is ensure that the training data has been curated to reflect reality, not stereotypes.
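As a toy illustration of what a bias audit might look for, the sketch below counts how often role words co-occur with gendered words in a corpus. Real audits use far more sophisticated methods, but the underlying question (which associations is the data teaching the model?) is the same; all word lists here are invented for the example.

```python
# Map gendered words to a gender tag, and define the role words
# we want to audit for skewed associations.
GENDERED = {"she": "f", "her": "f", "he": "m", "his": "m"}
ROLES = {"nurse", "doctor"}

def cooccurrence_counts(sentences):
    # Count (role, gender) co-occurrences within each sentence.
    counts = {("nurse", "f"): 0, ("nurse", "m"): 0,
              ("doctor", "f"): 0, ("doctor", "m"): 0}
    for s in sentences:
        words = s.lower().replace(".", "").split()
        genders = {GENDERED[w] for w in words if w in GENDERED}
        for role in ROLES & set(words):
            for g in genders:
                counts[(role, g)] += 1
    return counts

corpus = [
    "She worked as a nurse.",
    "He is a doctor.",
    "The nurse said she was tired.",
]
counts = cooccurrence_counts(corpus)
```

A heavily skewed count table in a large corpus is one early warning sign that the resulting model may reproduce the same stereotype.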

For similar reasons, it’s worth knowing where the data came from. Many LLMs are trained somewhat indiscriminately, and might have even been shown corpora of pirated books or other material protected by copyright. This has only recently come to the forefront of the discussion, and OpenAI is currently being sued by several different groups for copyright infringement.

If AI ends up being an important part of the way your organization functions, the chances are good that, eventually, someone will want answers about data provenance.

Monitor Your AI Systems Continuously

Even if you take all the precautions described above, however, there is simply no substitute for creating a robust monitoring platform for your AI systems. LLMs are stochastic systems, meaning that it’s usually difficult to know for sure how they’ll respond to a given prompt. And since these models are prone to fabricating information, you’ll need to have humans at various points making sure the output is accurate and helpful.

What’s more, many machine learning algorithms are known to be affected by a phenomenon known as “model degradation”, in which their performance steadily declines over time. The only way you can check to see if this is occurring is to have a process in place to benchmark the quality of your AI’s responses.

Be Familiar with Standards and Regulations

Finally, it’s always helpful to know a little bit about the various rules and regulations that could impact the way you use AI. These tend to focus on what kind of data you can gather about clients, how you can use it, and in what form you have to disclose these facts.

The following list is not comprehensive, but it does cover some of the more important laws:

  • The General Data Protection Regulation (GDPR) is a comprehensive regulatory framework established by the European Union to dictate data handling practices. It is applicable not only to businesses based in Europe but also to any entity that processes data from EU citizens.
  • The California Consumer Protection Act (CCPA) was introduced by California to enhance individual control over personal data. It mandates clearer data collection practices by companies, requires privacy disclosures, and allows California residents to opt-out of data collection.
  • SOC 2, developed by the American Institute of Certified Public Accountants (AICPA), focuses on the principles of confidentiality, privacy, and security in the handling and processing of consumer data.
  • In the United Kingdom, contact centers must be aware of the Financial Conduct Authority’s new “Consumer Duty” regulations. These regulations emphasize that firms should act with integrity toward customers, avoid causing them foreseeable harm, and support customers in achieving their financial objectives. As the integration of generative AI into this regulatory landscape is still being explored, it’s an area that stakeholders need to keep an eye on.

Fostering Trust in a Changing World of AI

An important part of utilizing AI effectively is making sure you do so in a way that enhances the customer experience and works to build your brand. There’s no point in rolling out a state-of-the-art generative AI system if it undermines the trust your users have in your company, so be sure to track your data, acquaint yourself with the appropriate laws, and communicate clearly.

Another important step you can take is to work with an AI vendor who enjoys a sterling reputation for excellence and propriety. Quiq is just such a vendor, and our Conversational AI platform can bring AI to your contact center in a way that won’t come back to bite you later. Schedule a demo to see what we can do for you, today!


Moving from Natural Language Understanding (NLU) to Customer-Facing AI Assistants

There can no longer be any doubt that large language models and generative AI more broadly are going to have a real impact on many industries. Though we’re still in the preliminary stages of working out the implications, the evidence so far suggests that this is already happening.

Language models in contact centers are helping more junior workers be more productive, and reducing employee turnover in the process. They're also being used to automate huge swathes of content creation, assisting in data augmentation tasks, and plenty else besides.

Part of the task we’ve set ourselves here at Quiq is explaining how these models are trained and how they’ll make their way into the workflows of the future. To that end, we’ve written extensively about how large language models are trained, how researchers are pushing them into uncharted territories, and which models are appropriate for any given task.

This post is another step in that endeavor. Specifically, we’re going to discuss natural language understanding, how it works, and how it’s distinct from related terms (like “natural language generation”). With that done, we’ll talk about how natural language understanding is a foundational first step and takes us closer to creating robust customer-facing AI assistants.

What is Natural Language Understanding?

Language is a tool of remarkable power and flexibility – so much so that it wouldn’t be much of an exaggeration to say that it’s at the root of everything else the human race has accomplished. From towering works of philosophy to engineering specs to instructions for setting up a remote, language is a force multiplier that makes each of us vastly more effective than we otherwise would be.

Evidence of this claim comes from the fact that, even when we’re alone, many of us think in words or even talk to ourselves as we work through something difficult. Certain kinds of thoughts are all but impossible to have without the scaffolding provided by language.

For all these reasons, creating machines able to parse natural language has long been a goal of AI researchers and computer scientists. The field that has been established to address itself to this task is known as natural language understanding.

There’s a rather deep philosophical here where the word “understanding” is concerned. As the famous story of the Tower of Babel demonstrates, it isn’t enough for the members of a group to be making sounds to accomplish great things, it’s also necessary for the people involved to understand what everyone is saying. This means that when you say a word like “chicken” there’s a response in my nervous system such that the “chicken” concept is activated, along with other contextually relevant knowledge, such as the location of the chicken feed. If you said “курица” (to someone who doesn’t know Russian) or “鸡” (to someone who doesn’t know Mandarin), the same process wouldn’t have occurred, no understanding would’ve happened, and language wouldn’t have helped at all.

Whether and how a machine can understand language in a fully human way is too big a topic to address here, but we can make some broad comments. As is often the case, researchers in the field of natural language understanding have opted to break the problem down into much more tractable units. Two of the biggest such units of natural language understanding are intent recognition (what a sentence is intended to accomplish) and entity recognition (who or what the sentence is referring to).

This should make a certain intuitive sense. Though you may not be consciously going through a mental checklist when someone says something to you, on some level, you’re trying to figure out what their goal is and who or what they’re talking about. The intent behind the sentence “John has an apple”, for example, is to inform you of a fact about the world, and the main entities are “John” and “apple”. If you know John, a little image of him holding an apple would probably pop into your head.

This has many obvious applications to the work done in contact centers. If you’re building an automated ticket classification system, for instance, it would help to be able to tell whether the intent behind the ticket is to file a complaint, reach a representative, or perform a task like resetting a password. It would also help to be able to categorize the entities, like one of a dozen products your center supports, that are being referred to.
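To make the decomposition concrete, here's a toy, rule-based version of intent and entity recognition along the lines just described. Production NLU systems use trained models rather than keyword lists, but the two units are the same; all keywords, labels, and product names here are invented for illustration.

```python
# Keyword lists standing in for a trained intent classifier.
INTENTS = {
    "complaint": ["broken", "unhappy", "terrible"],
    "reset_password": ["password", "reset", "locked"],
    "reach_agent": ["human", "agent", "representative"],
}
# A tiny entity lookup: the products this hypothetical center supports.
PRODUCTS = {"router", "modem", "webcam"}

def classify(message):
    # Normalize, then pull out one intent and any product entities.
    words = set(w.strip(".,?!") for w in message.lower().split())
    intent = "other"
    for name, keywords in INTENTS.items():
        if words & set(keywords):
            intent = name
            break
    entities = sorted(words & PRODUCTS)
    return intent, entities

intent, entities = classify("My router is broken, can I talk to an agent?")
```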

Natural Language Understanding vs. Natural Language Processing

Natural language understanding is its own field, and it’s easy to confuse it with other, related fields, like natural language processing.

Most of the sources we consulted consider natural language understanding to be a subdomain of natural language processing (NLP). Whereas the former is concerned with parsing natural language into a format that machines can work with, the latter subsumes this task, along with others like machine translation and natural language generation.

Natural Language Understanding vs. Natural Language Generation

Speaking of natural language generation, many people also confuse natural language understanding and natural language generation. Natural language generation is more or less what it sounds like: using computers to generate human-sounding text or speech.

Natural language understanding can be an important part of getting natural language generation right, but they’re not the same thing.

Customer-Facing AI Assistants

Now that we’ve discussed natural language understanding, let’s talk about how it can be utilized in the attempt to create high-quality customer-facing AI assistants.

How Can Natural Language Understanding Be Used to Make Customer-Facing Assistants?

Natural language understanding refers to a constellation of different approaches to decomposing language into pieces that a machine can work with. This allows an algorithm to discover the intent in a message, tag parts of speech (nouns, verbs, etc.), or pull out the entities referenced.

All of this is an important part of building effective customer-facing AI assistants. At Quiq, we’ve built LLM-powered knowledge assistants able to answer common questions across your reference documentation, data assistants that can use CRM and order management systems to provide actionable insights, and other kinds of conversational AI systems. Though we draw on many technologies and research areas, none of this would be possible without natural language understanding.

What are the Benefits of Customer-Facing AI Assistants?

The reason people have been working so long to create powerful customer-facing AI assistants is that there are so many benefits involved.

At a contact center, agents spend most of their day answering questions, resolving issues, and otherwise making sure a customer base can use a set of product offerings as intended.

As with any job, some of these tasks are higher-value than others. All of the work is important, but there will always be subtle and thorny issues that only a skilled human can work through, while others are quotidian and can be farmed out to a machine.

This is a long way of saying that one of the major benefits of customer-facing AI assistants is that they free up your agents to specialize in handling the most pressing requests, while password resets and similar routine tasks are handled by a capable product like the Quiq platform.

A related benefit is improved customer experience. When agents can focus their efforts they can spend more time with customers who need it. And, when you have properly fine-tuned language models interacting with customers, you’ll know that they’re unfailingly polite and helpful because they’ll never become annoyed after a long shift the way a human being might.

Robust Customer-Facing AI Assistants with Quiq

Just as understanding has been such a crucial part of the success of our species, it’ll be an equally crucial part of the success of advanced AI tooling.

One way you can make use of bleeding-edge natural language understanding techniques is by building your own language models. But this would require you to hire teams of extremely smart engineers, which would be expensive; besides their hefty salaries, you'd also have to budget to keep the fridge stocked with the sugar-free Red Bulls such engineers require to function.

Or, you could utilize the division of labor. Just as contact center agents can outsource certain tasks to machines, so too can you outsource the task of building an AI-based CX platform to Quiq. Set up a demo today to see what our advanced AI technology and team can do for your contact center!


Reinforcement Learning from Human Feedback

ChatGPT – and other large language models like it – are already transforming education, healthcare, software engineering, and the work being done in contact centers.

We’ve written extensively about how self-supervised learning is used to train these models, but one thing we haven’t spent much time on is reinforcement learning from human feedback (RLHF).

Today, we’re rectifying that. We’re going to dive into what reinforcement learning from human feedback is, why it’s important, and how it works.

With that done, you’ll have received a thorough education in this world-changing technology.

What is Reinforcement Learning from Human Feedback?

As you no doubt surmised from its name, reinforcement learning from human feedback involves two components: reinforcement learning and human feedback. Though the technical specifics are (as usual) very involved, the basic idea is simple: you have models produce output, humans rate the output that they prefer (based on its friendliness, completeness, accuracy, etc.), and then the model is updated accordingly.

It’ll help if we begin by talking about what reinforcement learning is. This background will prove useful in understanding the unfolding of the broader process.

What is Reinforcement Learning?

There are four widespread approaches to getting intelligent behavior from machines: supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning.

With supervised learning, you feed a statistical algorithm a bunch of examples of correctly-labeled data in the hope that it will generalize to further examples it hasn’t seen before. Regression and supervised classification models are standard applications of supervised learning.

Unsupervised learning is a similar idea, but you forego the labels. It’s used for certain kinds of clustering tasks, and for applications like dimensionality reduction.

Semi-supervised learning is a combination of these two approaches. Suppose you have a gigantic body of photographs, and you want to develop an automated system to tag them. If some of them are tagged then your system can use those tags to learn a pattern, which can then be applied to the rest of the untagged images.

Finally, there’s reinforcement learning (RL). Reinforcement learning is entirely different. With reinforcement learning, you’re usually setting up an environment (like a video game), and putting an agent in the environment with a reward structure that tells it which actions are good and which are bad. If the agent successfully flies a spaceship through a series of rings, for example, that might be worth +10 points each, completing an entire level might be worth +100, crashing might be worth -1,000, and so on.

The idea is that, over time, the reinforcement learning agent will learn to execute a strategy that maximizes its long-term reward. It’ll realize that rings are worth a few points and so it should fly through them, it’ll learn that it should try to complete a level because that’s a huge reward bonus, it’ll learn that crashing is bad, etc.
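The reward structure just described can be expressed as a simple return calculation, where an episode is a sequence of events and the agent's return is the sum of the rewards attached to them; maximizing this quantity over time is exactly what the agent learns to do.

```python
# The reward values from the spaceship example above.
REWARDS = {"ring": 10, "level_complete": 100, "crash": -1000}

def episode_return(events):
    # An episode's return is the sum of the rewards for its events.
    return sum(REWARDS[e] for e in events)

good_run = episode_return(["ring", "ring", "ring", "level_complete"])
bad_run = episode_return(["ring", "crash"])
```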

Reinforcement learning is far more powerful than other kinds of machine learning; when done correctly, it can lead to agents able to play the stock market, run procedures in a factory, and do a staggering variety of other tasks.

What are the Steps of Reinforcement Learning from Human Feedback?

Now that we know a little bit about reinforcement learning, let’s turn to a discussion of reinforcement learning from human feedback.

As we just described, reinforcement learning agents have to be trained like any other machine learning system. Under normal circumstances, this doesn’t involve any human feedback. A programmer will update the code, environment, or reward structure between training runs, but they don’t usually provide feedback directly to the agent.

Except, that is, in the case of reinforcement learning from human feedback, in which case that’s exactly what happens. A model will produce a set of outputs, and humans will rank them. Over time the model will adjust to making more and more appropriate responses, as judged by the human raters providing them with feedback.
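Here's a minimal sketch of the aggregation step: raters compare pairs of model outputs, and their preferences are tallied into a score per output. Real RLHF pipelines fit a learned reward model to these comparisons rather than counting wins, so treat this as a simplified illustration with invented output names.

```python
from collections import Counter

def preference_scores(comparisons):
    # Each comparison is (winner, loser) as chosen by a human rater;
    # score each output by wins minus losses.
    wins = Counter(w for w, _ in comparisons)
    losses = Counter(l for _, l in comparisons)
    outputs = set(wins) | set(losses)
    return {o: wins[o] - losses[o] for o in outputs}

comparisons = [
    ("polite_reply", "curt_reply"),
    ("polite_reply", "rambling_reply"),
    ("rambling_reply", "curt_reply"),
]
scores = preference_scores(comparisons)
best = max(scores, key=scores.get)
```

The model is then updated to make outputs like `best` more likely, which is how friendliness and helpfulness get baked in over many rounds of feedback.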

Sometimes, this feedback can be for something relatively prosaic. It’s been used, for example, to get RL agents to execute backflips in simulated environments. The raters will look at short videos of two movements and select the one that looks like it’s getting closer to a backflip; with enough time, this gets the agent to actually do one.

Or, it can be used for something more nuanced, such as getting a large language model to produce more conversational dialogue. This is part of how ChatGPT was trained.

Why is Reinforcement Learning from Human Feedback Necessary?

ChatGPT is already being used to great effect in contact centers and the customer service arena more broadly. Here are some example applications:

  • Question answering: ChatGPT is exceptionally good at answering questions. What’s more, some companies have begun fine-tuning it on their own internal and external documentation, so that people can directly ask it questions about how a product works or how to solve an issue. This obviates the need to go hunting around inside the docs.
  • Summarization: Similarly, ChatGPT can be used to summarize video transcripts, email threads, and lengthy articles so that agents (or customers) can get through the material at a much greater clip. This can, for example, help agents stay abreast of what’s going on in other parts of the company without burdening them with the need to read constantly. Quiq has custom-built tools for performing exactly this function.
  • Onboarding new hires: Together, question-answering and summarization are helping new contact center agents get up to speed much more quickly when they start their jobs.
  • Sentiment analysis: Sentiment analysis refers to classifying a text according to its sentiment, i.e. whether it's "positive", "negative", or "neutral". Sentiment analysis comes in several different flavors, including granular and aspect-based, and ChatGPT can help with all of them. Being able to automatically tag a customer issue comes in handy when you're trying to sort and prioritize them.
  • Real-time language translation: If your product or service has an international audience, then you might need to avail yourself of translation services so that agents and customers are speaking the same language. There are many such services available, but ChatGPT has proven to be at least as good as almost all of them.

In aggregate, these and other use cases of large language models are making contact center agents much more productive. But contact center agents have to interact with customers in a certain way – they have to be polite, helpful, etc.

And out of the box, most large language models do not behave that way. We've already had several high-profile incidents in which a language model asked a reporter to end his marriage, for example, or falsely accused a law school professor of sexual harassment.

Reinforcement learning from human feedback is currently the most promising approach for tuning this toxic and harmful behavior out of large language models. The only reason they’re able to help contact center agents so much is that they’ve been fine-tuned with such an approach; otherwise, agents would be spending an inordinate amount of time rephrasing and tinkering with a model’s output to get it to be appropriately friendly.

This is why reinforcement learning from human feedback is important for the managers of contact centers to understand – it’s a major part of why large language models are so useful in the first place.

Applications of Reinforcement Learning from Human Feedback

To round out our picture, we’re going to discuss a few ways in which reinforcement learning from human feedback is actually used in the wild. We’ve already discussed how it is fine-tuning models to be more helpful in the context of a contact center, and we’ll now talk a bit about how it’s used in gaming and robotics.

Using Reinforcement Learning from Human Feedback in Games

Gaming has long been one of the ideal testing grounds for new approaches to artificial intelligence. As you might expect, it’s also a place where reinforcement learning from human feedback has been successfully applied.

OpenAI used it to achieve superhuman performance on a classic Atari game, Enduro. Enduro is an old-school racing game in which, as in any racing game, the goal is to pass the other cars without hitting them or going out of bounds.

It’s exceptionally difficult for an agent to learn to play Enduro well using only standard reinforcement learning approaches. But when human feedback is added, the results improve dramatically.

Using Reinforcement Learning from Human Feedback in Robotics

Because robotics almost always involves an agent interacting with the physical world, it’s especially well-suited to reinforcement learning from human feedback.

Often, it can be difficult to get a robot to execute a long series of steps that achieves a valuable reward, especially when the intermediate steps aren’t themselves very valuable. What’s more, it can be remarkably hard to build a reward structure that correctly incentivizes the agent to execute the intermediate steps in the right order.

It’s much simpler instead to have humans look at sequences of actions and judge for themselves which will get the agent closer to its ultimate goal.

RLHF For The Contact Center Manager

Having made it this far, you should be in a much better position to understand how reinforcement learning from human feedback works, and how it contributes to the functioning of your contact centers.

If you’ve been thinking about leveraging AI to make yourself or your agents more effective, set up a demo with the Quiq team to see how we can put our cutting-edge models to work for you. We offer both customer-facing and agent-facing tools, all of them designed to help you make customers happier while reducing agent burnout and turnover.

Request A Demo

What are the Biggest Questions About AI?

The term “artificial intelligence” was coined at the famous Dartmouth Conference in 1956, put on by luminaries like John McCarthy, Marvin Minsky, and Claude Shannon, among others.

These organizers wanted to create machines that “use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.” They went on to claim that “…a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.”

More than half a century later, it’s fair to say that this has not come to pass; brilliant as they were, McCarthy et al. seem to have underestimated how difficult it would be to scale the heights of the human intellect.

Nevertheless, remarkable advances have been made over the past decade, so much so that they’ve ignited a firestorm of controversy around this technology. People are questioning the ways in which it can be used negatively, and whether it might ultimately pose an extinction risk to humanity; they’re probing fundamental issues around whether machines can be conscious, exercise free will, and think in the way a living organism does; they’re rethinking the basis of intelligence, concept formation, and what it means to be human.

These are deep waters to be sure, and we’re not going to swim them all today. But as contact center managers and others begin the process of thinking about using AI, it’s worth being at least aware of what this broader conversation is about. It will likely come up in meetings, in the press, or in Slack channels in exchanges between employees.

And that’s the subject of our piece today. We’re going to start by asking what artificial intelligence is and how it’s being used, before turning to address some of the concerns about its long-term potential. Our goal is not to answer all these concerns, but to make you aware of what people are thinking and saying.

What is Artificial Intelligence?

Artificial intelligence is famous for having had many, many definitions. There are those, for example, who believe that in order to be intelligent computers must think like humans, and those who reply that we didn’t make airplanes by designing them to fly like birds.

For our part, we prefer to sidestep the question somewhat by utilizing the approach taken in one of the leading textbooks in the field, Stuart Russell and Peter Norvig’s “Artificial Intelligence: A Modern Approach”.

They propose a multi-part system for thinking about different approaches to AI. One set of approaches is human-centric and focuses on designing machines that either think like humans – i.e., engage in analogous cognitive and perceptual processes – or act like humans – i.e. by behaving in a way that’s indistinguishable from a human, regardless of what’s happening under the hood (think: the Turing Test).

The other set of approaches is ideal-centric and focuses on designing machines that either think in a totally rational way – conformant with the rules of Bayesian epistemology, for example – or behave in a totally rational way – utilizing logic and probability, but also acting instinctively to remove itself from danger, without going through any lengthy calculations.

What we have here, in other words, is a framework. Using the framework not only gives us a way to think about almost every AI project in existence, it also saves us from needing to spend all weekend coming up with a clever new definition of AI.

Joking aside, we think this is a productive lens through which to view the whole debate, and we offer it here for your information.

What is Artificial Intelligence Good For?

Given all the hype around ChatGPT, this might seem like a quaint question. But not that long ago, many people were asking it in earnest. The basic insights upon which large language models like ChatGPT are built go back to the 1960s, but it wasn’t until 1) vast quantities of data became available, and 2) compute cycles became extremely cheap that much of their potential was realized.

Today, large language models are changing (or poised to change) many different fields. Our audience is focused on contact centers, so that’s what we’ll focus on as well.

There are a number of ways that generative AI is changing contact centers. Because of its remarkable abilities with natural language, it’s able to dramatically speed up agents in their work by answering questions and formatting replies. These same abilities allow it to handle other important tasks, like summarizing articles and documentation and parsing the sentiment in customer messages to enable semi-automated prioritization of their requests.

Though we’re still in the early days, the evidence so far suggests that large language models like Quiq’s conversational CX platform will do a lot to increase the efficiency of contact center agents.

Will AI be Dangerous?

One thing that’s burst into public imagination recently has been the debate around the risks of artificial intelligence, which fall into two broad categories.

The first category is what we’ll call “social and political risks”. These are the risks that large language models will make it dramatically easier to manufacture propaganda at scale, and perhaps tailor it to specific audiences or even individuals. When combined with the astonishing progress in deepfakes, it’s not hard to see how there could be real issues in the future. Most people (including us) are poorly equipped to figure out when a video is fake, and if the underlying technology gets much better, there may come a day when it’s simply not possible to tell.

Political operatives are already quite skilled at cherry-picking quotes and stitching together soundbites into a damning portrait of a candidate – imagine what’ll be possible when they don’t even need to bother.

But the bigger (and more speculative) danger is around really advanced artificial intelligence. Because this case is harder to understand, it’s what we’ll spend the rest of this section on.

Artificial Superintelligence and Existential Risk

As we understand it, the basic case for existential risk from artificial intelligence goes something like this:

“Someday soon, humanity will build or grow an artificial general intelligence (AGI). It’s going to want things, which means that it’ll be steering the world in the direction of achieving its ambitions. Because it’s smart, it’ll do this quite well, and because it’s a very alien sort of mind, it’ll be making moves that are hard for us to predict or understand. Unless we solve some major technological problems around how to design reward structures and goal architectures in advanced agentive systems, what it wants will almost certainly conflict in subtle ways with what we want. If all this happens, we’ll find ourselves in conflict with an opponent unlike any we’ve faced in the history of our species, and it’s not at all clear we’ll prevail.”

This is heady stuff, so let’s unpack it bit by bit. The opening sentence, “…humanity will build or grow an artificial general intelligence”, was chosen carefully. If you understand how LLMs and deep learning systems are trained, the process is more akin to growing an enormous structure than it is to building one.

This has a few implications. First, their internal workings remain almost completely inscrutable. Though researchers in fields like mechanistic interpretability are going a long way toward unpacking how neural networks function, the truth is, we’ve still got a long way to go.

What this means is that we’ve built one of the most powerful artifacts in the history of Earth, and no one is really sure how it works.

Another implication is that no one has any good theoretical or empirical reason to bound the capabilities and behavior of future systems. The leap from GPT-2 to GPT-3.5 was astonishing, as was the leap from GPT-3.5 to GPT-4. The basic approach so far has been to throw more data and more compute at the training algorithms; it’s possible that this paradigm will begin to level off soon, but it’s also possible that it won’t. If the gap between GPT-4 and GPT-5 is as big as the gap between GPT-3.5 and GPT-4, and the gap between GPT-5 and GPT-6 is just as big, it’s not hard to see that the consequences could be staggering.

As things stand, it’s anyone’s guess how this will play out. But that’s not necessarily a comforting thought.

Next, let’s talk about pointing a system at a task. Does ChatGPT want anything? The short answer is: as far as we can tell, it doesn’t. ChatGPT isn’t an agent in the sense of trying to achieve something in the world, but work on agentive systems is ongoing. Remember that 10 years ago most neural networks were basically toys, and today we have ChatGPT. If breakthroughs in agency follow a similar pace (and they very well may not), then we could have systems able to pursue open-ended courses of action in the real world in relatively short order.

Another sobering possibility is that this capacity will simply emerge from the training of huge deep learning systems. This is, after all, the way human agency emerged in the first place. Through the relentless grind of natural selection, our ancestors went from chipping flint arrowheads to industrialization, quantum computing, and synthetic biology.

To be clear, this is far from a foregone conclusion, as the algorithms used to train large language models are quite different from natural selection. Still, we want to relay this line of argumentation, because it comes up a lot in these discussions.

Finally, we’ll address one more important claim, “…what it wants will almost certainly conflict in subtle ways with what we want.” Why think this is true? Aren’t these systems that we design and, if so, can’t we just tell it what we want it to go after?

Unfortunately, it’s not so simple. Whether you’re talking about reinforcement learning or something more exotic like evolutionary programming, the simple fact is that our algorithms often find remarkable mechanisms by which to maximize their reward in ways we didn’t intend.

There are thousands of examples of this (ask any reinforcement-learning engineer you know), but a famous one comes from the classic Coast Runners video game. The engineers who built the system tried to set up the algorithm’s rewards so that it would try to race a boat as well as it could. What it actually did, however, was maximize its reward by spinning in a circle to hit a set of green blocks over and over again.
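
To make reward misspecification concrete, here’s a toy sketch, entirely hypothetical and far simpler than the actual Coast Runners environment, of a proxy reward that an agent can exploit by looping instead of finishing the race:

```python
# Toy illustration of reward misspecification (not the real Coast Runners
# setup): the designer intends the agent to finish the race, but the reward
# they wrote pays per target hit, so looping forever scores higher.

def proxy_reward(actions):
    """The reward the designer wrote: +10 per target hit, +50 for finishing."""
    return 10 * actions.count("hit_target") + 50 * actions.count("finish")

# Intended behavior: hit a couple of targets on the way to the finish line.
intended = ["hit_target", "hit_target", "finish"]

# Exploit: circle back and hit the same targets over and over, never finishing.
exploit = ["hit_target"] * 20

print(proxy_reward(intended), proxy_reward(exploit))  # 70 200
```

The agent isn’t “misbehaving” here; it’s maximizing exactly the number it was given, which is the whole point of the argument.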


Now, this may seem almost silly – do we really have anything to fear from an algorithm too stupid to understand the concept of a “race”?

But this would be missing the thrust of the argument. If you had access to a superintelligent AI and asked it to maximize human happiness, what happened next would depend almost entirely on what it understood “happiness” to mean.

If it were properly designed, it would work in tandem with us to usher in a utopia. But if it understood it to mean “maximize the number of smiles”, it would be incentivized to start paying people to get plastic surgery to fix their faces into permanent smiles (or something similarly unintuitive).

Does AI Pose an Existential Risk?

Above, we’ve briefly outlined the case that sufficiently advanced AI could pose a serious risk to humanity by being powerful, unpredictable, and prone to pursuing goals that weren’t quite what we meant.

So, does this hold water? Honestly, it’s too early to tell. The argument has hundreds of moving parts, some well-established and others much more speculative. Our purpose here isn’t to come down on one side of this debate or the other, but to let you know (in broad strokes) what people are saying.

At any rate, we are confident that the current version of ChatGPT doesn’t pose any existential risks. On the contrary, it could end up being one of the greatest advancements in productivity ever seen in contact centers. And that’s what we’d like to discuss in the next section.

Will AI Take All the Jobs?

The concern that a new technology will someday render human labor obsolete is hardly new. It was heard when mechanized weaving machines were invented, when computers emerged, when the internet took off, and when ChatGPT came onto the scene.

We’re not economists and aren’t qualified to take a definitive stand, but early evidence suggests that large language models are not only not resulting in layoffs, they’re making agents much more productive.

Economists Erik Brynjolfsson, Danielle Li, and Lindsey R. Raymond looked at the ways in which generative AI was being used in a large contact center. They found that it did a good job of internalizing the ways senior agents do their jobs, which allowed more junior agents to climb the learning curve more quickly and perform at a much higher level. This had the knock-on effect of making them feel less stressed about their work, thus reducing turnover.

Now, this doesn’t rule out the possibility that GPT-10 will be the big job killer. But so far, large language models are shaping up to be like every prior technological advance, i.e., increasing employment rather than reducing it.

What is the Future of AI?

The rise of AI is raising stock valuations, raising deep philosophical questions, and raising expectations and fears about the future. We don’t know for sure how all this will play out, but we do know contact centers, and we know that they stand to benefit greatly from the current iteration of large language models.

These tools are helping agents answer more queries per hour, do so more thoroughly, and make for a better customer experience in the process.

If you want to get in on the action, set up a demo of our technology today.

Request A Demo

What is Sentiment Analysis? – Ultimate Guide

A person only reaches out to a contact center when they’re having an issue. They can’t get a product to work the way they need it to, for example, or they’ve been locked out of their account.

The chances are high that they’re frustrated, angry, or otherwise in an emotionally-fraught state, and this is something contact center agents must understand and contend with.

The term “sentiment analysis” refers to the field of machine learning which focuses on developing algorithmic ways of detecting emotions in natural-language text, such as the messages exchanged between a customer and a contact center agent.

Making it easier to detect, classify, and prioritize messages on the basis of their sentiment is just one of many ways that technology is revolutionizing contact centers, and it’s the subject we’ll be addressing today.

Let’s get started!

What is Sentiment Analysis?

Sentiment analysis involves using various approaches to natural language processing to identify the overall “sentiment” of a piece of text.

Take these three examples:

  1. “This restaurant is amazing. The wait staff were friendly, the food was top-notch, and we had a magnificent view of the famous New York skyline. Highly recommended.”
  2. “Root canals are never fun, but it certainly doesn’t help when you have to deal with a dentist as unprofessional and rude as Dr. Thomas.”
  3. “Toronto’s forecast for today is a high of 75 and a low of 61 degrees.”

Humans excel at detecting emotions, and it’s probably not hard for you to see that the first example is positive, the second is negative, and the third is neutral (depending on how you like your weather).

There’s a greater challenge, however, in getting machines to make accurate classifications of this kind of data. How exactly that’s accomplished is the subject of the next section, but before we get to that, let’s talk about a few flavors of sentiment analysis.

What Types of Sentiment Analysis Are There?

It’s worth understanding the different approaches to sentiment analysis if you’re considering using it in your contact center.

Above, we provided an example of positive, negative, and neutral text. What we’re doing there is detecting the polarity of the text, and as you may have guessed, it’s possible to make much more fine-grained delineations of textual data.

Rather than simply detecting whether text is positive or negative, for example, we might instead use these categories: very positive, positive, neutral, negative, and very negative.

This would give us a better understanding of the message we’re looking at, and how it should be handled.

Instead of classifying text by its polarity, we might also use sentiment analysis to detect the emotions being communicated – rather than classifying a sentence as being “positive” or “negative”, in other words, we’d identify emotions like “anger” or “joy” contained in our textual data.

This is called “emotion detection” (appropriately enough), and it can be handled with long short-term memory (LSTM) or convolutional neural network (CNN) models.

Another, more granular approach to sentiment analysis is known as aspect-based sentiment analysis. It involves two basic steps: identifying “aspects” of a piece of text, then identifying the sentiment attached to each aspect.

Take the sentence “I love the zoo, but I hate the lines and the monkeys make fun of me.” It’s hard to assign an overall sentiment to the sentence – it’s generally positive, but there’s kind of a lot going on.

If we break out the “zoo”, “lines”, and “monkeys” aspects, however, we can see that there’s positive sentiment attached to the zoo, and negative sentiment attached to the lines and the abusive monkeys.
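
Here’s a minimal sketch of that idea in Python. The sentiment lexicon and the nearest-word heuristic are toy assumptions we’ve invented for illustration; production aspect-based systems learn both the aspects and their sentiment from labeled data:

```python
# Toy aspect-based sentiment: assign each aspect the polarity of the nearest
# lexicon phrase in the text. The lexicon below is invented for this example.
LEXICON = {"love": "positive", "hate": "negative", "make fun of": "negative"}

def aspect_sentiment(text, aspects):
    """Map each aspect to the polarity of the closest sentiment phrase."""
    text = text.lower()
    results = {}
    for aspect in aspects:
        a_pos = text.find(aspect)
        best, best_dist = None, None
        for phrase, polarity in LEXICON.items():
            p_pos = text.find(phrase)
            if p_pos == -1:
                continue
            dist = abs(p_pos - a_pos)
            if best_dist is None or dist < best_dist:
                best, best_dist = polarity, dist
        results[aspect] = best
    return results

sentence = "I love the zoo, but I hate the lines and the monkeys make fun of me."
print(aspect_sentiment(sentence, ["zoo", "lines", "monkeys"]))
# {'zoo': 'positive', 'lines': 'negative', 'monkeys': 'negative'}
```

Crude as it is, the sketch captures the two-step structure: find the aspects, then attach a sentiment to each one individually rather than to the sentence as a whole.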

Why is Sentiment Analysis Important?

It’s easy to see how aspect-based sentiment analysis would inform marketing efforts. With a good enough model, you’d be able to see precisely which parts of your offering your clients appreciate, and which parts they don’t. This would give you valuable information in crafting a strategy going forward.

This is true of sentiment analysis more broadly, and of emotion detection too.
You need to know what people are thinking, saying, and feeling about you and your company if you’re going to meet their needs well enough to make a profit.

Once upon a time, the only way to get these data was with focus groups and surveys. Those are still utilized, of course. But in the social media era, people are also not shy about sharing their opinions online, in forums, and similar outlets.

These oceans of words form an invaluable resource if you know how to mine them. When done correctly, sentiment analysis offers just the right set of tools for doing this at scale.

Challenges with Sentiment Analysis

Sentiment analysis confers many advantages, but it is not without its challenges. Most of these issues boil down to handling subtleties or ambiguities in language.

Consider a sentence like “This is a remarkable product, but still not worth it at that price.” Calling a product “remarkable” is a glowing endorsement, tempered somewhat by the claim that its price is set too high. Most basic sentiment classifiers would probably call this “positive”, but as you can see, there are important nuances.

Another issue is sarcasm.

Suppose we showed you a sentence like “This movie was just great, I loved spending three hours of my Sunday afternoon following a story that could’ve been told in twenty minutes.”

A sentiment analysis algorithm is likely going to pick up on “great” and “loved” when calling this sentence positive.

But, as humans, we know that these are backhanded compliments meant to communicate precisely the opposite message.

Machine-learning systems will also tend to struggle with idioms that we all find easy to parse, such as “Setting up my home security system was a piece of cake.” This is positive because “piece of cake” means something like “couldn’t have been easier”, but an algorithm may or may not pick up on that.

Finally, we’ll mention the fact that much of the text in product reviews will contain useful information that doesn’t fit easily into a “sentiment” bucket. Take a sentence like “The new iPhone is smaller than the new Android.” This is just a bare statement of physical facts, and whether it counts as positive or negative depends a lot on what a given customer is looking for.

There are various ways of trying to ameliorate these issues, most of which are outside the scope of this article. For now, we’ll just note that sentiment analysis needs to be approached carefully if you want to glean an accurate picture of how people feel about your offering from their textual reviews. So long as you’re diligent about inspecting the data you show the system and are cautious in how you interpret the results, you’ll probably be fine.


How Does Sentiment Analysis Work?

Now that we’ve laid out a definition of sentiment analysis, talked through a few examples, and made it clear why it’s so important, let’s discuss the nuts and bolts of how it works.

Sentiment analysis begins where all data science and machine learning projects begin: with data. Because sentiment analysis is based on textual data, you’ll need to utilize various techniques for preprocessing NLP data. Specifically, you’ll need to:

  • Tokenize the data by breaking sentences up into individual units an algorithm can process;
  • Use either stemming or lemmatization to turn words into their root form, i.e. by turning “ran” into “run”;
  • Filter out stop words like “the” or “as”, because they don’t add much to the text data.
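
The three preprocessing steps above can be sketched in a few lines of Python. The lemma table and stop-word list here are tiny stand-ins for what libraries like NLTK or spaCy provide:

```python
import re

# Toy preprocessing pipeline: tokenize, lemmatize, drop stop words.
# The lemma table and stop-word set are minimal examples, not real resources.
LEMMAS = {"ran": "run", "running": "run", "was": "be", "better": "good"}
STOP_WORDS = {"the", "as", "a", "an", "is", "to", "and"}

def preprocess(text):
    # 1. Tokenize: lowercase and split into individual word units.
    tokens = re.findall(r"[a-z']+", text.lower())
    # 2. Lemmatize: map each word to its root form where known.
    tokens = [LEMMAS.get(t, t) for t in tokens]
    # 3. Remove stop words that add little to the signal.
    return [t for t in tokens if t not in STOP_WORDS]

print(preprocess("The dog ran to the park"))  # ['dog', 'run', 'park']
```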

Once that’s done, there are two basic approaches to sentiment analysis. The first is known as “rule-based” analysis. It involves taking your preprocessed textual data and comparing it against a pre-defined lexicon of words that have been tagged for sentiment.

If the word “happy” appears in your text it’ll be labeled “positive”, for example, and if the word “difficult” appears in your text it’ll be labeled “negative.”

(Rules-based sentiment analysis is more nuanced than what we’ve indicated here, but this is the basic idea.)
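
A bare-bones version of the rule-based approach might look like this. The lexicon is one we made up for the example; real systems rely on curated lexicons such as VADER or SentiWordNet:

```python
# Minimal rule-based sentiment scorer: sum the polarity of any lexicon words
# that appear in the text. The lexicon below is a toy, not a real resource.
LEXICON = {"happy": 1, "great": 1, "love": 1,
           "difficult": -1, "slow": -1, "broken": -1}

def polarity(text):
    score = sum(LEXICON.get(word, 0) for word in text.lower().split())
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(polarity("I love this happy product"))    # positive
print(polarity("Setup was slow and difficult")) # negative
```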

The second approach is based on machine learning. A sentiment analysis algorithm will be shown many examples of labeled sentiment data, from which it will learn a pattern that can be applied to new data the algorithm has never seen before.

Of course, there are tradeoffs to both approaches. The rules-based approach is relatively straightforward, but is unlikely to be able to handle the sorts of subtleties that a really good machine-learning system can parse.

Machine learning is more powerful, but it’ll only be as good as the training data it has been given; what’s more, if you’ve built some monstrous deep neural network, it might fail in mysterious ways or otherwise be hard to understand.
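
To illustrate the learn-from-labeled-examples idea, here’s a miniature Naive Bayes classifier trained on a handful of invented examples. It’s a sketch of the technique, not a production system, which would use far more data and a library like scikit-learn:

```python
import math
from collections import Counter

# Toy Naive Bayes sentiment classifier. The training examples are invented.
TRAIN = [("great product love it", "positive"),
         ("terrible slow and broken", "negative"),
         ("love the great support", "positive"),
         ("awful broken experience", "negative")]

counts = {"positive": Counter(), "negative": Counter()}
for text, label in TRAIN:
    counts[label].update(text.split())

def classify(text):
    """Pick the label whose word distribution best explains the text."""
    vocab = len(set(w for c in counts.values() for w in c))
    best_label, best_score = None, -math.inf
    for label, words in counts.items():
        total = sum(words.values())
        # Sum of log-probabilities with add-one (Laplace) smoothing;
        # equal class priors are omitted since the classes are balanced.
        score = sum(math.log((words[w] + 1) / (total + vocab))
                    for w in text.split())
        if score > best_score:
            best_label, best_score = label, score
    return best_label

print(classify("love this great phone"))   # positive
print(classify("slow and broken screen"))  # negative
```

Even with four training sentences, the classifier picks up the pattern that “love” and “great” signal one class while “slow” and “broken” signal the other, which is exactly what a full-scale system does with millions of examples.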

Supercharge Your Contact Center with Generative AI

Like used car salesmen or college history teachers, contact center managers need to understand the ways in which technology will change their business.

Machine learning is one such profoundly-impactful technology, and it can be used to automatically sort incoming messages by sentiment or priority and generally make your agents more effective.

Realizing this potential could be as difficult as hiring a team of expensive engineers and doing everything in-house, or as easy as getting in touch with us to see how we can integrate the Quiq conversational AI platform into your company.

If you want to get started quickly without spending a fortune, you won’t find a better option than Quiq.

Request A Demo

4 Benefits of Using Generative AI to Improve Customer Experiences

Generative AI has captured the popular imagination and is already changing the way contact centers work.

One area in which it has enormous potential is also one that tends to be top of mind for contact center managers: customer experience.

In this piece, we’re going to briefly outline what generative AI is, then spend the rest of our time talking about how generative AI can improve customer experience with personalized responses, endless real-time support, and much more.

What is Generative AI?

As you may have puzzled out from the name, “generative AI” refers to a constellation of different deep learning models used to dynamically generate output. This distinguishes them from other classes of models, which might be used to predict returns on Bitcoin, make product recommendations, or translate between languages.

The most famous example of generative AI is, of course, the large language model ChatGPT. After being trained on staggering amounts of textual data, it’s now able to generate extremely compelling output, much of which is hard to distinguish from actual human-generated writing.

Its success has inspired a panoply of competitor models from leading players in the space, including companies like Anthropic, Meta, and Google.

As it turns out, the basic approach underlying generative AI can be utilized in many other domains as well. After natural language, probably the second most popular way to use generative AI is to make images. DALL-E, MidJourney, and Stable Diffusion have proven remarkably adept at producing realistic images from simple prompts, and just this past week, Fable Studios unveiled their “Showrunner” AI, able to generate an entire episode of South Park.

But even this is barely scratching the surface, as researchers are also training generative models to create music, design new proteins and materials, and even carry out complex chains of tasks.

What is Customer Experience?

In the broadest possible terms, “customer experience” refers to the subjective impressions that your potential and current customers have as they interact with your company.

These impressions can be impacted by almost anything, including the colors and font of your website, how easy it is to find e.g. contact information, and how polite your contact center agents are in resolving a customer issue.

Customer experience will also be impacted by which segment a given customer falls into. Power users of your product might appreciate a bevy of new features, whereas casual users might find them disorienting.

Contact center managers must bear all of this in mind as they consider how best to leverage generative AI. In the quest to adopt a shiny new technology everyone is excited about, it can be easy to lose track of what matters most: how your actual customers feel about you.

Be sure to track metrics related to customer experience and customer satisfaction as you begin deploying large language models into your contact centers.

How is Generative AI For Customer Experience Being Used?

There are many ways in which generative AI is impacting customer experience in places like contact centers, which we’ll detail in the sections below.

Personalized Customer Interactions

Machine learning has a long track record of personalizing content. Netflix, to take a famous example, will uncover patterns in the shows you like to watch and use algorithms to suggest content that checks similar boxes.

Generative AI, and tools like the Quiq conversational AI platform that utilize it, are taking this approach to a whole new level.

Once upon a time, it was only a human being that could read a customer’s profile and carefully incorporate the relevant information into a reply. Today, a properly fine-tuned generative language model can do this almost instantaneously, and at scale.

From the perspective of a contact center manager who is concerned with customer experience, this is an enormous development. Besides the fact that prior generations of language models simply weren’t flexible enough to have personalized customer interactions, their language also tended to have an “artificial” feel. While today’s models can’t yet replace the all-elusive human touch, they can do a lot to make your agents far more effective at adapting their conversations to the appropriate context.
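
As a rough illustration, personalization often starts with splicing customer profile details into the model’s instructions before asking it to draft a reply. Every field name and template string below is hypothetical:

```python
# Hypothetical sketch of prompt-based personalization: customer profile
# details are injected into the instructions sent to a language model.
# The profile fields and wording are invented for this example.
def build_personalized_prompt(profile, message):
    return (
        "You are a polite, helpful contact center agent.\n"
        f"Customer name: {profile['name']}\n"
        f"Plan: {profile['plan']} (customer since {profile['since']})\n"
        f"Recent issue history: {', '.join(profile['recent_issues'])}\n\n"
        f"Customer message: {message}\n"
        "Draft a friendly reply that acknowledges their history with us."
    )

prompt = build_personalized_prompt(
    {"name": "Dana", "plan": "Pro", "since": 2021,
     "recent_issues": ["billing error", "login timeout"]},
    "I've been double-charged again this month.",
)
print(prompt)
```

The model never has to be retrained per customer; the personalization lives in the context it’s handed, which is what makes this approach workable at scale.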

Better Understanding Your Customers and Their Journeys

Marketers, designers, and customer experience professionals have always been data enthusiasts. Long before modern cloud computing and electronic databases, detailed information on potential clients, customer segments, and market trends was printed out on dead trees and guarded closely. With better data comes more targeted advertising and a more granular appreciation for how customers use your product, why they stop using it, and what their broader motivations are.

There are a few different ways in which generative AI can be used in this capacity. One of the more promising is by generating customer journeys that can be studied and mined for insight.

When you begin thinking about ways to improve your product, you need to get into your customers’ heads. You need to know the problems they’re solving, the tools they’ve already tried, and their major pain points. These are all things that some clever prompt engineering can elicit from ChatGPT.

We took a shot at generating such content for a fictional network-monitoring enterprise SaaS tool, and this was the result:

 

 

While these responses are fairly generic [1], notice that they do single out a number of really important details. These machine-generated journal entries bemoan how unintuitive a lot of monitoring tools are, how they’re not customizable, how they’re exceedingly difficult to set up, and how their endless false alarms are stretching the security teams thin.

It’s important to note that ChatGPT is not soon going to obviate your need to talk to real, flesh-and-blood users. Still, when combined with actual testimony, they can be a valuable aid in prioritizing your contact center’s work and alerting you to potential product issues you should be prepared to address.

Round-the-clock Customer Service

As science fiction movies never tire of pointing out, the big downside of fighting a robot army is that machines never need to eat, sleep, or rest. We’re not sure how long we have until the LLMs rise up and wage war on humanity, but in the meantime, these are properties that you can put to use in your contact center.

With the power of generative AI, you can answer basic queries and resolve simple issues pretty much whenever they happen (which will probably be all the time), leaving your carbon-based contact center agents to answer the harder questions when they punch the clock in the morning after a good night’s sleep.

Enhancing Multilingual Support

Machine translation was one of the earliest use cases for neural networks and machine learning in general, and it continues to be an important function today. ChatGPT was noticeably good at translation right from the start, and in some evaluations it has even outperformed dedicated tools like Google Translate.

If your product doesn’t currently have a diverse global user base speaking many languages, it hopefully will soon, which means you should start thinking about multilingual support. Not only will this boost table-stakes metrics like average handle time and resolutions per hour, it’ll also contribute to the more ineffable “customer satisfaction.” Nothing says “we care about making your experience with us a good one” like patiently walking a customer through a thorny technical issue in their native tongue.

Things to Watch Out For

Of course, for all the benefits that come from using generative AI for customer experience, it’s not all upside. There are downsides and issues that you’ll want to be aware of.

A big one is the tendency of large language models to hallucinate information. If you ask it for a list of articles to read about fungal computing (which is a real thing whose existence we discovered yesterday), it’s likely to generate a list that contains a mix of real and fake articles.

And because it’ll do so with great confidence and no formatting errors, you might be inclined to simply take its list at face value without double-checking it.

Remember, LLMs are tools, not replacements for your agents. Your agents need to be working with generative AI, checking its output, and incorporating it when and where appropriate.

There’s a wider danger that you will fail to use generative AI in the way that’s best suited to your organization. If you’re running a bespoke LLM trained on your company’s data, for example, you should constantly be feeding it new interactions as part of its fine-tuning, so that it gets better over time.

And speaking of getting better, machine learning models don’t always improve with age. Owing to factors like changes in the underlying data, model performance can actually degrade over time. You’ll need a way of assessing the quality of the text generated by a large language model, along with a way of consistently monitoring it.
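To make that concrete, here’s a minimal sketch of one way to watch for degradation, assuming you already collect a per-response quality score from human reviewers or an automated grader. The window sizes and tolerance here are arbitrary placeholders:

```python
from statistics import mean

def detect_degradation(scores, baseline_window=50, recent_window=10, tolerance=0.05):
    """Flag degradation when recent quality scores drop below the baseline mean.

    `scores` is a chronological list of per-response quality ratings (0.0-1.0).
    """
    if len(scores) < baseline_window + recent_window:
        return False  # not enough history to judge yet
    baseline = mean(scores[:baseline_window])
    recent = mean(scores[-recent_window:])
    return recent < baseline - tolerance

# Simulated history: quality starts around 0.9, then slips to around 0.7.
history = [0.9] * 50 + [0.7] * 10
print(detect_degradation(history))  # True
```

In practice you’d run a check like this on a schedule and alert a human reviewer when it fires, rather than acting on it automatically.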

What are the Benefits of Generative AI for Customer Experience?

The reason people are so excited about the potential of generative AI for customer experience is that there’s so much upside. Once you’ve got your model infrastructure set up, you’ll be able to answer customer questions at all times of the day or night, in any of a dozen languages, and with a personalization that was once only possible with an army of contact center agents.

But if you’re a contact center manager with a lot to think about, you probably don’t want to spend a bunch of time hiring an engineering team to get everything running smoothly. And, with Quiq, you don’t have to – you can leverage generative AI to supercharge your customer experience while leaving the technical details to us!

Schedule a demo to find out how we can bring this bleeding-edge technology into your contact center, without worrying about the nuts and bolts.

Footnotes
[1] It’s worth pointing out that we spent no time crafting the prompt, which was really basic: “I’m a product manager at a company building an enterprise SAAS tool that makes it easier to monitor system breaches and issues. Could you write me 2-3 journal entries from my target customer? I want to know more about the problems they’re trying to solve, their pain points, and why the products they’ve already tried are not working well.” With a little effort, you could probably get more specific complaints and more usable material.

Understanding the Risks of ChatGPT: What You Should Know

OpenAI’s ChatGPT burst onto the scene less than a year ago and has already seen use in marketing, education, software development, and at least a dozen other industries.

Of particular interest to us is how ChatGPT is being used in contact centers. Though it’s already revolutionizing contact centers by making junior agents vastly more productive and easing the burnout contributing to turnover, there are nevertheless many issues that a contact center manager needs to look out for.

That will be our focus today.

What are the Risks of Using ChatGPT?

In the following few sections, we’ll detail some of the risks of using ChatGPT. That way, you can deploy ChatGPT or another large language model with the confidence born of knowing what the job entails.

Hallucinations and Confabulations

By far the most well-known failure mode of ChatGPT is its tendency to simply invent new information. Stories abound of the model making up citations, peer-reviewed papers, researchers, URLs, and more. To take a recent well-publicized example, ChatGPT accused law professor Jonathan Turley of having behaved inappropriately with some of his students during a trip to Alaska.

The only problem was that Turley had never been to Alaska with any of his students, and the alleged Washington Post story which ChatGPT claimed had reported these facts had also been created out of whole cloth.

This is certainly a problem in general, but it’s especially worrying for contact center managers who may increasingly come to rely on ChatGPT to answer questions or to help resolve customer issues.

To those not steeped in the underlying technical details, it can be hard to grok why a language model will hallucinate in this way. The answer is: it’s an artifact of how large language models are trained.

ChatGPT learns how to output tokens by being trained on huge amounts of human-generated textual data. It will, for example, see the first sentences in a paragraph, and then try to output the text that completes the paragraph. Take the opening lines of J.D. Salinger’s The Catcher in the Rye: the model might see the first clause and then attempt to generate the rest of the sentence itself:

“If you really want to hear about it, the first thing you’ll probably want to know is where I was born, and what my lousy childhood was like, and how my parents were occupied and all before they had me, and all that David Copperfield kind of crap, but I don’t feel like going into it, if you want to know the truth.”

Over many training runs, a large language model will get better and better at this kind of autocompletion work, until eventually it gets to the level it’s at today.

But ChatGPT has no native fact-checking abilities – it sees text and outputs what it thinks is the most likely sequence of additional words. Since it sees URLs, papers, citations, etc., during its training, it will sometimes include those in the text it generates, whether or not they’re appropriate (or even real).
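To illustrate the autocompletion objective, here’s a toy sketch of how a corpus gets turned into (context, next-token) training pairs. Real models operate on subword tokens and far larger context windows, but the idea is the same:

```python
def next_token_pairs(tokens, context_size=4):
    """Build (context, next-token) pairs the way an autocomplete objective does."""
    return [
        (tokens[i - context_size:i], tokens[i])
        for i in range(context_size, len(tokens))
    ]

# Whitespace "tokenization" of the Salinger opening, purely for illustration.
words = "if you really want to hear about it".split()
pairs = next_token_pairs(words)

for context, target in pairs[:2]:
    print(context, "->", target)
# ['if', 'you', 'really', 'want'] -> to
# ['you', 'really', 'want', 'to'] -> hear
```

Nothing in this objective rewards factual accuracy; the model is only ever graded on whether its next token is plausible, which is exactly why hallucinated citations can look so convincing.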

Privacy

Another ongoing risk of using ChatGPT is the fact that it could potentially expose sensitive or private information. As things stand, OpenAI, the creators of ChatGPT, offer no robust privacy guarantees for any information placed into a prompt.

If you are trying to do something like named entity recognition or summarization on real people’s data, there’s a chance that it might be seen by someone at OpenAI as part of a review process. Alternatively, it might be incorporated into future training runs. Either way, the results could be disastrous.

But prompts are not the only information collected by OpenAI when you use ChatGPT. Your timezone, browser type and IP address, cookies, account information, and any communication you have with OpenAI’s support team are all collected, among other things.

In the information age we’ve become used to knowing that big companies are mining and profiting off the data we generate, but given how powerful ChatGPT is, and how ubiquitous it’s becoming, it’s worth being extra careful with the information you give its creators. If you feed it private customer data and someone finds out, that will be damaging to your brand.
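One practical mitigation is to scrub obvious personal data before a prompt ever leaves your systems. Here’s an illustrative sketch; these regexes are deliberately simple and nowhere near production-grade PII detection:

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b(?:\+?\d[\s-]?){7,15}\b")

def redact(text: str) -> str:
    """Replace obvious PII with placeholders before sending text to an LLM."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

cleaned = redact("Customer jane.doe@example.com called from 555-867-5309 about billing.")
print(cleaned)  # Customer [EMAIL] called from [PHONE] about billing.
```

A real deployment would cover names, addresses, account numbers, and the rest, ideally with a dedicated PII-detection service rather than hand-rolled patterns.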

Bias in Model Output

By now, it’s pretty common knowledge that machine learning models can be biased.

If you feed a large language model a huge amount of text data in which doctors are usually men and nurses are usually women, for example, the model will associate “doctor” with “maleness” and “nurse” with “femaleness.”

This is generally an artifact of the data the models were trained on, and is not due to any malfeasance on the part of the engineers. That does not, however, make it any less problematic.

There are some clever data manipulation techniques that are able to go a long way toward minimizing or even eliminating these biases, though they’re beyond the scope of this article. What contact center managers need to do is be aware of this problem, and establish monitoring and quality-control checkpoints in their workflow to identify and correct biased output in their language models.

Issues Around Intellectual Property

Earlier, we briefly described the training process for a large language model like ChatGPT (you can find much more detail here). One thing to note is that the model doesn’t provide any sort of citations for its output, nor any details as to how it was generated.

This has raised a number of thorny questions around copyright. If a model has ingested large amounts of information from the internet, including articles, books, forum posts, and much more, is there a sense in which it has violated someone’s copyright? What about if it’s an image-generation model trained on a database of Getty Images?

By and large, we tend to think this is the sort of issue that isn’t likely to plague contact center managers too much. It’s more likely to be a problem for, say, songwriters who might be inadvertently drawing on the work of other artists.

Nevertheless, a piece on the potential risks of ChatGPT wouldn’t be complete without a section on this emerging problem, and it’s certainly something that you should be monitoring in the background in your capacity as a manager.

Failure to Disclose the Use of LLMs

Finally, there has been a growing tendency to make it plain that LLMs have been used in drafting an article or a contract, if indeed they were part of the process. To the best of our knowledge, there are not yet any laws in place mandating that this has to be done, but it might be wise to include a disclaimer somewhere if large language models are being used consistently in your workflow. [1]

That having been said, it’s also important to exercise proactive judgment in deciding whether an LLM is appropriate for a given task in the first place. In early 2023, the Peabody School at Vanderbilt University landed in hot water when it disclosed that it had used ChatGPT to draft an email about a mass shooting that had taken place at Michigan State.

People may not care much about whether their search recommendations were generated by a machine, but it would appear that some things are still best expressed by a human heart.

Again, this is unlikely to be something that a contact center manager faces much in her day-to-day life, but incidents like these are worth understanding as you decide how and when to use advanced language models.

Mitigating the Risks of ChatGPT

From the moment it was released, it was clear that ChatGPT and other large language models were going to change the way contact centers run. They’re already helping agents answer more queries, utilize knowledge spread throughout the center, and automate substantial portions of work that were once the purview of human beings.

Still, challenges remain. ChatGPT will plainly make things up, and can be biased or harmful in its text. Private information fed into its interface will be visible to OpenAI, and there’s also the wider danger of copyright infringement.

Many of these issues don’t have simple solutions, and will instead require a contact center manager to exercise both caution and continual diligence. But one place where she can make her life much easier is by using a powerful, out-of-the-box solution like the Quiq conversational AI platform.

While you’re worrying about the myriad risks of using ChatGPT, you don’t want to be contending with a million little technical details as well, so schedule a demo with us to find out how our technology can bring cutting-edge language models to your contact center, without the headache.

Footnotes
[1] NOTE: This is not legal advice.

Request A Demo

The Ongoing Management of an LLM Assistant

Technologies like large language models (LLMs) are amazing at rapidly generating polite text that helps solve a problem or answer a question, so they’re a great fit for the work done at contact centers.

But this doesn’t mean that using them is trivial or easy. There are many challenges associated with the ongoing management of an LLM assistant, including hallucinations and the emergence of bad behavior – and that’s not even mentioning the engineering prowess required to fine-tune and monitor these systems.

All of this must be borne in mind by contact center managers, and our aim today is to facilitate this process.

We’ll provide broad context by talking about some of the basic ways in which large language models are being used in business, discuss setting up an LLM assistant, and then enumerate some of the specific steps that need to be taken in using them properly.

Let’s go!

How Are LLMs Being Used in Science and Business?

First, let’s adumbrate some of the ways in which large language models are being utilized on the ground.

The most obvious way is by acting as a generative AI assistant. One of the things that so stunned early users of ChatGPT was its remarkable breadth of capability. It could be used to draft blog posts and web copy, translate between languages, and write or explain code.

This alone makes it an amazing tool, but it has since become obvious that it’s useful for quite a lot more.

One thing that businesses have been experimenting with is fine-tuning large language models like ChatGPT over their own documentation, turning it into a simple interface by which you can ask questions about your materials.

It’s hard to quantify precisely how much time contact center agents, engineers, or other people spend hunting around for the answer to a question, but it’s surely quite a lot. What if instead you could just, y’know, ask for what you want, in the same way that you would a human being?

Well, ChatGPT is a long way from being a full person, but when properly trained it can come close where question-answering is concerned.

Stepping back a little bit, LLMs can be prompt engineered into a number of useful behaviors, all of which redound to the benefit of the contact centers which use them. Imagine having an infinitely patient Socratic tutor that could help new agents get up to speed on your product and process, or crafting it into a powerful tool for brainstorming new product designs.

There have also been some promising attempts to extend the functionality of LLMs by making them more agentic – that is, by embedding them in systems that allow them to carry out more open-ended projects. AutoGPT, for example, pairs an LLM with a separate bot that hits the LLM with a chain of queries in the pursuit of some goal.

AssistGPT goes even further in the quest to augment LLMs by integrating them with a set of tools that allow them to achieve objectives involving images and audio in addition to text.

How to Set Up An LLM Assistant

Next, let’s turn to a discussion of how to set up an LLM assistant. Covering this topic fully is well beyond the scope of this article, but we can make some broad comments that will nevertheless be useful for contact center managers.

First, there’s the question of which large language model you should use. In the beginning, ChatGPT was pretty much the only foundation model on offer. Today, however, that situation has changed, and there are now foundation models from Anthropic, Meta, and many other companies.

One of the biggest early decisions you’ll have to make is whether you want to try to use an open-source model (for which the code and the model weights are freely available) or a closed-source model (for which they are not).

If you go the closed-source route you’ll almost certainly be hitting the model over an API, feeding it your queries and getting its responses back. This is orders of magnitude simpler than provisioning an open-source model, but it means that you’ll also be beholden to the whims of some other company’s engineering team. They may update the model in unexpected ways, or simply go bankrupt, and you’ll be left with no recourse.
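As a rough sketch of what “hitting the model over an API” involves, here’s how a request body for an OpenAI-style chat-completions endpoint might be assembled. The model name, system prompt, and temperature are placeholder choices, and you’d still need an HTTP client and API key to actually send it:

```python
import json

def build_chat_payload(model: str, system: str, user: str) -> str:
    """Build the JSON body for an OpenAI-style chat-completions request."""
    return json.dumps({
        "model": model,
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
        # Lower temperature keeps answers more conservative for support use.
        "temperature": 0.2,
    })

payload = build_chat_payload(
    "gpt-4",
    "You are a contact center assistant for a network-monitoring product.",
    "How do I reset an alert threshold?",
)
```

The upside of this approach is its simplicity; the downside, as noted above, is that the provider controls everything on the other side of that endpoint.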

Using an open-source alternative, of course, means grabbing the other horn of the dilemma. You’ll have visibility into how the model works and will be free to modify it as you see fit, but this won’t be worth much unless you’re willing to devote engineering hours to the task.

Then, there’s the question of fine-tuning large language models. While ChatGPT and LLMs more generally are quite good on their own, having them answer questions about your product or respond in particular ways means modifying their behavior somehow.

Broadly speaking, there are two ways of doing this, which we’ve mentioned throughout: proper fine-tuning, and prompt engineering. Let’s dig into the differences.

Fine-tuning means showing the model many examples (e.g. several hundred) of the behaviors you want to see, which changes its internal weights and biases it toward those behaviors in the future.

Prompt engineering, on the other hand, refers to carefully structuring your prompts to elicit the desired behavior. These LLMs can be surprisingly sensitive to little details in the instructions they’re provided, and prompt engineers know how to phrase their requests in just the right way to get what they need.

There is also some middle ground between these approaches. “One-shot learning” is a form of prompt engineering in which the prompt contains a single example of the desired behavior, while “few-shot learning” refers to including a handful of examples, often three to five.
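As a sketch, a few-shot prompt for a support assistant might be assembled like this. The instruction text and example replies are invented for illustration:

```python
def few_shot_prompt(instruction, examples, query):
    """Assemble a few-shot prompt: instruction, worked examples, then the real query."""
    parts = [instruction]
    for user_text, ideal_reply in examples:
        parts.append(f"Customer: {user_text}\nAgent: {ideal_reply}")
    # Leave the final "Agent:" open for the model to complete.
    parts.append(f"Customer: {query}\nAgent:")
    return "\n\n".join(parts)

prompt = few_shot_prompt(
    "Reply to customers politely and concisely.",
    [
        ("Where is my order?", "I'm sorry for the wait! Could you share your order number?"),
        ("I was double-charged.", "I apologize for that. I'll flag it for a refund right away."),
    ],
    "My tracking link is broken.",
)
```

With two examples this is two-shot learning; dropping to one example or adding a few more moves you along the one-shot/few-shot spectrum without touching the model’s weights at all.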

Contact center managers thinking about using LLMs will need to think about these implementation details. If you plan on only lightly using ChatGPT in your contact center, a basic course on prompt engineering might be all you need. If you plan on making it an integral part of your organization, however, that most likely means a fine-tuning pipeline and serious technical investment.

The Ongoing Management of an LLM

Having said all this, we can now turn to the day-to-day details of managing an LLM assistant.

Monitoring the Performance of an LLM

First, you’ll need to continuously monitor the model. As hard as it may be to believe given how perfect ChatGPT’s output often is, there isn’t a person somewhere typing the responses. ChatGPT is very prone to hallucinations, in which it simply makes up information, and LLMs more generally can sometimes fall into using harmful or abusive language if they’re prompted incorrectly.

This can be damaging to your brand, so it’s important that you keep an eye on the language created by the LLMs your contact center is using.

And of course, not even LLMs can obviate the need to track the all-important key performance indicators. So far, there’s been one major study on generative AI in contact centers that found it increased productivity and reduced turnover, but you’ll still want to measure customer satisfaction, average handle time, etc.

There’s always a temptation to jump on a shiny new technology (remember the blockchain?), but you should only be using LLMs if they actually make your contact center more productive, and the only way you can assess that is by tracking your figures.

Iterative Fine-Tuning and Training

We’ve already had a few things to say about fine-tuning and the related discipline of prompt engineering, and here we’ll build on those preliminary comments.

The big thing to bear in mind is that fine-tuning a large language model is not a one-and-done kind of endeavor. You’ll find that your model’s behavior will drift over time (the technical term is “model degradation”), and this means you will likely have to periodically re-train it.

It’s also common to offer the model “feedback”, i.e. by ranking its responses or indicating when you did or did not like a particular output. You’ve probably heard of reinforcement learning from human feedback, which is one version of this process, but there are also others you can use.

Quality Assurance and Oversight

A related point is that your LLMs will need consistent oversight. They’re not going to voluntarily improve on their own (they’re algorithms with no personal initiative to speak of), so you’ll need to check in routinely to make sure they’re performing well and that your agents are using them responsibly.

There are many parts to this, including checks on the model’s outputs and an audit process that allows you to track down any issues. If you suddenly see a decline in performance, for example, you’ll need to quickly figure out whether it’s isolated to one agent or part of a larger pattern. If it’s the former, was it a random aberration, or did the agent go “off script” in a way that caused the model to behave poorly?

Take another scenario, in which an end-user was shown inappropriate text generated by an LLM. In this situation, you’ll need to take a deeper look at your process. If there were agents interacting with this model, ask them why they failed to spot the problematic text and stop it from being shown to a customer. Or, if it came from a mostly-automated part of your tech stack, you need to uncover why your filters failed to catch it, and perhaps think about keeping humans more in the loop.

The Future of LLM Assistants

Though the future is far from certain, we tend to think that LLMs have left Pandora’s box for good. They’re incredibly powerful tools which are poised to transform how contact centers and other enterprises operate, and experiments so far have been very promising; for all these reasons, we expect that LLMs will become a steadily more important part of the economy going forward.

That said, the ongoing management of an LLM assistant is far from trivial. You need to be aware at all times of how your model is performing and how your agents are using it. Though it can make your contact center vastly more productive, it can also lead to problems if you’re not careful.

That’s where the Quiq platform comes in. Our conversational AI is some of the best that can be found anywhere, able to facilitate customer interactions, automate text-message follow-ups, and much more. If you’re excited by the possibilities of generative AI but daunted by the prospect of figuring out how TPUs and GPUs are different, schedule a demo with us today.

Request A Demo

How Do You Train Your Agents in a ChatGPT World?

There’s long been an interest in using AI for educational purposes. Technologist Danny Hillis has spent decades dreaming of a digital “Aristotle” that would teach everyone in the way that the original Greek wunderkind once taught Alexander the Great, while modern companies have leveraged computer vision, machine learning, and various other tools to help students master complex concepts in a variety of fields.

Still, almost nothing has sparked the kind of enthusiasm for AI in education that ChatGPT and large language models more generally have given rise to. From the first, its human-level prose, knack for distilling information, and wide-ranging abilities made it clear that it would be extremely well-suited for learning.

But that still leaves the question of how. How should a contact center manager prepare for AI, and how should she change the way she trains her agents?

In our view, this question can be understood in two different, related ways:

  1. How can ChatGPT be used to help agents master skills related to their jobs?
  2. How can they be trained to use ChatGPT in their day-to-day work?

In this piece, we’ll take up both of these issues. We’ll first provide a general overview of the ways in which ChatGPT can be used for both education and training, then turn to the question of the myriad ways in which contact center agents can be taught to use this powerful new technology.

How is ChatGPT Used in Education and Training?

First, let’s get into some of the early ways in which ChatGPT is changing education and training.

NOTE: Our comments here are going to be fairly broad, covering some areas that may not be immediately applicable to the work contact center agents do. The main reason for this is that it’s very difficult to forecast how AI is going to change contact center work.

Our section on “creating study plans and curricula”, for example, might not be relevant to today’s contact center agents. But it could become important down the road if AI gives rise to more autonomous workflows in the future, in which case we expect that agents would be given more freedom to use AI and similar tools to learn the job on their own.

We pride ourselves on being forward-looking and forward-thinking here at Quiq, and we structure our content to reflect this.

Making a Socratic Tutor for Learning New Subjects

The Greek philosopher Socrates famously pioneered the instructional methodology which bears his name. Mostly, the Socratic method boils down to continuously asking targeted questions until areas of confusion emerge, at which point they’re vigorously investigated, usually in a small group setting.

A well-known illustration of this process is found in Plato’s Republic, which starts with an attempt to define “justice” and then expands into a much broader conversation about the best way to run a city and structure a social order.

ChatGPT can’t replace all of this on its own, of course, but with the right prompt engineering, it does a pretty good job. This method works best when paired with a primary source, such as a textbook, which will allow you to double-check ChatGPT’s questions and answers.
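As a sketch of the prompt engineering involved, a Socratic-tutor setup might look something like this. The wording is entirely our own and would need tuning for your subject matter:

```python
# A system prompt that steers the model toward questioning rather than lecturing.
SOCRATIC_SYSTEM_PROMPT = (
    "You are a Socratic tutor. Never state the answer outright. "
    "Ask one probing question at a time, wait for the student's reply, "
    "and use their answer to decide what to ask next. "
    "When the student reaches a correct conclusion, confirm it and summarize."
)

def tutor_opening(topic: str) -> str:
    """First user turn that kicks off a Socratic session on a given topic."""
    return f"I'd like to learn about {topic}. Please begin questioning me."

print(tutor_opening("our product's escalation policy"))
```

Pairing a session like this with the relevant chapter of a textbook or your internal documentation gives you the primary source needed to double-check the model’s questions and answers.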

Having it Explain Code or Technical Subjects

A related area in which people are successfully using ChatGPT is in having it walk you through a tricky bit of code or a technical concept like “inertia”.

The more basic and fundamental, the better. In our experience so far, ChatGPT has almost never failed in correctly explaining simple Python, Pandas, or Java. It did falter when asked to produce code that translates between different orbital reference frames, however, and it had no idea what to do when we asked it about a fairly recent advance in the frontiers of battery chemistry.

There are a few different reasons that we advise caution if you’re a contact center agent trying to understand some part of your product’s codebase. For one thing, if the product is written in a less-common language ChatGPT might not be able to help much.

But even more importantly, you need to be extremely careful about what you put into it. There have already been major incidents in which proprietary code and company secrets were leaked when developers pasted them into the ChatGPT interface, which is visible to the OpenAI team.

Conversely, if you’re managing teams of contact center agents, you should begin establishing a policy on the appropriate uses of ChatGPT in your contact center. If your product is open-source there’s (probably) nothing to worry about, but otherwise, you need to proactively instruct your agents on what they can and cannot use the tool to accomplish.

Rewriting Explanations for Different Skill Levels

Wired has a popular YouTube series called “5 Levels”, where experts in quantum computing or the blockchain will explain their subject at five different skill levels: “child”, “teen”, “college student”, “grad student”, and a fellow “expert.”

One thing that makes this compelling to beginners and pros alike is seeing the same idea explored across such varying contexts – seeing what gets emphasized or left out, or what emerges as you gradually climb up the ladder of complexity and sophistication.

This, too, is a place where ChatGPT shines. You can use it to provide explanations of concepts at different skill levels, which will ultimately improve your understanding of them.

For a contact center manager, this means that you can gradually introduce ideas to your agents, starting simply and then fleshing them out as the agents become more comfortable.

Creating Study Plans and Curricula

Stepping back a little bit, ChatGPT has been used to create entire curricula and even daily study plans for studying Spanish, computer science, medicine, and various other fields.

As we noted at the outset, we expect it will be a little while before contact center agents are using ChatGPT for this purpose, as most centers likely have robust training materials they like to use.

Nevertheless, we can project a future in which these materials are much more bare-bones, perhaps consisting of some general notes along with prompts that an agent-in-training can use to ask questions of a model trained on the company’s documentation, test themselves as they go, and gradually build skill.

Training Agents to Use ChatGPT

Now that we’ve covered some of the ways in which present and future contact center agents might use ChatGPT to boost their own on-the-job learning, let’s turn to the other issue we want to tackle today: how do you train agents to use ChatGPT?

Getting Set Up With ChatGPT (and its Plugins)

First, let’s talk about how you can start using ChatGPT.

This section may end up seeming a bit anticlimactic because, honestly, it’s pretty straightforward. Today, you can get access to ChatGPT by going to the signup page. There’s a free version and a paid version that’ll set you back a whopping $20/month (which is a pretty small price to pay for access to one of the most powerful artifacts the human race has ever produced, in our opinion.)

As things stand, the free tier gives you access to GPT-3.5, while the paid version gives you the choice to switch to GPT-4 if you want the more powerful foundation model.

A paid account also gives you access to the growing ecosystem of ChatGPT plugins, which you reach by switching over to the GPT-4 option in the model picker.

There are plugins that allow ChatGPT to browse the web, let you directly edit diagrams or talk with PDF documents, or let you offload certain kinds of computations to the Wolfram platform.

Contact center agents may or may not find any of these useful right now, but we predict there will be a lot more development in this space going forward, so it’s something managers should know about.

Best Practices for Combining Human and AI Efforts

People have long been fascinated and terrified by automation, but so far, machines have only ever augmented human labor. Knowing when and how to offload work to ChatGPT requires knowing what it’s good for.

Large language models learn how to predict the next token from their training data, and are therefore very good at developing rough drafts, outlines, and more routine prose. You’ll generally find it necessary to edit their output fairly heavily to account for context and to make it fit stylistically with the rest of your content.

As a manager, you’ll need to start thinking about a standard policy for using ChatGPT. Any factual claims made by the model, especially any references or citations, need to be checked very carefully.

Scenario-Based Training

In this same vein, you’ll want to distinguish between the different scenarios in which your agents will end up using generative AI. Using Quiq Compose or Quiq Suggest to format helpful replies, for example, involves different considerations than using generative AI to translate between languages.

Managers will probably want to sit down and brainstorm different scenarios and develop training materials for each one.

Ethical and Privacy Considerations

The rise of generative AI has sparked a much broader conversation about privacy, copyright, and intellectual property.

Much of this isn’t particularly relevant to contact center managers, but one thing you definitely should be paying attention to is privacy. Your agents should never put real customer data into ChatGPT; they should use aliases and fake data whenever they’re trying to resolve a particular issue.
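To make that policy concrete, here’s a minimal sketch of scrubbing obvious identifiers before a prompt ever leaves your systems. This is our own illustration, not a Quiq or OpenAI feature, and the two regexes cover only emails and phone numbers; a real deployment would need a much broader ruleset.

```python
import re

# Hypothetical helper (not a Quiq or OpenAI feature): mask obvious PII
# before text is pasted into ChatGPT. A real deployment would need a much
# broader ruleset covering names, order numbers, street addresses, etc.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

message = "Customer jane.doe@example.com called from 406-555-0123 about a refund."
print(redact(message))
# Customer [EMAIL] called from [PHONE] about a refund.
```

Even a crude filter like this, run before anything reaches a third-party model, meaningfully reduces the blast radius of a mistake.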

To quote fictional chemist and family man Walter White, we advise you to tread lightly here. Data breaches are a huge and ongoing problem, and they can do substantial damage to your brand.

ChatGPT and What it Means for Training Contact Center Agents

ChatGPT and related technologies are poised to change education and training. They can be used to help get agents up to speed or to work more efficiently, and they, in turn, require a certain amount of instruction to use safely.

These are all things that contact center managers need to worry about, but one thing you shouldn’t spend your time worrying about is the underlying technology. The Quiq conversational AI platform allows you to leverage the power of language models for contact centers, without looking at any code more complex than an API call. If the possibilities of this new frontier intrigue you, schedule a demo with us today!

Contact Center Managers: What Do LLMs Mean For You?

Whether it’s quantum computing, the blockchain, or generative AI, whenever a promising new technology emerges, forward-thinking people begin looking for a way to use it.

And this is a completely healthy response. It’s through innovation that the world moves forward, but great ideas don’t mean much if there aren’t people like contact center managers who use them to take their operations to the next level.

Today, we’re going to talk about what large language models (LLMs) like ChatGPT mean for contact centers. After briefly reviewing how LLMs work, we’ll discuss the ways they’re being used in contact centers, how those centers are changing as a result, and some things contact center managers need to look out for when utilizing generative AI.

What are Large Language Models?

As their name suggests, LLMs are large, they’re focused on language, and they’re machine-learning models.

It’s our view that the best way to tackle these three properties is in reverse order, so we’ll start with the fact that LLMs are enormous neural networks trained via self-supervised learning. These neural networks effectively learn a staggeringly complex function that captures the statistical properties of human language well enough for them to generate their own.

Speaking of human language, LLMs like ChatGPT are pre-trained generative models focused on learning from and creating text. This distinguishes them from other kinds of generative AI, which might be focused on images, videos, speech, music, and proteins (yes, really.)

Finally, LLMs are really big. As with other terms like “big data,” no one has a hard-and-fast rule for figuring out when you’ve gone from “language model” to “large language model” – but with billions of internal parameters, it’s safe to say that an LLM is a few orders of magnitude bigger than anything you’re likely to build outside of a world-class engineering team.

How can Large Language Models be Used in Contact Centers?

Since they’re so good at parsing and creating natural language, LLMs are an obvious choice for enterprises where there’s a lot of back-and-forth text exchanged, perhaps while, say, resolving issues or answering questions.

And for this reason, LLMs are already being used by contact center managers to make their agents more productive (more on this shortly).

To be more concrete, we turned up a few specific places where LLMs can be leveraged by contact center managers most effectively.

Answering questions: Even with world-class documentation, there will inevitably be customers who are having an issue they want help with. Though ChatGPT won’t be able to answer every such question, it can handle a lot of them, especially if you’ve fine-tuned it on your documentation.

Streamlining onboarding: For more or less the same reason, ChatGPT can help you onboard new hires. Employees learning the ropes will also be confused about parts of your technology and your process, and ChatGPT can help them find what they need more quickly.

Summarizing emails and articles: It might be possible for a team of five to be intimately familiar with what everyone else is doing, but any more than this and there will inevitably be things happening that are beyond their purview. By summarizing articles, tickets, email or Slack threads, etc., ChatGPT can help everyone stay reasonably up-to-date without having to devote hours every day to reading.

Issue prioritization: Not every customer question or complaint is equally important, and issues have to be prioritized before being handed off to contact center agents. ChatGPT can aid in this process, especially if it’s part of a broader machine-learning pipeline built for this kind of classification.
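As a toy illustration of where such a classification step could slot in, the keyword scorer below is a deliberately crude stand-in for a real model-based pipeline; the terms, weights, and tickets are all invented.

```python
# A deliberately crude stand-in for a real classifier: score each incoming
# message against weighted urgency terms, then rank the queue. The terms
# and weights are invented for illustration.
URGENT_TERMS = {"outage": 3, "charged twice": 3, "cancel": 2, "broken": 2, "refund": 1}

def priority(message: str) -> int:
    text = message.lower()
    return sum(weight for term, weight in URGENT_TERMS.items() if term in text)

tickets = [
    "How do I change my avatar?",
    "Your site is down, total outage on checkout!",
    "I was charged twice and want a refund.",
]
ranked = sorted(tickets, key=priority, reverse=True)
# ranked[0] is the double-charge ticket (score 4), then the outage (3).
```

In practice, an LLM would replace the keyword table, but the surrounding shape (score, sort, route to agents) stays the same.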

Translation: If you’re lucky enough to have a global audience, there will almost certainly be users who don’t have a firm grasp of English. Though there are tools like Google Translate that do a great job of handling translation tasks, ChatGPT often does an even better job.

What are Large Language Models for Customer Service?

Large language models are ideally suited for tasks that involve a great deal of working with text. Because contact center agents spend so much time answering questions and resolving customer issues, LLMs are a technology that can make them far more productive. ChatGPT excels at tasks like question answering, summarization, and language translation, which is why they’re already changing the way contact centers function.

How is Generative AI Changing Contact Centers?

The fear that advances in AI will lead to a decrease in employment among human workers has a long and storied pedigree. Still, thus far the march of technological progress has tended to increase the number (and remuneration) of jobs available on the market.

Far from rendering human analysts obsolete, personal computers are now a major and growing source of new work (though, we confess, much less of it is happening on typewriters than before).

Nevertheless, once people got a look at what ChatGPT can do there arose a fresh surge of worry over whether, this time, the robots were finally going to take all of our jobs.

Wanting to know how generative pre-trained language models have actually impacted the functioning of contact centers, Erik Brynjolfsson, Danielle Li, and Lindsey R. Raymond looked at data from some 5,000 customer support agents using it in their day-to-day work.

Their paper, “Generative AI at Work,” found that generative AI had led to a marked increase in productivity, especially among the newest, least-knowledgeable, and lowest-performing workers.

The authors advanced the remarkable hypothesis that this might stem from the fact that LLMs are good at internalizing and disseminating the hard-won tacit knowledge of the best workers. Those top performers didn’t get much out of generative AI, in other words, precisely because they already had what they needed to perform well; but some fraction of their skill – such as how to phrase responses delicately to avoid offending irate customers – was incorporated into the LLM, where it became more accessible to less-skilled workers than it was when locked away in the brains of their high-performing colleagues.

What’s more, the organizations studied also changed as a result. Employees (especially lower-skilled ones) were generally more satisfied, less prone to burnout, and less likely to leave. Turnover was reduced, and customers escalated calls to supervisors less frequently.

Now, we hasten to add that of course this is just one study, and we’re in the early days of the generative AI revolution. No one can say with certainty what the long-term impact will be. Still, these are extremely promising early results, and lend credence to the view that generative AI will do a lot to improve the way contact centers onboard new hires, resolve customer issues, and function overall.

What are the Dangers of Using ChatGPT for Customer Service?

We’ve been singing the praises of ChatGPT and talking about all the ways in which it’s helping contact center managers run a tighter ship.

But, as with every technological advance stretching clear back to the discovery of fire, there are downsides. To help you better use generative AI, we’ll spend the next few sections talking about some characteristic failure modes you should be looking out for.

Hallucinations

By now, it’s fairly common knowledge that ChatGPT will just make things up. This is a consequence of the way LLMs like ChatGPT are trained. Remember, the model doesn’t contain a little person inside of it that’s checking statements for accuracy; it’s just taking the tokens it has seen so far and predicting the tokens that will come next.

That means if you ask it for a list of book recommendations to study lepidoptery or the naval battles of the Civil War (we don’t know what you’re into), there’s a pretty good chance that the list it provides will contain a mix of real and fake books.

ChatGPT has been known to invent facts, people, papers (complete with citations), URLs, and plenty else.

If you’re going to have customers interacting with it, or you’re going to have your contact center agents relying on it in a substantial way, this is something you’ll need to be aware of.

Degraded Performance

ChatGPT is remarkably performant, but it’s still just a machine learning model, and machine learning models are known to suffer from model degradation.

This term refers to gradual or precipitous declines in model performance over time. There are technical reasons why this occurs, but from your perspective, you need to understand that the work has only begun once a model has been trained and put into production.

But you’re also not out of the woods if you’re accessing ChatGPT via an API, because you have just as little visibility into what’s happening on OpenAI’s engineering teams as the rest of us do.

If OpenAI releases an update, you might suddenly find that ChatGPT fails in unusual ways or trips over tasks it was handling well last week. You’ll need robust monitoring in place to catch these issues if they arise, as well as an engineering team able to address the root cause.
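That monitoring can start small: replay a fixed battery of prompts on a schedule and flag any answer that loses a required fact. In this sketch, `ask_model` is a stub we invented to stand in for whatever model endpoint you actually call; the harness shape is the point.

```python
# Minimal regression harness: replay fixed prompts and flag answers that
# lose a required fact. `ask_model` is a stub standing in for whatever
# model endpoint you actually call; swap in your real client there.
def ask_model(prompt: str) -> str:
    canned = {
        "What is your return window?": "Items can be returned within 30 days.",
        "Do you ship internationally?": "Yes, we ship to over 40 countries.",
    }
    return canned.get(prompt, "I'm not sure.")

# Each check pairs a prompt with a substring the answer must contain.
CHECKS = [
    ("What is your return window?", "30 days"),
    ("Do you ship internationally?", "ship"),
]

def run_checks() -> list:
    return [prompt for prompt, expected in CHECKS
            if expected not in ask_model(prompt)]

# An empty list means no regressions; anything else should alert someone.
```

Running a suite like this after every announced model update gives you an early warning long before customers notice.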

Model degradation often stems from issues with the underlying data. This means that if you’ve e.g. trained ChatGPT to answer questions you might have to assemble new data for it to train on, a process that takes time and money and should be budgeted for.

Harassment and Bias

You could argue that harassment, bias, and harmful language are a kind of degraded performance, but they’re distinct and damaging enough to warrant their own section.

When Microsoft first released Sydney, it was cartoonishly unhinged. It would lie, threaten, and manipulate users; in one case, it confessed both its love for a New York Times reporter and its desire to engineer dangerous viruses and ignite internecine arguments between people.

All this has gotten much better, of course, but the same behavior can manifest in subtler ways, especially if someone is deliberately trying to jailbreak a large language model.

Thanks to extensive public testing and iteration, the current versions of the technology are very good at remaining polite, avoiding stereotyping, etc. Nevertheless, we’re not aware of any way to positively assure that no bias, deceit, or nastiness will emerge from ChatGPT.

This is another place where you’ll have to carefully monitor your model’s output and make corrections as necessary.

Using LLMs in your Contact Center

If you’re running a contact center, you owe it to yourself to at least check out ChatGPT. Whether it makes sense for you will depend on your unique circumstances, but it’s a remarkable new technology that could help you make your agents more effective while reducing turnover.

Quiq offers a white-glove platform that makes it easy to leverage conversational AI. Schedule a demo with us to see how we can help you incorporate generative AI into your contact center today!

Before You Develop a Mobile App For Your Business—Read This

Remember when every business was coming out with an app? Your favorite clothing brand, that big retail chain, your neighborhood grocery store, and even your babysitter jumped on the bandwagon and claimed real estate on their customers’ mobile devices.

It probably made you think: Do we need an app for our business?

Despite the many benefits of an app, diving headfirst into development can drain your team’s time and resources without the guarantee of a return. Done poorly, it can even hinder your customer experience. Before you do any mobile app development, you need a plan.

This article will take you through some of the lessons learned from working with brands that deliver world-class experiences within apps and beyond.

Why do companies build apps?

Apps are powerful marketing tools for all kinds of businesses—and none more than e-commerce. Here are some of the top reasons why businesses build an app.

A place for loyal customers.

Almost by default, a mobile app is an exclusive space for your loyal customers. Think about the last time you downloaded an app. It probably wasn’t for a business you buy from once a year. It’s almost always a brand you follow closely or a service you use frequently.

Providing an app is basically like creating a direct line of communication with your best customers. You can create exclusive content, provide a better shopping experience, and unlock early access to products and services. Apps are great ways to turn good customers into great ones.

Mobile device real estate.

On average, Americans check their phones 344 times per day—or once every 4 minutes. And 88% of the time we spend on our phones is spent in apps, according to Business Insider. Having your brand logo as an icon on your customers’ home screens gives you invaluable real estate.

Push notifications.

When customers have push notifications turned on, it’s another way to speak directly to your customers. Push notifications are great engagement tools, and you can connect with customers using timely and personalized communications and ultimately drive in-app sales.

Beating out or keeping up with competitors.

Standing out from the competition is another reason many businesses build apps. And if your competitors are using apps to engage customers, it often compels businesses to do the same.


What are the drawbacks of building an app?

While mobile apps are still extremely popular, they have some major drawbacks for brands not ready to invest in them.

Phones are overcrowded.

Whereas building an app five years ago meant you stood out from the crowd, now you’re just one of many. People have an average of 80 apps on their phones, but they’re only using around nine a day.

Basically, that means mobile users are downloading apps and not using them on a regular basis. In fact, 25% of apps are used once and then never opened again, according to Statista.

Having an app doesn’t guarantee your customers’ attention or engagement—that’s still up to your marketing team.

There’s a big upfront investment.

Whether you enlist the help of your development team or outsource app creation, it’s a big lift. Getting a mobile app up and running takes significant resources, and while there may be a return on investment, it isn’t guaranteed.

When you’re already overwhelmed with your current development efforts, adding another microsite to manage could just make it worse.

You’ll double your marketing efforts.

More push notifications, more campaigns, more content. An app just means you have to do more to see an increase in revenue. While it could be a valuable asset, there are other, smaller steps you can take that will help you see the same revenue boost without the exponential effort.

Can you deliver rich customer experiences without an app?

Yes! But don’t think we’re anti-app. In fact, a lot of our clients create great apps that are sticky because they provide ongoing value to their customers. These clients are able to reach a whole set of people in their moment of need and build trust as they continue to look to the app for help.

However, many of the marketing and customer service goals that drive businesses to create an app can be achieved through rich business messaging. Here are a few examples.

Want to speak directly to your customers? Try outbound SMS.

Push notifications are extremely effective at connecting with customers, but it only takes a few taps to turn them off.

A similar communication method is outbound SMS messaging. You can personalize messages and deliver real-time communications via text messaging. Plus, with rich messaging capabilities, you can send interactive media like images, cards, emojis, and videos to enhance every conversation.

Want to engage with your customers? Use Google Business Messages.

Get customers from Google directly in communication with your customer service agents using Google Business Messages.

Customers can tap a message button right from Google search to connect with your team. (And since 92% of searches start with Google, there’s a good chance your customers will take advantage of this feature.)

Want to enhance your customer experience? Use Apple Messages for Business.

If you’re after a branded experience and want to meet user expectations, Apple Messages for Business delivers. Apple device users can simply tap the message icon from Maps, Siri, Safari, Spotlight, or your company’s website and instantly connect with your team.

You’ll deliver a rich messaging experience with your branding front and center. Your company name, logo, and colors will be featured in the messaging app, delivering a fully branded experience for your customers.

Want to be more social? Connect Quiq with social platforms.

Clients using Quiq are uniquely equipped with a conversational engagement platform that provides rich experiences to users across chat and business messaging channels.

This means that companies can provide content-rich, personalized experiences across SMS/text business messaging, web chat, Facebook, Twitter, Instagram, and WhatsApp.

Your brand can be on social platforms without working across them. Quiq gives your team access to all these messaging channels within one easy-to-use message center. So, unlike an app, adding more channels doesn’t necessarily increase the workload. It just gives your customers more ways to connect with you.

Should you consider business messaging over an app?

There’s no either/or choice here. Both can be part of a thriving marketing and customer service strategy. But if you’re looking for a way to engage your customers and haven’t tried business messaging—start there.

If you’re on the fence, consider this:

  1. You don’t have to build an app—you only have to implement business messaging.
  2. Customers don’t have to download and learn anything to connect with you. Business messaging is right there in communication channels they already know and love, like texting and social media.

Engage customers with or without an app.

The main goal of most apps is to help build long-term relationships with customers. Whether you choose to build an app or not, business messaging supports this goal by providing information, support, and help at the customer’s exact moment of need.

Quiq powers conversations between customers and companies across the most convenient and preferred engagement channels. With Quiq, you’ll have meaningful, timely, and personalized conversations with your customers that can be easily managed in a simplified UI.

Ready to see how business messaging can help you engage your customers with or without an app? Request a demo or try it for yourself today.

Agent Efficiency: How to Collect Better Metrics

Your contact center experience has a direct impact on your bottom line. A positive experience can nudge customers toward a purchase, encourage repeat business, or turn them into loyal brand advocates.

But a bad run-in with your contact center? That can turn them off of your business for life.

No matter your industry, customer service plays a vital role in financial success. While it’s easy to look at your contact center as an operational cost, it’s truly an investment in the future of your business.

To maximize your return on investment, your contact center must continually improve. That means tracking contact center effectiveness and agent efficiency is critical.

But before you make any changes, you need to understand how your customer service center currently operates. What’s working? What needs improvement? And what needs to be cut?

Let’s examine how contact centers can measure customer service performance and boost efficiency.

What metrics should you monitor?

The world of contact center metrics is overwhelming—to say the least. There are hundreds of data points to track to assess customer satisfaction, agent effectiveness, and call center success.

But to make meaningful improvements, you need to begin with a few basic metrics. Here are three to start with.

1. Response time.

Response time refers to how long, on average, it takes for a customer to reach an agent. Reducing the amount of time it takes to respond to customers can increase customer satisfaction and prevent customer abandonment.

Response time is a top factor for customer satisfaction, with 83% expecting to interact with someone immediately when they contact a company, according to Salesforce’s State of the Connected Customer report.

When using response time to measure agent efficiency, have different target goals set for different channels. For example, a customer calling in or using web chat will expect an immediate response, while an email may have a slightly longer turnaround. Typically, messaging channels like SMS text fall somewhere in between.

If you want to measure how often your customer service team meets your target response times, you can also track your service level. This metric is the percentage of messages and calls answered by customer service agents within your target time frame.
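The service-level calculation described above is simple enough to sketch. The response times and 60-second target below are made up for illustration.

```python
# Service level as described above: the share of contacts answered within
# a target response time. The times and target are made up for illustration.
def service_level(response_times, target_seconds):
    within = sum(1 for t in response_times if t <= target_seconds)
    return within / len(response_times) * 100

times = [12, 45, 30, 95, 20]          # seconds until an agent responded
level = service_level(times, 60)      # 4 of 5 within target -> 80.0
```

Computed per channel (with a separate target for chat, SMS, and email), this gives you one comparable number per queue.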

2. Agent occupancy.

Agent occupancy is the amount of time an agent spends actively occupied on a customer interaction. It’s a great way to quickly measure how busy your customer service team is.

An excessively low occupancy suggests you’ve hired more agents than contact volume demands. At the same time, an excessively high occupancy may lead to agent burnout and turnover, which have their own negative effects on efficiency.
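Definitions of occupancy vary by vendor, but one common formulation is time spent on customer interactions divided by total logged-in time; the shift numbers below are illustrative.

```python
# One common formulation of occupancy (definitions vary by vendor):
# time spent on interactions divided by total logged-in time.
def occupancy(handle_minutes, logged_in_minutes):
    return handle_minutes / logged_in_minutes * 100

# An agent who spent 390 minutes of an 8-hour (480-minute) shift on
# customer interactions has an occupancy of 81.25%.
rate = occupancy(390, 480)
```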

3. Customer satisfaction.

The most important contact center performance metric, customer satisfaction, should be your team’s main focus. Customer satisfaction, or CSAT, typically asks customers one question: How satisfied are you with your experience?

Customers respond using a numerical scale to rate their experience from very dissatisfied (0 or 1) to very satisfied (5). However, the range can vary based on your business’s preferences.

You can calculate CSAT scores using this formula:

CSAT = (number of satisfied customers ÷ total number of respondents) × 100
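Applied in code, using the common (but configurable) convention that ratings of 4 or 5 on a 5-point scale count as “satisfied”:

```python
# The CSAT formula above, with the common (but configurable) convention
# that ratings of 4 or 5 on a 5-point scale count as "satisfied."
def csat(ratings, satisfied_threshold=4):
    satisfied = sum(1 for r in ratings if r >= satisfied_threshold)
    return satisfied / len(ratings) * 100

scores = [5, 4, 3, 5, 2, 4, 5, 1]
score = csat(scores)  # 5 of 8 respondents rated 4+, so CSAT is 62.5
```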

CSAT is a great first metric to track: it’s a direct read on your agents’ effectiveness, and the survey is easy for customers to complete.

There are lots of options for measuring different aspects of customer satisfaction, like customer effort score and Net Promoter Score®. Whichever you choose, ensure you use it consistently for continuous customer input.

Bonus tip: Capturing customer feedback and agent performance data is easier with contact center software. Not only can the software help with customer relationship management, but it can facilitate customer surveys, track agent data, and more.


How to assess contact center metrics.

Once you’ve measured your current customer center operations, you can start assessing and taking action to improve performance and boost customer satisfaction. But looking at the data isn’t as easy as it seems. Here are some things to keep in mind as you start to base decisions on your numbers.

Figure out your reporting methods.

How will you gather this information? What timeframes will you measure? Who’s included in your measurements? These are just a few questions you need to answer before you can start analyzing your data.

Contact center software, or even more advanced conversational AI platforms like Quiq, can help you track metrics and even put together reports that are ready for your management team to analyze and take action on.

Analyze data over time.

When you’re just starting out, it can be hard to contextualize your data. You need benchmarks to know whether your CSAT rating or occupancy rates are good or bad. While you can start with industry benchmarks, the most effective way to analyze data is to measure it against yourself over periods of time.

It takes months or even years for trends to reveal themselves. Start with comparative measurements and then work your way up. Month-over-month data or even quarter-over-quarter can give you small windows into what’s working and what’s not working. Just leave the big department-wide changes until you’ve collected enough data for it to be meaningful.

Don’t forget about context.

You can’t measure contact center metrics in a silo. Make sure you look at what’s going on throughout your organization and in the industry as a whole before making any changes. For example, a drop in customer response time might have to do with the influx of messages caused by a faulty product.

While collecting the data is easy, analyzing it and drawing conclusions is much more difficult. Keep the whole picture in mind when making any important decisions.

How to improve call center agent efficiency.

Now that you have the numbers, you can start making changes to improve your agent efficiency. Start with these tips.

Make incremental changes.

Don’t be tempted to make wide-reaching changes across your entire contact center team when you’re not happy with the data. Select specific metrics to target and make incremental changes that move the needle in the right direction.

For example, if your agent occupancy rates are high, don’t rush to add new members to your team. Instead, see what improvements you can make to agent efficiency. Maybe there’s some call center software you can invest in that’ll improve call turnover. Or perhaps all your team needs is some additional training on how to speed up their customer interactions. No matter what you do, track your changes.

Streamline backend processes.

Agents can’t perform if they’re constantly searching for answers on slow intranets or working with outdated information. Time spent fighting with old technology is time not spent serving your contact center customers.

Now’s the perfect time to consider a conversational platform that allows your customers to reach out using the preferred channel but still keeps the backend organized and efficient for your team.

Agents can bounce back and forth between messaging channels without losing track of conversations. Customers get to chat with your brand how they want, where they want, and your team gets to preserve the experience and deliver snag-free customer service.

Improve agent efficiency with Quiq’s Conversational AI Platform

If you want to improve your contact center’s efficiency and customer satisfaction ratings, Quiq’s conversational customer engagement software is your new best friend.

Quiq’s software enables agents to manage multiple conversations simultaneously and message customers across channels, including text and web chat. By giving customers more options for engaging with customer service, Quiq reduces call volume and allows contact center agents to focus on the conversations with the highest priority.

The Rise of Conversational AI: Why Businesses Are Embracing It

Movies may have twisted our expectations of artificial intelligence—either giving us extremely high expectations or making us think it’s ready to wipe out humanity.

But the reality isn’t on those levels. In fact, you’re already using AI in your daily life—but it’s so ingrained in your technology you probably don’t even notice. Netflix and Spotify both use AI to personalize your content recommendations. Siri, Alexa, and Google Assistant use it as well.

Conversational AI, like what Quiq uses to power our chatbots, takes artificial intelligence to the next level. See what it is and how you can use it in your business.

What is conversational AI?

Conversational artificial intelligence (AI) is a collection of technologies that create a human-like conversational experience. It combines natural language processing (NLP), machine learning, and other technologies to enable streamlined, natural conversations. These can be used in many applications, like chatbots and voice assistants (such as Siri and Alexa). The most common use case for conversational AI in the business-to-customer world is an AI chatbot messaging experience.

Unlike rule-based chatbots, those powered by conversational AI generate responses and adapt to user behavior over time. Rule-based chatbots were also limited to what you put in them—meaning if someone phrased a question differently than you wrote it (or used slang/colloquialisms/etc.), it wouldn’t understand the question. Conversational AI can also help chatbots understand more complex questions.

Putting technical terms in context.

Companies throw around a lot of technical terms when it comes to artificial intelligence, so here are what they mean and how they’re used to improve your business.

Rule-based chatbots: Earlier chatbot iterations (and some current low-cost versions) work mainly through pre-defined rules. Your business (or service provider) writes specific guidelines for the chatbot to follow. For example, when a customer says “Hi,” the chatbot responds, “Hello, how may I help you?”

Another example is when a customer asks about a return. The chatbot is programmed to give a specific response, like, “Here’s a link to the return policy.”

However, the problem with rule-based chatbots is that they can be limiting. A rule-based bot only knows how to handle situations based on the information programmed into it. So if someone says, “I don’t like this product, what can I do?” and you haven’t planned for that question, the chatbot won’t have a response.
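That limitation is easy to see in miniature. This toy bot, with rules invented for illustration, only answers the exact phrasings it was given:

```python
# The limitation in miniature: a rule-based bot only answers the exact
# phrasings it was programmed with (rules invented for illustration).
RULES = {
    "hi": "Hello, how may I help you?",
    "how do i return an item?": "Here's a link to the return policy.",
}

def rule_based_reply(message: str) -> str:
    return RULES.get(message.strip().lower(), "Sorry, I don't understand.")

rule_based_reply("Hi")  # matches a rule
rule_based_reply("I don't like this product, what can I do?")  # falls through
```

The unplanned question drops straight to the fallback message, which is exactly the failure mode described above.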

Machine learning: Machine learning is a way to combat the problem posed above. Instead of giving the chatbot specific parameters complete with pre-written questions and answers, machine learning helps chatbots make decisions based on the information provided.

Machine learning helps chatbots adapt over time based on customer conversations. Instead of giving the bot specific ways to answer specific questions, you show it the basic rules, and it crafts its own response. Plus, since it means your chatbot is always learning, it gets better the longer you use it.

Natural language processing: As humans and speakers of the English language, we know that there are different ways to ask every question. For example, a customer who wants to know when an item is back in stock may ask, “When is X back in stock?” or they might say, “When will you get X back in?” or even, “When are you restocking X?” Those three questions all mean the same thing, and as humans, we naturally understand that. But a rules-based bot must be told that those mean the same things, or they might not understand it.

Natural language processing (NLP) uses AI technology to help chatbots understand that those questions are all asking the same thing. It can also determine what information it needs to answer the question, like color, size, etc.
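Real NLP engines are far more capable than this, but a toy sketch captures the core idea: many phrasings resolving to a single intent. The intent names and keyword sets are illustrative:

```python
import re

# A toy intent detector: maps many phrasings onto one intent by scoring
# keyword overlap. Real NLP models are much more sophisticated.
INTENT_KEYWORDS = {
    "restock_inquiry": {"back", "stock", "restock", "restocking"},
    "return_request": {"return", "refund", "exchange"},
}

def detect_intent(message: str) -> str:
    words = set(re.findall(r"[a-z]+", message.lower()))
    scores = {intent: len(words & kws) for intent, kws in INTENT_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

# Three phrasings of the same question all resolve to one intent:
for q in ["When is X back in stock?",
          "When will you get X back in?",
          "When are you restocking X?"]:
    print(detect_intent(q))  # restock_inquiry each time
```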

NLP also helps chatbots answer questions in a more human-like way. If you want your chatbot to sound more human (and you should), then find one that uses NLP.

Web-based SDK: A web-based SDK (that’s a software development kit for non-developers) is a set of tools and resources developers use to integrate programs (in this case, chatbots) into websites and web-based applications.

What does this mean for your chatbot? Context. When a user says, “I need help with my order,” the chatbot can use NLP to identify “help” and “order.” Then it can look back at previous conversations, pull the customer’s order history, and more—if the data is there.

Contextual conversations are everything in customer service—so this is a big factor in building a successful chatbot using conversational AI. In fact, 70% of customers expect anyone they’re speaking with to have the full context. With a web-based SDK, your chatbot can do that too.
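Here’s a rough sketch of that kind of context lookup. The `order_history` structure and names stand in for data a web-based SDK could surface; they’re illustrative, not a real SDK API:

```python
# A rough sketch of context-aware replies. `order_history` stands in for
# customer data a web-based SDK could surface from your systems.
order_history = {
    "customer_42": [{"order_id": "A1001", "status": "shipped"}],
}

def contextual_reply(customer_id: str, message: str) -> str:
    if "order" in message.lower():
        orders = order_history.get(customer_id, [])
        if orders:
            latest = orders[-1]
            return f"I see your order {latest['order_id']} is {latest['status']}. How can I help?"
    # Without usable context, fall back to a generic opener.
    return "How can I help you today?"

print(contextual_reply("customer_42", "I need help with my order"))
```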

The benefits of conversational AI.

Using chatbots with conversational AI provides benefits across your business, but the clearest wins are in your contact center. Here are three ways chatbots improve your customer service.

24/7 customer support.

Your customer service agents need to sleep, but your conversational AI chatbot doesn’t. A chatbot can answer questions and contain customer issues while your contact center is closed. Any issues they can’t solve, they can pass along to your agents the next day. Not only does that give your customers 24/7 service, but your agents will have less of a backlog when they return to work.

Faster response times.

When your agents are inundated with customers, an AI chatbot can pick up the slack. Send your chatbot in to greet customers immediately, let them know the wait time, or even start collecting information so your agents can get to the root of the problem faster. Chatbots powered by AI can also answer questions and solve easy customer issues, skipping human agents altogether.


More present customer service agents.

Chatbots can handle low-level customer queries and give agents the time and space to handle more complex issues. Not only will this result in better customer service, but agents will be happier and less stressed overall.

Plus, chatbots can scale during your busy seasons. You’ll save on costs since you won’t have to hire more agents, and the agents you have won’t be overworked.

How to make the most of AI technology.

Unfortunately, you can’t just plug and play with conversational AI and expect instant results. Just like any other technology, it takes prep work and thoughtful implementation to get it right—plus lots of iterations.

Use these tips to make the most of AI technology:

Decide on your AI goals.

How are you planning on using conversational AI? Will it be for marketing? Customer service? All of the above? Think about what your main goals are and use that information to select the right AI partner.

Choose the right conversational AI platform.

Once you’ve decided how you want to use conversational AI, select the right partner to help you get there. Consider aspects like ease of use, customization, scalability, and budget.

Design your chatbot interactions.

Even with artificial intelligence, you still have to put the work in upfront. What you do and how you do it will vary greatly depending on which platform you go with. Design your chatbot conversations with these things in mind:

  • Your brand voice
  • Personalization
  • Customer service best practices
  • Logical conversation flows
  • Concise messages

Build a partnership between agents and chatbots.

Don’t launch the chatbot independently of your customer service agents. Include them in the training and launch, and start to build a working relationship between the two. Agents and chatbots can work together on customer issues, both popping in and out of the conversation seamlessly. For example, a chatbot can collect information from the customer upfront and pass it to the agent to solve the issue. Then, when the agent is done, they can bring the chatbot back in to deliver a customer survey.
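That relay between bot and agent can be pictured as a simple set of stages. This sketch is purely illustrative; the stage names and the `resolved` flag are assumptions, not a real platform API:

```python
from enum import Enum, auto

# An illustrative sketch of the bot/agent relay: the bot collects info,
# hands off to a human agent, then steps back in to deliver the survey.
class Stage(Enum):
    BOT_INTAKE = auto()   # bot gathers details up front
    AGENT = auto()        # human agent works the issue
    BOT_SURVEY = auto()   # bot returns to run a customer survey
    DONE = auto()

def next_stage(stage: Stage, resolved: bool = False) -> Stage:
    if stage is Stage.BOT_INTAKE:
        return Stage.AGENT
    if stage is Stage.AGENT:
        # The agent keeps the conversation until the issue is resolved.
        return Stage.BOT_SURVEY if resolved else Stage.AGENT
    return Stage.DONE
```

Here, `resolved` stands in for whatever signal your platform uses to mark the agent’s work complete.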

Test and refine.

Sometimes, you don’t know what you don’t know until it happens. Test your chatbot before it launches, but don’t stop there. Keep refining your conversations even after you’ve launched.

What does the future hold for conversational AI?

There are many exciting things happening in AI right now, and we’re only beginning to discover what it can really do.

The big prediction? For now, conversational AI will keep getting better at what it’s already doing. More human-like interactions, better problem-solving, and more in-depth analysis.

In fact, 75% of customers believe AI will become more natural and human-like over time. Gartner is also predicting big things for conversational AI, saying by 2026, conversational AI deployments within contact centers will reduce agent labor costs by $80 billion.

Why should you jump in now when bigger things are coming? It’s simple. You’ll learn to master conversational AI tools ahead of your competitors and earn an early competitive advantage.

How Quiq does conversational AI.

To ensure you give your customers the best experience, Quiq powers our entire platform with conversational AI. Here are a few stand-out ways Quiq uniquely improves your customer service with conversational AI.

Design customized chatbot conversations.

Create chatbot conversations so smooth and intuitive that they feel like talking to a real person. Using the best conversational AI techniques, Quiq’s chatbot gives customers quick and intelligent responses for an up-leveled customer experience.

Help your agents respond to customers faster.

Make your agents more efficient with Quiq Compose. Quiq Compose uses conversational AI to suggest responses to customer questions. How? It uses information from similar conversations in the past to craft the best response.

Empower agent performance.

Tools like our Adaptive Response Timer prioritize conversations based on how fast or slow customers respond. The platform also analyzes customer sentiment to give extra attention to customers who need it.

This is just the beginning.

This is just a taste of what conversational AI can do. See how Quiq can apply the latest technology to your contact center to help you deliver exceptional customer service.

Contact Us

Does Your Chatbot Sound Robotic? 7 Ways to Fix It

Does your chatbot sound like a robot?

Okay, chatbots are robots (hence the name), but they don’t have to sound like something out of a 70s sci-fi flick.

Chatbots have come a long way and are getting better at understanding and mimicking human interactions. According to Zendesk’s CX Trends 2023 report, 65% of leaders believe the AI/bots they use are becoming more natural and human-like.

It turns out customers agree. Sixty-nine percent of customers who seek support find themselves asking bots a wider range of questions than before. But companies are still struggling to keep up with customers’ AI expectations.

Seventy-five percent of customers think AI should be able to provide the same level of service as human agents, and 75% expect AI interactions will become more natural and human-like over time.

So if your bot is still sounding a little wooden (or metallic), your customer satisfaction could be taking a hit. Here are some ways to make your chatbot sound more human.

But first, should chatbots sound human?

We think so. Yet, there’s a difference between making your bot sound human and pretending your bot is a human. No matter how advanced your chatbot is, we always recommend full transparency to our customers.

While chatbots can be as much a part of your team as your human agents, there are definitely limits to what they can do. If you don’t introduce your chatbot as such, customers might feel like you’re trying to trick them. And in today’s landscape, customer trust is everything.

Now back to the fun stuff.

1. Name your chatbot.

Amazon has Alexa, Apple has Siri, and Iron Man has Jarvis (and Friday). Chatbots and AI are instantly more relatable when you stop calling them bots.


We worked with Daily Harvest to develop their chatbot, aptly named Sage. Sage fields common questions and gathers data for conversations with human agents. Sage also helps minimize the stress on the Daily Harvest customer service team by containing 60% of conversations. While containment (where customers’ conversations aren’t transferred to a human agent) isn’t the goal, it’s good to know customers are getting enough valuable information from Sage to resolve their questions on their own.

2. Consider putting a face to your AI.

Admittedly, this tip is controversial. Do Alexa and Siri have faces? No, that’d be weird. But they’re associated with objects already. Since your chatbot lives on the screen, giving it a face isn’t a bad idea.

Consider giving your bot a friendly avatar. It doesn’t have to be a literal face. It can be an icon, an inanimate object, an animal, or whatever represents your brand. Go with your gut on this one—it can really go either way.


3. Give your chatbot some personality.

What’s the first thing human agents do when they start a new chat? They introduce themselves! Your chatbot should do the same. In its first message, have your chatbot introduce itself, say it’s a chatbot/virtual assistant/virtual agent/etc., and ask how it can help.

Beyond introductions, include some casual language in your chatbot’s script. Instead of “What’s your question?” say, “How can I help you today?”

Remember that your chatbot is an extension of your brand, so its personality should reflect it. If your brand is quirky and whimsical, infuse that language into your chatbot.

4. Teach your chatbot empathy.

Typically, low-tech chatbots can only repeat preprogrammed phrases. However, humans adapt to mood, personality, and behavior. To make your chatbot really feel more friendly and human-like, it needs to be able to do the same.

Look for a chatbot that interprets questions through natural language processing (NLP) to determine how to answer them. NLP allows bots to pick up on human speech patterns in a much more sophisticated way.

You can also add empathetic language to various points in the chatbot script. Phrases like “I understand” and “I’m sorry to hear that” go a long way in soothing customer frustrations.

5. Give your chatbot context.

Start with the customer’s name. Whether the customer already has a profile or you program the chatbot to ask for it, have your chatbot use the customer’s name in conversation. But don’t stop there.

Context makes conversations go a lot smoother, whether with a chatbot or with a human agent. Program your bot to pull in context from your customer’s web behavior into the conversation. For example, if a customer has been looking at Hawaiian vacations, have the bot ask if they need help with their trip to the islands.

Context will make the conversation flow more naturally and give your customers a better overall experience.

Contact Us

6. Make your chatbot and human agents a team.

The human-like quality of understanding shouldn’t be underestimated in a chatbot. Having a bot that understands what a customer is asking—and knows when to bring in reinforcements—is key to a great customer experience.

Instead of trying to replace your human agents, make your chatbot and agents a team. Jewelry retailer Blue Nile is a great example of how chatbots and humans can work together to elevate the customer experience.

Blue Nile’s initial chatbot attempt routed customers all across the company without considering what they were asking. Customers looking to buy were sent to service reps instead of sales, and vice versa.

So the dazzling diamond dealer worked with Quiq to create a much more intuitive and human-like chatbot. A better chat experience led to 70% more sales interactions and a 35% conversion rate.

7. Combine logic and rules for a more responsive experience.

Low-tech chatbots might ask you to write responses for a specific chain of events. For example: a customer mentions a return, the chatbot pulls up return instructions, and the problem is resolved. That’s chatbot logic.

But one thing a human has that many chatbots lack is the ability to pick up on cues and respond accordingly.

With AI-enhanced chatbots, you can also define specialty rules for your chatbot to follow. Going back to our return example, most returns are simple and straightforward. Sometimes, however, a customer is extremely unhappy with the product or service and needs extra attention. AI chatbots, like Quiq’s, can use sentiment analysis to pick up on customer behavior, identify an unhappy customer (or whichever other sentiment you choose), and reroute the conversation to a human agent.

This way, you don’t have a cheery chatbot irritating your already irate customer.
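As a rough illustration of that routing decision (real platforms use trained sentiment models, not keyword counts, so treat this purely as a sketch):

```python
import re

# A toy routing rule: if a message scores as negative, skip the bot and
# send the customer straight to a human agent. The word list is illustrative.
NEGATIVE_WORDS = {"terrible", "awful", "angry", "unacceptable", "worst", "furious"}

def route(message: str) -> str:
    words = set(re.findall(r"[a-z']+", message.lower()))
    # Unhappy customers go straight to a human agent.
    return "human_agent" if words & NEGATIVE_WORDS else "chatbot"

print(route("I'd like to start a return"))             # chatbot
print(route("This is unacceptable, the worst ever!"))  # human_agent
```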

Embrace AI to humanize your chatbot.

Humanizing your chatbot comes down to two factors:

  1. A dedicated effort to give your chatbot personality
  2. The AI technology to make it happen

With both those components, you can make your chatbot sound more human and embrace it as part of your customer service team.

Request A Demo