
7 Reasons Why Customers Want to Text You

If you’re old enough, you may remember a time when everyone wasn’t on their phones every waking moment of their lives. We’ve left those days long behind, and surveys indicate that many of us check our phones more than 300 times daily.

This isn’t surprising, given that we’re using our phones to buy tickets, order groceries, argue with relatives (and everyone else), talk to coworkers, and even find a date.

But one thing we’re not doing very often with phones is using them to make calls. This is where texting comes in. In 2020, mobile business messaging traffic climbed to 2.7 trillion, up a full 10% from just one year prior. More directly relevant to CX directors, text-based customer service requests increased by more than a quarter in 2021 alone.

What does this ultimately mean for your team? Simple: customers prefer texting.

The emergence of this crucial channel is one of the major recent trends in customer communication, and this article will enumerate the 7 biggest reasons why.

1. Customers Want Options

Too many options may lead to feeling overwhelmed—but that doesn’t stop customers from wanting them anyway.

There are countless ways to reach out to businesses—including phone, email, texting, and talking to AI assistants—but people generally pick the method of communication that fits their preferences.

Make sure to offer texting as one of those options; it’s become extremely popular, and may become more popular still as the years go on.

2. Your Customers are Busy. Texting is Asynchronous.

The pace of life has quickened these days. Your present and future customers will abhor the thought of wasting a single moment, and this is one of the biggest reasons customers prefer texting.

Suppose a customer starts a return or warranty flow at their office, but won’t be able to take a picture of the damaged item until they get home. With text messaging, this becomes one seamless, continuous conversation that can be picked up or paused as needed, making it a great way to resolve issues that are important but can be handled in multiple independent steps. And with the right vendor, agents don’t have to close conversations while waiting for customer responses.

For its part, email support still feels like a more formal medium. People spend longer drafting messages, which means getting help will take more of their day, and no one enjoys not knowing when they’ll receive a reply.

And even though generative AI has begun to change this picture for the better, it’s still the case that a text message tends to fit better into your customers’ lives, allowing you to reach them where they are.

3. Texting Isn’t Limited to Millennials (or Gen Z). It’s Generation-Agnostic.

Before going further, it’s worth busting the myth that all the benefits of texting for customer service appeal to younger crowds. That may have been the case when The Matrix was released (25 years ago–ouch!), but it’s not the case any longer.

No, our parents and grandparents can text along with the best of us, and some surveys indicate that 90% of adults over 50 use their smartphones to send instant messages, texts, or emails (a number that’s likely gone up since).

Part of what’s driving this trend in customer communication is that texting is both ubiquitous (almost everyone uses text messaging to communicate with someone), and it’s also a less technical way for people of any age to reach customer service.

Other messaging channels may require downloading an app or using unfamiliar social media. Text messaging, however, is ready to go on everyone’s device—even if they remember the Nixon administration.

4. Customers Want Better Experiences.

One of the main goals of customer service is to give patrons an experience they’ll remember, and continue to want to pay for.

And pay they do; research conducted by PwC found that almost three-quarters of people said customer experience is one of the main things they consider when making a purchase.

Of course, text messaging is hardly the only thing that goes into creating a superlative customer experience, but when done correctly, it’ll probably get you further than you think.

Some of the benefits of texting for customer service have already been discussed (it’s quicker, it’s more convenient), but you may not realize that texting is a medium that can complete your overall customer experience strategy. Everything from sending support messages and getting order updates to checking out can be handled right within SMS text messaging.

As we alluded to earlier, many think email is a fairly formal way of communicating, while texting is almost always more conversational. If it doesn’t feel contrived, the greater warmth and friendliness that comes through in texting can help build trust with customers, which is becoming all the more important in an age of data breaches and deepfakes.

5. Customers Want More than Words: They Want Rich Messaging, Too.

Building on this theme, one of the other major trends in customer communication is a move away from stiff, formal diction and towards talking to people more like they’re friends or acquaintances (with emojis, GIFs, the works).

When you think about business text messages, “Haha, yeah, sounds great! 😂” may not be the first thing that comes to mind. But perhaps a similar conversational tone should be, especially if using text messaging as a channel is a core part of your CX strategy.

Much of this is powered by “rich messaging,” which was developed to support emojis, in-message buttons and cards, audio messages, high-quality pictures, videos, and all the other staples of modern communication. Rich messaging can help you connect to your customers in a newly engaging way, and also helps them complete tasks quickly because they can use buttons or product cards to quickly input their information. This not only makes their lives easier, it increases the fidelity with which brands can communicate with customers.
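
To make the idea concrete, here is a minimal sketch of how a rich message with buttons might be assembled before being handed to a messaging provider. The field names (`type`, `quick_replies`, `payload`, and so on) are illustrative assumptions, not any specific vendor’s API.

```python
# Sketch: building a rich "card" message with tappable quick-reply buttons.
# The payload structure here is hypothetical, loosely modeled on rich
# messaging formats; real channels each define their own schema.

def build_product_card(title, image_url, choices):
    """Assemble a rich card message with one quick-reply button per choice."""
    return {
        "type": "card",
        "title": title,
        "image_url": image_url,
        # Quick replies let the customer answer with one tap instead of typing.
        "quick_replies": [
            {"label": c, "payload": c.lower().replace(" ", "_")} for c in choices
        ],
    }

card = build_product_card(
    title="Trailblazer Hiking Boot",
    image_url="https://example.com/boot.jpg",
    choices=["Check sizing", "Track my order", "Talk to an agent"],
)
```

The point of the structure is the one made above: a tap on a button carries a machine-readable `payload`, so the customer inputs information without typing anything.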

Learn more about why your business should use rich messaging here.

6. They Just Prefer Texting.

The previous two sections discussed how texting leads to a better customer experience and how rich messaging increases both the information density and the conversational feel of communicating over text.

Taken together, these facts point to a broader one, which is that, well, many customers just prefer texting.

We’re all guilty of hitting ignore on a well-intentioned phone call or putting off making an appointment when we have to dial a number. More and more, people perceive a phone call as invasive and time-consuming.

This contention is backed up by data: 75% of millennials avoid phone calls because they’re time-consuming, and 81% experience anxiety before summoning up the courage to make a call. Millennials have a reputation for being phone-averse, but it doesn’t stop with them.

One of the basic issues with phone calls is that they’re unpredictable. A customer service call can take a few minutes or half an hour, so customers don’t know how to prepare.

Business texts are quick and efficient, and they can happen on the customer’s terms.

This is reflected in the fact that texts are more likely to be opened and responded to, by a large margin. The vast majority of texts (95%) receive replies within three minutes, and the average open rate is close to 100% (and just shy of five times better than similar metrics for emails).

To sum up (and reiterate): customers prefer texting, so your business should seriously consider adding it to your channel mix if you haven’t already.

7. Customers Want Texting to be Responsive and Personalized. Generative AI Makes That Easier On Businesses.

We’d be remiss if we didn’t include a section on the remarkable advances of large language models and what they mean for broader trends in customer communication—particularly the generative components.

This is a huge topic, but what they mean for customers is the availability of responses that are contextual, personalized, and always on.

The formulaic, stilted chatbots of yesteryear have been replaced by dynamic models that can use techniques like retrieval-augmented generation to reply in ways tailored to each customer. They might generate answers from a knowledge base, for example, or provide personalized recommendations from a product catalog.

Moreover, language models don’t take holidays, breaks, or time off, and can therefore reply whenever and wherever they’re needed.

This doesn’t mean they’re a full-on replacement for human contact center agents, of course. But they’re a remarkable supplement. In addition, when you partner with a conversational AI platform for CX, you can utilize AI-driven benefits of texting for customer service with a minimum of hassle!

When It Comes Down to It: Yes, Your Customers Still Want to Text You.

Whether you’re looking at hard data or just vibes, it’s clear that more and more customers prefer texting. It’s easy, convenient, and fits better into a busy life while also affording the opportunity for personalization that drives higher levels of customer satisfaction.

If you’d like to learn more about how Quiq supports enterprise CX companies that want to make texting a centerpiece of their customer outreach strategy, learn more here.

How To Encourage More Customers To Use your Live Chat Service

When customer experience directors float the idea of investing more heavily in live chat for customer service, it’s not uncommon for them to get pushback. One of the biggest motivations for such reticence is uncertainty over whether anyone will actually want to use such support channels—and whether investing in them will ultimately prove worth it.

An additional headwind comes from the fact that many CX directors are laboring under the misapprehension that they need an elaborate plan to push customers into a new channel. But one thing we consistently hear from our enterprise customers is that it’s surprising how naturally customers start using a new channel when they realize it exists. To borrow a famous phrase from Field of Dreams, “If you build it, they will come.” Or, to paraphrase a bit, “If you build it (and make it easy for them to engage with you), they will come.” You don’t have to create a process that diverts them to the new channel.

The article below fleshes out and defends this claim. We’ll first sketch the big-picture case for why live chat with customers remains as important as ever, then finish with some tips for boosting customer engagement with your live chat service.

Why is Live Chat Important for Contact Centers?

Before we talk about how to get people to use your live chat for customer service features, let’s discuss why such channels continue to be an important factor in the success of customer experience companies.

The simplest way to do this is with data: 60% of customers indicate that they’re more likely to visit a website again if it has live chat for customer service, and a few more (63%) say that a live chat widget will increase their willingness to make a purchase.

But that still leaves the question of how live chat stacks up against other possible communication channels. Well, nearly three-quarters (73%) are more comfortable using live chat for customer service issues than email or phone—and a high fraction (61%) are especially annoyed by the prospect of being put on hold.

If this isn’t enough, there are customer satisfaction (CSAT) scores to think about as well. This is perhaps the strongest data point in support of customer live chat, as 87% of customers give a positive rating to their live chat conversations.

Agents also prefer live chat over the phone because regularly dealing with angry and upset customers via phone can take an emotional toll. Live chat contributes to agent job retention—a big, expensive issue that many CX leaders are constantly trying to grapple with.

So, the data is clear and it makes sense for all the reasons we’ve discussed: Live chat for customer service shows every indication of being a worthwhile communication channel, both now and in the future.

6 Tips for Encouraging Customers to Use Live Chat

With that having been said, the next few sections will detail some of the most promising strategies for getting more of your customers to use your live chat features.

1. Make Sure People Know You have Live Chat

The first (and probably easiest) way to get more customers to use your live chat is to take every step possible to make sure they know it’s something you offer. Above, we argued that little special effort is required to get potential customers to use a new channel, but that shouldn’t be taken to mean that there’s no use in broadcasting its existence.

You can get a lot of mileage out of promoting live chat through your normal marketing channels: a mention on your support page, in your social feeds, and at the bottom of your order confirmation emails, for example. In the rest of this section, we’ll outline a few other low-cost ways to boost engagement with live chat for customer service.

First, use your IVR to move callers from phone to messaging. You can also mention that you support live chat for customer service during the phone hold message. We noted above that people tend to hate being put on hold. You can use that to your advantage by offering them the more attractive alternative of hopping onto a digital messaging channel instead—including WhatsApp, Apple Messages for Business, and SMS. For example, this might sound as simple as: “Press 2 to chat with an agent over SMS text messaging, or get faster support over live web chat on our website.”
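
The routing behind a prompt like that is simple in principle. Here is a toy sketch of keypress-to-channel deflection; the menu options, action names, and prompts are hypothetical examples, not any real IVR product’s configuration.

```python
# Sketch: IVR keypress routing that deflects callers to messaging channels.
# Action names and prompts are made up for illustration.

IVR_MENU = {
    "1": {"action": "hold_for_agent",
          "prompt": "Please hold for the next available agent."},
    "2": {"action": "deflect_to_sms",
          "prompt": "We'll text you now so you can continue over SMS."},
    "3": {"action": "deflect_to_webchat",
          "prompt": "We'll email you a link to live web chat on our site."},
}

def route_keypress(digit):
    """Return the action and prompt for a caller's keypress, defaulting to hold."""
    return IVR_MENU.get(digit, IVR_MENU["1"])
```

An unrecognized keypress falls back to the hold queue, so deflection never strands a caller who mis-dials.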

From your perspective, an added benefit is that your agents can easily shuffle between several different live chat conversations, whereas that isn’t possible on the phone. This means faster resolutions, a higher volume of questions answered, and more satisfaction all the way around.

Similarly, include plenty of links to live chat when communicating with your customers. After they make a purchase, for example, you could include a message suggesting they utilize live chat to resolve any questions they have. If you’re sending them other emails, that’s a good place to highlight live chat as well. Don’t neglect hero pages and product pages; being able to answer questions while talking directly to current and future buyers is a great way to boost sales.

BODi® (formerly Beachbody) is a California-based nutrition and fitness company that pursued exactly this strategy when they ditched their older menu-based support system in favor of “Ask BODi AI.”


This eventually became a cost-effective support channel that was able to answer a variety of free-form questions from customers, leading to happier buyers and better financial performance.

2. Minimize the Hassle of Using Live Chat

One of the better ways of boosting engagement with any feature, including live chat, is to make it as pain-free as possible.

Take contact forms, for example, which can speed up time to resolution by organizing all the basic information a service agent needs. This is great when a customer has a complex issue, but if they only have a quick question, filling out even a simple contact form may be onerous enough to prevent them from asking it.

There’s a bit of a balancing act here, but, in general, the fewer fields a contact form has, the more likely someone is to fill it out.

The emergence of large language models (LLMs) has made it possible to use an AI assistant to collect information about customers’ specific orders or requests. When such an assistant detects that a request is complex and needs human attention, it can ask for the necessary information to pass along to an agent. This turns the traditional contact form into a conversation, placing it further along in the customer service journey so only those customers who need to fill it out will have to use it.
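
A rough sketch of that flow, assuming a simple keyword heuristic stands in for the LLM’s complexity judgment (the field names and keywords are invented for illustration):

```python
# Sketch: conversational intake that only gathers contact-form fields once
# the assistant decides a human agent is needed. The keyword check is a
# crude stand-in for a real LLM classifier.

ESCALATION_KEYWORDS = {"refund", "damaged", "lawsuit", "chargeback"}
REQUIRED_FIELDS = ["name", "order_number", "issue_summary"]

def needs_human(message):
    """Stand-in for an LLM deciding whether a request is complex."""
    return any(word in message.lower() for word in ESCALATION_KEYWORDS)

def missing_fields(collected):
    """Which contact-form fields still need to be asked for in conversation."""
    return [f for f in REQUIRED_FIELDS if not collected.get(f)]

# Example: a complex request with a partially completed intake.
collected = {"name": "Dana", "order_number": None, "issue_summary": None}
to_ask = []
if needs_human("My order arrived damaged and I want a refund"):
    to_ask = missing_fields(collected)
```

A customer with a quick question never sees the form at all; only escalated conversations trigger the remaining questions, one at a time, in chat.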

Or take something as prosaic as the location and prominence of your ‘live chat’ button. Is it easy to find, or is it buried so deep you’d need Indiana Jones to dig it out? Does it pop out proactively to engage potential or returning customers with contextual messaging based on what they’re browsing?

It’s also worth briefly mentioning that the main value prop of rich messaging content (carousel cards, buttons, and quick replies) is that it creates much less friction for the consumer. We have a dedicated section on rich messaging below that spells this out in more detail.

Though they may seem minor in isolation, there’s an important truth here: if you want to get more people to use your live chat for customer service, make it easy and pain-free for them to do so. Every additional second of searching or fiddling means another lost opportunity.

3. Personalize Your Chat

Another way to make live chat for customer service more attractive is to personalize your interactions. Personalization can be anything from including an agent’s name and picture in the chat interface displayed on your webpage to leveraging an LLM to craft a whole bespoke context for each conversation.

For our purposes, the two big categories of personalization are brand-specific personalization and customer-specific personalization. Let’s discuss each.

Brand-specific personalization

For the former, marketing and contact teams should collaborate to craft notifications, greetings, etc., to fit their brand’s personality. Chat icons often feature an introductory message such as “How can I help you?” to let browsers know their questions are welcome. This is a place for you to set the tone for the rest of a conversation, and such friendly wording can encourage people to take the next step and type out a message.

More broadly, these departments should also develop a general tone of voice for their service agents. While there may be some scripted language in customer service interactions, most customers expect human support specialists to act like humans. And, since every request or concern is a little different, agents often need to change what they say or how they say it.

This is no less true for buyers on different parts of your site. Customer questions will be different depending on whether they’re on a checkout page, a product page, or the help center because they are at very distinct points in their buying journey. It’s important to contextualize any proactive messaging and the conversational flow itself to accommodate this (e.g., “Need help checking out? We’ve got live agents standing by.” versus “Have questions about this product? Try asking me”).

Setting rules for tone of voice and word choice ensures the messaging experience is consistent no matter which agent helps a customer or what the conversation is about.

Customer-specific personalization

Then, there’s customer-specific personalization, which might involve something as simple as using their name, or extend to drawing from their purchase history to include the specifics of the order they’re asking about.

Once upon a time, this work fell almost entirely to human contact centers, but no more! Among the many things that today’s LLMs excel at is personalization. Machine learning has long been used to personalize recommendations (think: Netflix learning what kinds of shows you like), but when LLMs are turbo-charged with a technique like retrieval-augmented generation (which allows them to use validated data sources to inform their replies to questions), the results can be astonishing.

Machine-based personalization and retrieval-augmented generation are both big subjects, and you can read through the links for more context. But the high-level takeaway is that, together, they facilitate the creation of a seamless and highly personalized experience across your communication channels using the latest advances in AI. Customers will feel more comfortable using your live chat feature, and will grow to feel a connection with your brand over time.

4. Include Privacy and Data Usage Messages

As of this writing, news recently broke that a data breach may have exposed close to three billion records containing Social Security numbers. You’re no doubt familiar with a bevy of similar stories, which have been pouring forth since more or less the moment people started storing their private data online.

And yet, for the savvy customer experience director, this is an opportunity; by taking privacy very seriously, you can distinguish yourself and thereby build trust.

Customers visiting your website want an assurance that you will take every precaution with their private information, and this can be provided through easy-to-understand data privacy policies and customizable cookie preferences.

Live messaging tools can add a wrinkle because they are often powered by third-party software. Customer service messaging can also require a lot of personal information, making some users hesitant to use these tools.

You can quell these concerns by elucidating how you handle private customer data. When a message like this appears at the start of a new chat, is always accessible via the header, or persists in your chat menu, customers can see how their data is safeguarded and feel secure while entering personal details.

An additional wrinkle comes from the increasing ubiquity of tools based on generative AI. Many worry that any information provided to a model might be used to “train” that model, thus increasing the chances that it’ll be leaked in the future. The best way to avoid this calamity is to partner with a conversational AI platform for CX that works tirelessly to ensure that your customers’ data is never used in this way.

That said, whatever you do, make sure your AI assistants have messages designed to handle requests about privacy and security. Someone will ask eventually, and it’s good to be prepared.

5. Use Rich Messages

Smartphones have become a central hub for browsing the internet, shopping, socializing, and managing daily activities. As text messaging gradually supplanted many of our other ways of communicating, it became obvious that an upgrade was needed.

This led to the development of rich messaging channels and protocols such as Apple Messages for Business, WhatsApp, and Rich Communication Services (RCS). These channels feature enhancements like buttons, quick replies, and carousel cards, all designed to make interactions easier and faster for the customer.

For all these reasons, using rich messaging in live chat with customers will likely help boost engagement. Customers are accustomed to seeing emojis now, and you can include them as a way of humanizing and personalizing your interactions. There might be contexts in which customers need to see graphics or images, which is very difficult with the old Short Message Service (SMS).

In the final analysis, rich messaging offers another powerful opportunity to create the kind of seamless experience that makes interacting with your support enjoyable and productive.

6. Separating Chat and Agent Availability

Once upon a time, ‘chat availability’ simply meant the same thing as ‘agent availability,’ but today’s language models are rapidly becoming capable enough to resolve a wide variety of issues on their own. In fact, one of the major selling points of AI assistants is that they provide round-the-clock service because they don’t need to eat, sleep, or take bathroom breaks.

This doesn’t mean that they can be left totally alone, of course. Humans still need to monitor their interactions to make sure they’re not being rude or hallucinating false information. But this is also something that becomes much easier when you pair with an industry-leading conversational AI platform for CX that has robust safeguards, monitoring tools, and the ability to switch between different underlying models (in case one starts to act up).

Having said that, there is still a wide variety of tasks for which a live agent is the best choice. For this reason, many companies have specific time windows when live chat for customer service is available. When it’s not, some choose to let customers know when live chat is an option by communicating the next availability window.

In practice, users will often simply close their tabs if they can’t talk to a person, cutting the interaction off before it begins. In our view, the best course is usually to shift the conversation to an asynchronous channel where it can be handled by an AI assistant able to hand the chat off to an agent when one becomes available.

Employing these two strategies means that your ability to serve customers is decoupled from the operational constraints of agent availability, and you are always ready to seize the opportunity to engage customers when they are eager to connect with your brand.

Creating Greater CX Outcomes with Live Web Chat is Just the Start.

Live web chat with customers remains an excellent way to resolve issues while building trust and boosting the overall customer experience. The best strategies for increasing engagement with your live chat are to make sure people know it’s an option, make it easy to use, personalize interactions where possible, and make the most of AI to automatically resolve routine inquiries while filling gaps in live agent availability.

If you’re interested in taking additional steps to resolve common customer service pain points, check out our ebook on the subject. It features a number of straightforward, actionable strategies to help you keep your customers as happy as possible!

Current Large Language Models and How They Compare

From ChatGPT and Bard to BLOOM and Claude, there is a veritable ocean of current LLMs (large language models) for you to choose from. Some are specialized for specific use cases, some are open-source, and there’s a huge variance in the number of parameters they contain.

If you’re a CX leader and find yourself fascinated by the potential of using this technology in your contact center, it can be hard to know how to run proper LLM comparisons.

Today, we’re going to tackle this issue head-on by talking about specific criteria you can use to compare LLMs, sources of additional information, and some of the better-known options.

But always remember that the point of using an LLM is to deliver a world-class customer experience, and the best option is usually the one that delivers multi-model functionality with a minimum of technical overhead.

With that in mind, let’s get started!

What is Generative AI?

While it may seem like large language models (LLMs) and generative AI have only recently emerged, the work they’re based on goes back decades. The journey began in the 1940s with Walter Pitts and Warren McCulloch, who designed artificial neurons based on early brain research. However, practical applications became feasible only after the popularization of the backpropagation algorithm in 1986, which enabled effective training of larger neural networks.

By 1989, researchers had developed a convolutional system capable of recognizing handwritten numbers. Innovations such as long short-term memory networks, introduced in 1997, further enhanced machine learning capabilities, setting the stage for more complex applications.

The 2000s ushered in the era of big data, crucial for training generative pre-trained models like ChatGPT. This combination of decades of foundational research and vast datasets culminated in the sophisticated generative AI and current LLMs we see transforming contact centers and related industries today.

What’s the Best Way to do a Large Language Models Comparison?

If you’re shopping around for a current LLM for a particular application, it makes sense to first clarify the evaluation criteria you should be using. We’ll cover that in the sections below.

Large Language Models Comparison By Industry Use Case

One of the more remarkable aspects of current LLMs is that they’re good at so many things. Out of the box, most can do very well at answering questions, summarizing text, translating between natural languages, and much more.

But there might be situations in which you’d want to boost the performance of one of the current LLMs on certain tasks. The two most popular ways of doing this are retrieval-augmented generation (RAG) and fine-tuning a pre-trained model.

Here’s a quick recap of what both of these are:

  • Retrieval-augmented generation refers to getting one of the general-purpose, current LLMs to perform better by giving it access to additional resources it can use to improve its outputs. You might hook it up to a contact-center CRM so that it can provide specific details about orders, for example.
  • Fine-tuning refers to taking a pre-trained model and honing it for specific tasks by continuing its training on data related to that task. A generic model might be shown hundreds of polite interactions between customers and CX agents, for example, so that it’s more courteous and helpful.
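
To make the RAG half of this concrete, here is a minimal sketch of the pattern: retrieve the most relevant snippet, then prepend it to the prompt. Real systems use embedding-based vector search; the keyword-overlap scoring below is a deliberately simple stand-in, and the final LLM call is left abstract. The knowledge-base entries are invented examples.

```python
# Sketch: retrieval-augmented generation (RAG). Keyword overlap stands in
# for embedding search, and the actual model call is omitted; the point is
# the retrieve-then-prompt shape.

KNOWLEDGE_BASE = [
    "Returns are accepted within 30 days with a receipt.",
    "Standard shipping takes 3-5 business days.",
    "Gift cards cannot be redeemed for cash.",
]

def retrieve(question, documents, top_k=1):
    """Rank documents by word overlap with the question; return the top_k best."""
    q_words = set(question.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(question):
    """Prepend retrieved context so the model can ground its answer."""
    context = "\n".join(retrieve(question, KNOWLEDGE_BASE))
    return (
        f"Context:\n{context}\n\n"
        f"Customer question: {question}\n"
        "Answer using only the context."
    )

prompt = build_prompt("How long does shipping take?")
```

Because the model answers from retrieved, validated text rather than from memory alone, its replies stay grounded in your actual policies and data.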

So, if you’re considering using one of the current LLMs in your business, there are a few questions you should ask yourself. First, are any of them perfectly adequate as-is? If they’re not, the next question is how “adaptable” they are. It’s possible to use RAG or fine-tuning with most of the current LLMs; the question is how easy they make it.

Of course, by far the easiest option would be to leverage a model-agnostic conversational AI platform for CX. These can switch seamlessly between different models, and some support RAG out of the box, meaning you aren’t locked into one current LLM and can always reach for the right tool when needed.

What’s a Good Way To Think About an Open-Source or Closed-Source Large Language Models Comparison?

You’ve probably heard of “open-source,” which refers to the practice of releasing source code to the public so that it can be forked, modified, and scrutinized.

The open-source approach has become incredibly popular, and this enthusiasm has partially bled over into artificial intelligence and machine learning. It is now fairly common to open-source software, datasets, and training frameworks like TensorFlow.

How does this translate to the realm of large language models? In truth, it’s a bit of a mixture. Some models are proudly open-sourced, while others jealously guard their model’s weights, training data, and source code.

This is one thing you might want to consider as you carry out your LLM comparisons. Some of the very best models, like ChatGPT, are closed-source. The downside of using such a model is that you’re entirely beholden to the team that built it. If they ship breaking updates or go out of business, you could be left scrambling at the last minute to find an alternative solution.

There’s no one-size-fits-all approach here, but it’s worth pointing out that a high-quality enterprise solution will support customization by allowing you to choose between different models (both closed-source and open-source). This way, you needn’t concern yourself with forking repos or fretting over looming updates; you can just use whichever model performs best for your particular application.

Getting A Large Language Models Comparison Through Leaderboards and Websites

Instead of doing your LLM comparisons yourself, you could avail yourself of a service built for this purpose.

Whatever rumors you may have heard, programmers are human beings, and human beings have a fondness for ranking and categorizing pretty much everything: sports teams, guitar solos, classic video games, you name it.

Naturally, as current LLMs have become better known, leaderboards and websites have popped up comparing them along all sorts of different dimensions. Here are a few you can use as you search around for the best current LLMs.

Leaderboards for Comparing LLMs

Recently, leaderboards have emerged that directly compare various current LLMs.

One is AlpacaEval, which uses a custom dataset to compare ChatGPT, Claude, Cohere, and other LLMs on how well they can follow instructions. AlpacaEval boasts high agreement with human evaluators, so in our estimation, it’s probably a suitable way of initially comparing LLMs, though more extensive checks might be required to settle on a final list.

Another good choice is Chatbot Arena, which pits two anonymous models side-by-side, has you rank which one is better, then aggregates all the scores into a leaderboard.
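Under the hood, this kind of pairwise-vote aggregation is typically done with an Elo-style rating system, which Chatbot Arena has used for its rankings. Here’s a minimal sketch of a single rating update; the function and its k-factor default are our own simplification, not Chatbot Arena’s actual code:

```python
def elo_update(rating_a, rating_b, a_won, k=32):
    """Update two models' ratings after one pairwise vote.

    a_won is True if the human preferred model A's answer.
    """
    # Expected score for A, given the current rating gap.
    expected_a = 1 / (1 + 10 ** ((rating_b - rating_a) / 400))
    actual_a = 1.0 if a_won else 0.0
    delta = k * (actual_a - expected_a)
    return rating_a + delta, rating_b - delta
```

Run over thousands of anonymous votes, updates like this converge into the leaderboard rankings you see on the site.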

Finally, there is Hugging Face’s Open LLM Leaderboard, which works similarly. Anyone can submit a new model for evaluation, which is then assessed on a small set of key benchmarks from the EleutherAI Language Model Evaluation Harness. These capture how well the models do in answering simple science questions, common-sense queries, and more, which will be of interest to CX leaders.

When combined with the criteria we discussed earlier, these leaderboards and comparison websites ought to give you everything you need to execute a constructive large language models comparison.

What are the Currently-Available Large Language Models?

Okay! Now that we’ve worked through all this background material, let’s turn to discussing some of the major LLMs that are available today. We make no promises about these entries being comprehensive (and even if they were, there’d be new models out next week), but they should be sufficient to give you an idea as to the range of options you have.

ChatGPT and GPT

Obviously, the titan in the field is OpenAI’s ChatGPT, which is really just a version of GPT that has been fine-tuned through reinforcement learning from human feedback to be especially good at sustained dialogue.

ChatGPT and GPT have been used in many domains, including customer service, question answering, and many others. As of this writing, the most recent GPT is version 4o (note: that’s the letter ‘o’, not the number ‘0’).

LLaMA

In April 2024, Meta’s AI team released version three of its Large Language Model Meta AI (LLaMA 3). At 70 billion parameters, its largest initial variant is not quite as big as GPT; this is intentional, as its purpose is to aid researchers who may not have the budget or expertise required to provision a behemoth LLM.

Gemini

Like GPT-4, Google’s Gemini is aimed squarely at dialogue. It can converse on a nearly infinite number of subjects, and from the beginning, the Google team has focused on having Gemini produce interesting responses that are nevertheless free of abuse and harmful language.

StableLM

StableLM is a lightweight, open-source language model built by Stability AI. It’s trained on a dataset built on “The Pile”, which is itself made up of over 20 smaller, high-quality datasets that together amount to over 825 GB of natural language.

GPT4All

What would you get if you trained an LLM on “…on a massive curated corpus of assistant interactions, which included word problems, multi-turn dialogue, code, poems, songs, and stories,” and then released it on an Apache 2.0 license? The answer is GPT4All, an open-source model whose purpose is to encourage research into what these technologies can accomplish.

BLOOM

The BigScience Large Open-Science Open-Access Multilingual Language Model (BLOOM) was released in late 2022. The team that put it together consisted of more than a thousand researchers from all over the world, and unlike the other models on this list, it’s specifically meant to be interpretable.

Pathways Language Model (PaLM)

PaLM is from Google, and is also enormous (540 billion parameters). It excels at many language-related tasks, and became famous for producing impressively cogent explanations of tricky jokes. The most recent version is PaLM 2.

Claude

Anthropic’s Claude is billed as a “next-generation AI assistant.” The recent release of Claude 3.5 Sonnet “sets new industry benchmarks” in speed and intelligence, according to materials put out by the company. We haven’t looked at all the data ourselves, but we have played with the model and we know it’s very high-quality.

Command and Command R+

These are models created by Cohere, one of the major commercial platforms for current LLMs. They are comparable to most of the other big models, but Cohere has placed a special focus on enterprise applications, like agents, tools, and RAG.

What are the Best Ways of Overcoming the Limitations of Large Language Models?

Large language models are remarkable tools, but they nevertheless suffer from some well-known limitations. They tend to hallucinate facts, for example, sometimes fail at basic arithmetic, and can get lost in the course of lengthy conversations.

Overcoming the limitations of large language models is mostly a matter of either monitoring them and building scaffolding to enable RAG, or partnering with a conversational AI platform for CX that handles this tedium for you.
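To make “scaffolding to enable RAG” concrete, here’s a toy sketch of the retrieve-then-prompt pattern: find the document most relevant to the customer’s question, then instruct the model to answer only from it. Real systems use embeddings and vector databases rather than the naive word-overlap scoring below, and every name here is our own illustration:

```python
def retrieve(query, documents):
    """Naive retrieval: pick the document sharing the most words with the query."""
    query_words = set(query.lower().split())
    return max(documents, key=lambda doc: len(query_words & set(doc.lower().split())))

def build_rag_prompt(query, documents):
    """Ground the model's answer in retrieved text to curb hallucination."""
    context = retrieve(query, documents)
    return (
        "Answer using only the context below. If the answer isn't there, say so.\n"
        f"Context: {context}\n"
        f"Question: {query}"
    )
```

Because the model is told to answer only from retrieved text, its tendency to fabricate facts is substantially reduced.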

An additional wrinkle involves tradeoffs between different models. As we discuss below, sometimes models may outperform the competition on a task like code generation while being notably worse at a task like faithfully following instructions; in such cases, many opt to have an ensemble of models so they can pick and choose which to deploy in a given scenario. (It’s worth pointing out that even if you want to use one model for everything, you’ll absolutely need to swap in an upgraded version of that model eventually, so you still have the same model-management problem.)

This, too, is a place where a conversational AI platform for CX will make your life easier. The best such platforms are model-agnostic, meaning that they can use ChatGPT, Claude, Gemini, or whatever makes sense in a particular situation. This removes yet another headache, smoothing the way for you to use generative AI in your contact center with little fuss.

What are the Best Large Language Models?

Having read the foregoing, it’s natural to wonder if there’s a single model that best suits your enterprise. The answer is “it depends on the specifics of your use case.” You’ll have to think about whether you want an open-source model you control or you’re comfortable hitting an API, whether your use case is outside the scope of ChatGPT and better handled with a bespoke model, etc.

Speaking of use cases, in the next few sections, we’ll offer some advice on which current LLMs are best suited for which applications. However, this advice is based mostly on personal experience and other people’s reports of their experiences. This should be good enough to get you started, but bear in mind that these claims haven’t been borne out by rigorous testing and hard evidence—the field is too young for most of that to exist yet.

What’s the Best LLM if I’m on a Budget?

Pretty much any open-source model is given away for free, by definition. You can just Google “free open-source LLMs”, but among the more frequently recommended open-source models are LLaMA 2 and the newer LLaMA 3, both of which are free.

But many LLMs (both free and paid) also use the data you feed them for training purposes, which means you could be exposing proprietary or sensitive data if you’re not careful. Your best bet is to find a cost-effective platform that has an explicit promise not to use your data for training.

When you deal with an open-source model, you also have to pay for hosting, either on your own infrastructure or through a cloud service like Amazon Bedrock.

What’s the Best LLM for a Large Context Window?

The context window is the amount of text an LLM can handle at a time. When ChatGPT was released, it had a context window of around 4,000 tokens. (A “token” isn’t exactly a word, but it’s close enough for our purposes.)
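A common rule of thumb is that one token is roughly four characters of English text. Here’s a toy sketch for estimating whether a prompt fits a given window; the four-characters figure is a rough assumption, and real tokenizers (e.g., OpenAI’s tiktoken) give exact counts:

```python
def estimated_tokens(text, chars_per_token=4):
    """Rough token estimate: ~4 characters of English per token."""
    return len(text) / chars_per_token

def fits_context(text, window_tokens):
    """True if the text likely fits in a model's context window."""
    return estimated_tokens(text) <= window_tokens
```

By this estimate, ChatGPT’s original 4,000-token window held roughly 16,000 characters of text.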

Generally (and up to a point), the longer the context window, the better the model is able to perform. Today’s models generally have context windows of at least a few tens of thousands of tokens, with some getting into the low hundreds of thousands.

But, at a staggering 1 million tokens–equivalent to an hour-long video or the full text of a long novel–Google’s Gemini simply towers over the others like Hagrid in the Shire.

That having been said, this space moves quickly, and context window length is an active area of research and development. These figures will likely be different next month, so be sure to check the latest information as you begin shopping for a model.

Choosing Among the Current Large Language Models

With all the different LLMs on offer, it’s hard to narrow the search down to the one that’s best for you. By carefully weighing the different metrics we’ve discussed in this article, you can choose an LLM that meets your needs with as little hassle as possible.

Pulling back a bit, let’s close by recalling that the whole purpose of choosing among current LLMs in the first place is to better meet the needs of our customers.

For this reason, you might want to consider working with a conversational AI platform for CX, like Quiq, that puts a plethora of LLMs at your fingertips through one simple interface.

The Truth About APIs for AI: What You Need to Know

Large language models hold a lot of power to improve your customer experience and make your agents more effective, but they won’t do you much good if you don’t have a way to actually access them.

This is where application programming interfaces (APIs) come into play. If you want to leverage LLMs, you’ll have to build one in-house, use an AI API deployment to interact with an external model, or go with a customer-centric AI for CX platform. The last option is ideal because it offers a guided building environment that removes complexity while providing the tools you need for scalability, observability, hallucination prevention, and more.

From a cost and ease-of-use perspective this third option is almost always best, but there are many misconceptions that could potentially stand in the way of AI API adoption.

In fact, a stronger claim is warranted: to maximize AI API effectiveness, you need a platform to orchestrate between AI, your business logic, and the rest of your CX stack.

Otherwise, it’s useless.

This article aims to bridge the gap between what CX leaders might think is required to integrate a platform, and what’s actually involved. By the end, you’ll understand what APIs are, their role in personalization and scalability, and why they work best in the context of a customer-centric AI for CX platform.

How APIs Facilitate Access to AI Capabilities

Let’s start by defining an API. As the name suggests, APIs are essentially structured protocols that allow two systems (“applications”) to communicate with one another (“interface”). For instance, if you’re using a third-party CRM to track your contacts, you’ll probably update it through an API.

All the well-known foundation model providers (e.g., OpenAI, Anthropic, etc.) have a real-world AI API implementation that allows you to use their service. For an AI API practical example, let’s look at OpenAI’s documentation:

(Let’s take a second to understand what we’re looking at. Don’t worry – we’ll break it down for you. Understanding the basics will give you a sense of what your engineers will be doing.)
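In rough Python terms, the setup looks like the sketch below. The endpoint and header names come from OpenAI’s public API; the credential values are placeholders you’d swap for your own:

```python
import urllib.request

# Placeholder credentials -- swap in your real values.
url = "https://api.openai.com/v1/models"   # where OpenAI's models are accessed
api_key = "sk-your-key-here"               # API key: like a password
org_id = "org-your-organization-id"        # organization ID: like a company username
project_id = "proj_your-project-id"        # project ID: identifies this project

headers = {
    "Authorization": f"Bearer {api_key}",
    "OpenAI-Organization": org_id,
    "OpenAI-Project": project_id,
}

# Build (but don't send) the authenticated request.
request = urllib.request.Request(url, headers=headers)
```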

The top line points us to a URL where we can access OpenAI’s models, and the next three lines require us to pass in an API key (which is kind of like a password giving access to the platform), our organization ID (a unique designator for our particular company, not unlike a username), and a project ID (a way to refer to this specific project, useful if you’re working on a few different projects at once).

This is only one example, but you can reasonably assume that most protocols built according to AI API best practices will have a similar structure.

This alone isn’t enough to support most AI API use cases, but it illustrates the key takeaway of this section: APIs are attractive because they make it easy to access the capabilities of LLMs without needing to manage them on your own infrastructure, though they’re still best when used as part of a move to a customer-centric AI orchestration platform.

How Do APIs Facilitate Customer Support AI Assistants?

It’s good to understand what APIs are used for in AI assistants. It’s pretty straightforward—here’s the bulk of it:

  • Personalizing customer communications: One of the most exciting real-world benefits of AI is that it enables personalization at scale because you can integrate an LLM with trusted systems containing customer profiles, transaction data, etc., which can be incorporated into a model’s reply. So, for example, when a customer asks for shipping information, you’re not limited to generic responses like “your item will be shipped within 3 days of your order date.” Instead, you can take a more customer-centric approach and offer specific details, such as, “The order for your new couch was placed on Monday, and will be sent out on Wednesday. According to your location, we expect that it’ll arrive by Friday. Would you like to select a delivery window or upgrade to white glove service?”
  • Improving response quality: Generative AI is plagued by a tendency to fabricate information. With an AI API, work can be decomposed into smaller, concrete tasks before being passed to an LLM, which improves performance. You can also do other things to get better outputs, such as create bespoke modifications of the prompt that change the model’s tone, the length of its reply, etc.
  • Scalability and flexibility in deployment: A good customer-centric, AI-for-CX platform will offer volume-based pricing, meaning you can scale up or down as needed. If customer issues are coming in thick and fast (such as might occur during a new product release, or over a holiday), just keep passing them to the API while paying a bit more for the increased load; if things are quiet because it’s 2 a.m., the API just sits there, waiting to spring into action when required and costing you very little.
  • Analyzing customer feedback and sentiment: Incredible insights are waiting within your spreadsheets and databases, if you only know how to find them. This, too, is something APIs help with. If, for example, you need to unify measurements across your organization to send them to a VOC (voice of customer) platform, you can do that with an API.
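The “bespoke modifications of the prompt” mentioned above can be as simple as a templating function that injects tone and length constraints before the request goes out. A minimal sketch, where every name and default is our own invention rather than a specific vendor’s API:

```python
def build_prompt(task, customer_context, tone="friendly", max_words=80):
    """Compose a system prompt that pins down tone and reply length."""
    return (
        f"You are a {tone} customer-service assistant.\n"
        f"Customer context: {customer_context}\n"
        f"Task: {task}\n"
        f"Reply in at most {max_words} words."
    )
```

Swapping in tone="formal" or tightening max_words changes the model’s style without retraining anything.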

Looking Beyond an API for AI Assistants

For all this, it’s worth pointing out that there are still many real-world AI API challenges. By far the quickest way to begin building an AI assistant for CX is to pair with a customer-centric AI platform that removes as much of the difficulty as possible.

The best such platforms not only allow you to utilize a bevy of underlying LLMs, they also facilitate gathering and analyzing data, monitoring and supporting your agents, and automating substantial parts of your workflow.

Crucially, almost all of those critical tasks are facilitated through APIs, but a good platform unites them in one place.

3 Common Misconceptions about Customer-Centric AI for CX Platforms

Now, let’s address some of the biggest myths surrounding the use of AI orchestration platforms.

Myth 1: Working with a customer-centric AI for CX Platform Will be a Hassle

Some CX leaders may worry that working with a platform will be too difficult. There are challenges, to be sure, but a well-designed platform with an intuitive user interface is easy to slip into a broader engineering project.

Such platforms are designed to support easy integration with existing systems, and they generally have ample documentation available to make this task as straightforward as possible.

Myth 2: AI Platforms Cost Too Much

Another concern CX leaders have is the cost of using an AI orchestration platform. Platform costs can add up over time, but they pale in comparison to the cost of building an in-house solution, to say nothing of the risks that come with building AI in an environment that doesn’t protect you from things like hallucinations.

When you weigh all the factors impacting your decision to use AI in your contact center, the long-run return on using an AI orchestration platform is almost always better.

Myth 3: Customer-Centric AI Platforms are Just Too Insecure

The smart CX leader always has one eye on the overall security of their enterprise, so they may be worried about vulnerabilities introduced by using an AI platform.

This is a perfectly reasonable concern. If you’re trying to choose between a few different providers, it’s worth investigating the security measures they’ve implemented. Specifically, you want to figure out what data encryption and protection protocols they use, and how they think about compliance with industry standards and regulations.

At a minimum, the provider should be taking basic steps to make sure data transmitted to the platform isn’t exposed.

Is an AI Platform Right for Me?

With a platform focused on optimizing CX outcomes, you can quickly bring the awesome power and flexibility of generative AI into your contact center – without ever spinning up a server or fretting over what “backpropagation” means. To the best of our knowledge, this is the cheapest and fastest way to demo this technology in your workflow to determine whether it warrants a deeper investment.

To parse out more generative AI facts from fiction, download our e-book on AI misconceptions and how to overcome them. If you’re concerned about hallucinations, data privacy, and similar issues, you won’t find a better one-stop read!