
Current Large Language Models and How They Compare

From ChatGPT and Bard to BLOOM and Claude, there is a veritable ocean of current LLMs (large language models) for you to choose from. Some are specialized for specific use cases, some are open-source, and there’s a huge variance in the number of parameters they contain.

If you’re a CX leader and find yourself fascinated by the potential of using this technology in your contact center, it can be hard to know how to run proper LLM comparisons.

Today, we’re going to tackle this issue head-on by talking about specific criteria you can use to compare LLMs, sources of additional information, and some of the better-known options.

But always remember that the point of using an LLM is to deliver a world-class customer experience, and the best option is usually the one that delivers multi-model functionality with a minimum of technical overhead.

With that in mind, let’s get started!

What is Generative AI?

While it may seem like large language models (LLMs) and generative AI have only recently emerged, the work they’re based on goes back decades. The journey began in the 1940s with Walter Pitts and Warren McCulloch, who designed artificial neurons based on early brain research. However, practical applications became feasible only after the development of the backpropagation algorithm in 1985, which enabled effective training of larger neural networks.

By 1989, researchers had developed a convolutional system capable of recognizing handwritten numbers. Innovations such as long short-term memory networks further enhanced machine learning capabilities during this period, setting the stage for more complex applications.

The 2000s ushered in the era of big data, crucial for training generative pre-trained models like ChatGPT. This combination of decades of foundational research and vast datasets culminated in the sophisticated generative AI and current LLMs we see transforming contact centers and related industries today.

What’s the Best Way to Do a Large Language Models Comparison?

If you’re shopping around for a current LLM for a particular application, it makes sense to first clarify the evaluation criteria you should be using. We’ll cover that in the sections below.

Large Language Models Comparison By Industry Use Case

One of the more remarkable aspects of current LLMs is that they’re good at so many things. Out of the box, most can do very well at answering questions, summarizing text, translating between natural languages, and much more.

But there might be situations in which you’d want to boost the performance of one of the current LLMs on certain tasks. The two most popular ways of doing this are retrieval-augmented generation (RAG) and fine-tuning a pre-trained model.

Here’s a quick recap of what both of these are:

  • Retrieval-augmented generation refers to getting one of the general-purpose, current LLMs to perform better by giving them access to additional resources they can use to improve their outputs. You might hook it up to a contact-center CRM so that it can provide specific details about orders, for example.
  • Fine-tuning refers to taking a pre-trained model and honing it for specific tasks by continuing its training on data related to that task. A generic model might be shown hundreds of polite interactions between customers and CX agents, for example, so that it’s more courteous and helpful.
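To make the retrieval step concrete, here’s a minimal sketch of how RAG grounds an answer in trusted content. The snippets and the word-overlap “similarity” are toy stand-ins; a production system would index your real help-center content in a vector database and pass the prompt to an actual LLM:

```python
import re

# Hypothetical knowledge snippets; a real deployment would index your
# actual help-center content in a vector database.
KNOWLEDGE_BASE = [
    "Orders ship within 3 business days of purchase.",
    "Refunds are available within 30 days of delivery.",
    "Support hours are 8am to 6pm, Monday through Friday.",
]

def _words(text: str) -> set:
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question: str) -> str:
    """Return the snippet sharing the most words with the question --
    a crude stand-in for real embedding similarity."""
    q = _words(question)
    return max(KNOWLEDGE_BASE, key=lambda s: len(q & _words(s)))

def build_prompt(question: str) -> str:
    # The retrieved snippet is injected so the model answers from
    # trusted data instead of guessing.
    return f"Answer using only this context: {retrieve(question)}\n\nQuestion: {question}"
```

The key design point is that the model only ever sees facts you retrieved from an authoritative source, which is what makes its replies specific rather than generic.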

So, if you’re considering using one of the current LLMs in your business, there are a few questions you should ask yourself. First, are any of them perfectly adequate as-is? If they’re not, the next question is how “adaptable” they are. It’s possible to use RAG or fine-tuning with most of the current LLMs; the question is how easy they make it.

Of course, by far the easiest option would be to leverage a model-agnostic conversational AI platform for CX. These can switch seamlessly between different models, and some support RAG out of the box, meaning you aren’t locked into one current LLM and can always reach for the right tool when needed.

What’s a Good Way To Think About an Open-Source or Closed-Source Large Language Models Comparison?

You’ve probably heard of “open-source,” which refers to the practice of releasing source code to the public so that it can be forked, modified, and scrutinized.

The open-source approach has become incredibly popular, and this enthusiasm has partially bled over into artificial intelligence and machine learning. It is now fairly common to open-source software, datasets, and training frameworks like TensorFlow.

How does this translate to the realm of large language models? In truth, it’s a bit of a mixture. Some models are proudly open-sourced, while others jealously guard their model’s weights, training data, and source code.

This is one thing you might want to consider as you carry out your LLM comparisons. Some of the very best models, like ChatGPT, are closed-source. The downside of using such a model is that you’re entirely beholden to the team that built it. If they make updates or go bankrupt, you could be left scrambling at the last minute to find an alternative solution.

There’s no one-size-fits-all approach here, but it’s worth pointing out that a high-quality enterprise solution will support customization by allowing you to choose between different models (both closed-source and open-source). This way, you needn’t concern yourself with forking repos or fretting over looming updates; you can just use whichever model performs best for your particular application.

Getting A Large Language Models Comparison Through Leaderboards and Websites

Instead of doing your LLM comparisons yourself, you could avail yourself of a service built for this purpose.

Whatever rumors you may have heard, programmers are human beings, and human beings have a fondness for ranking and categorizing pretty much everything – sports teams, guitar solos, classic video games, you name it.

Naturally, as current LLMs have become better known, leaderboards and websites have popped up comparing them along all sorts of different dimensions. Here are a few you can use as you search around for the best current LLMs.

Leaderboards for Comparing LLMs

In recent months, leaderboards have emerged that directly compare various current LLMs.

One is AlpacaEval, which uses a custom dataset to compare ChatGPT, Claude, Cohere, and other LLMs on how well they can follow instructions. AlpacaEval boasts high agreement with human evaluators, so in our estimation, it’s probably a suitable way of initially comparing LLMs, though more extensive checks might be required to settle on a final list.

Another good choice is Chatbot Arena, which pits two anonymous models side-by-side, has you rank which one is better, then aggregates all the scores into a leaderboard.

Finally, there is Hugging Face’s Open LLM Leaderboard, which is similar. Anyone can submit a new model for evaluation, which is then assessed based on a small set of key benchmarks from the Eleuther AI Language Model Evaluation Harness. These capture how well the models do in answering simple science questions, common-sense queries, and more, which will be of interest to CX leaders.

When combined with the criteria we discussed earlier, these leaderboards and comparison websites ought to give you everything you need to execute a constructive large language models comparison.

What are the Currently-Available Large Language Models?

Okay! Now that we’ve worked through all this background material, let’s turn to discussing some of the major LLMs that are available today. We make no promises about these entries being comprehensive (and even if they were, there’d be new models out next week), but they should be sufficient to give you an idea as to the range of options you have.

ChatGPT and GPT

Obviously, the titan in the field is OpenAI’s ChatGPT, which is really just a version of GPT that has been fine-tuned through reinforcement learning from human feedback to be especially good at sustained dialogue.

ChatGPT and GPT have been used in many domains, including customer service, question answering, and many others. As of this writing, the most recent GPT is version 4o (note: that’s the letter ‘o’, not the number ‘0’).

LLaMA

In April 2024, Meta’s AI team released version three of its Large Language Model Meta AI (LLaMA 3). At 70 billion parameters for its largest initial variant, it is not as big as the largest GPT models; this is intentional, as its purpose is to aid researchers who may not have the budget or expertise required to provision a behemoth LLM.

Gemini

Like GPT-4, Google’s Gemini is aimed squarely at dialogue. It is able to converse on a nearly infinite number of subjects, and from the beginning, the Google team has focused on having Gemini produce interesting responses that are nevertheless free of abuse and harmful language.

StableLM

StableLM is a lightweight, open-source language model built by Stability AI. It’s trained on an experimental dataset built on “The Pile”, which is itself made up of 22 smaller, high-quality datasets that together amount to over 825 GB of natural language.

GPT4All

What would you get if you trained an LLM “on a massive curated corpus of assistant interactions, which included word problems, multi-turn dialogue, code, poems, songs, and stories,” and then released it under an Apache 2.0 license? The answer is GPT4All, an open-source model whose purpose is to encourage research into what these technologies can accomplish.

BLOOM

The BigScience Large Open-Science Open-Access Multilingual Language Model (BLOOM) was released in late 2022. The team that put it together consisted of more than a thousand researchers from all over the world, and unlike the other models on this list, it’s specifically meant to be interpretable.

Pathways Language Model (PaLM)

PaLM is from Google, and is also enormous (540 billion parameters). It excels in many language-related tasks, and became famous when it produced high-quality explanations of tricky jokes. The most recent version is PaLM 2.

Claude

Anthropic’s Claude is billed as a “next-generation AI assistant.” The recent release of Claude 3.5 Sonnet “sets new industry benchmarks” in speed and intelligence, according to materials put out by the company. We haven’t looked at all the data ourselves, but we have played with the model and we know it’s very high-quality.

Command and Command R+

These are models created by Cohere, one of the major commercial platforms for current LLMs. They are comparable to most of the other big models, but Cohere has placed a special focus on enterprise applications, like agents, tools, and RAG.

What are the Best Ways of Overcoming the Limitations of Large Language Models?

Large language models are remarkable tools, but they nevertheless suffer from some well-known limitations. They tend to hallucinate facts, for example, sometimes fail at basic arithmetic, and can get lost in the course of lengthy conversations.

Overcoming the limitations of large language models is mostly a matter of either monitoring them and building scaffolding to enable RAG, or partnering with a conversational AI platform for CX that handles this tedium for you.

An additional wrinkle involves tradeoffs between different models. As we discuss below, sometimes models may outperform the competition on a task like code generation while being notably worse at a task like faithfully following instructions; in such cases, many opt to have an ensemble of models so they can pick and choose which to deploy in a given scenario. (It’s worth pointing out that even if you want to use one model for everything, you’ll absolutely need to swap in an upgraded version of that model eventually, so you still have the same model-management problem.)
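Routing between an ensemble of models can be as simple as a lookup table. Here’s a hypothetical sketch; the task-to-model mapping and model names are purely illustrative, not a benchmark-backed recommendation:

```python
# Hypothetical routing table; the mapping is illustrative, not a
# recommendation -- in practice you'd choose based on your own evaluations.
ROUTES = {
    "code_generation": "model-a",
    "instruction_following": "model-b",
    "summarization": "model-c",
}

def pick_model(task: str, default: str = "model-b") -> str:
    """Route each task type to the model that handles it best,
    falling back to a general-purpose default."""
    return ROUTES.get(task, default)

def upgrade(task: str, new_model: str) -> None:
    """Swapping in an upgraded model is a one-line change here --
    callers of pick_model() never need to know."""
    ROUTES[task] = new_model
```

Note how the upgrade path is isolated in one place: even a single-model shop eventually faces the swap-in problem this structure solves.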

This, too, is a place where a conversational AI platform for CX will make your life easier. The best such platforms are model-agnostic, meaning that they can use ChatGPT, Claude, Gemini, or whatever makes sense in a particular situation. This removes yet another headache, smoothing the way for you to use generative AI in your contact center with little fuss.

What are the Best Large Language Models?

Having read the foregoing, it’s natural to wonder if there’s a single model that best suits your enterprise. The answer is “it depends on the specifics of your use case.” You’ll have to think about whether you want an open-source model you control or you’re comfortable hitting an API, whether your use case is outside the scope of ChatGPT and better handled with a bespoke model, etc.

Speaking of use cases, in the next few sections, we’ll offer some advice on which current LLMs are best suited for which applications. However, this advice is based mostly on personal experience and other people’s reports of their experiences. This should be good enough to get you started, but bear in mind that these claims haven’t been borne out by rigorous testing and hard evidence—the field is too young for most of that to exist yet.

What’s the Best LLM if I’m on a Budget?

Pretty much any open-source model is given away for free, by definition. You can just Google “free open-source LLMs”, but two of the more frequently recommended open-source models are LLaMA 2 and the newer LLaMA 3, both of which are free.

But many LLMs (both free and paid) also use the data you feed them for training purposes, which means you could be exposing proprietary or sensitive data if you’re not careful. Your best bet is to find a cost-effective platform that has an explicit promise not to use your data for training.

When you deal with an open-source model, you also have to pay for hosting, either your own or through a cloud service like Amazon Bedrock.

What’s the Best LLM for a Large Context Window?

The context window is the amount of text an LLM can handle at a time. When ChatGPT was released, it had a context window of around 4,000 tokens. (A “token” isn’t exactly a word, but it’s close enough for our purposes.)

Generally (and up to a point), the longer the context window, the better the model is able to perform. Today’s models generally have context windows of at least a few tens of thousands of tokens, with some reaching into the low hundreds of thousands.

But, at a staggering 1 million tokens – equivalent to an hour-long video or the full text of a long novel – Google’s Gemini simply towers over the others like Hagrid in the Shire.

That having been said, this space moves quickly, and context window length is an active area of research and development. These figures will likely be different next month, so be sure to check the latest information as you begin shopping for a model.
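To see why window size matters operationally, here’s one common pattern applications use to keep a long conversation inside a model’s context budget. In this sketch, word counts stand in for real tokenizer counts, which an actual system would get from the model’s own tokenizer:

```python
def trim_history(messages: list[str], budget: int = 4000) -> list[str]:
    """Keep the most recent messages whose combined approximate token
    count fits within the model's context budget, dropping oldest first.
    Word count is a crude stand-in for a real tokenizer."""
    kept, used = [], 0
    for msg in reversed(messages):
        cost = len(msg.split())
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))
```

A bigger window simply means less of this trimming, which is why long conversations hold together better on large-context models.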

Choosing Among the Current Large Language Models

With all the different LLMs on offer, it’s hard to narrow the search down to the one that’s best for you. By carefully weighing the different metrics we’ve discussed in this article, you can choose an LLM that meets your needs with as little hassle as possible.

Pulling back a bit, let’s close by recalling that the whole purpose of choosing among current LLMs in the first place is to better meet the needs of our customers.

For this reason, you might want to consider working with a conversational AI platform for CX, like Quiq, that puts a plethora of LLMs at your fingertips through one simple interface.

The Truth About APIs for AI: What You Need to Know

Large language models hold a lot of power to improve your customer experience and make your agents more effective, but they won’t do you much good if you don’t have a way to actually access them.

This is where application programming interfaces (APIs) come into play. If you want to leverage LLMs, you’ll either have to build one in-house, use an AI API deployment to interact with an external model, or go with a customer-centric AI for CX platform. The last option is usually the best choice because it offers a guided building environment that removes complexity while providing the tools you need for scalability, observability, hallucination prevention, and more.

From a cost and ease-of-use perspective this third option is almost always best, but there are many misconceptions that could potentially stand in the way of AI API adoption.

In fact, a stronger claim is warranted: to maximize AI API effectiveness, you need a platform to orchestrate between AI, your business logic, and the rest of your CX stack.

Otherwise, it’s useless.

This article aims to bridge the gap between what CX leaders might think is required to integrate a platform, and what’s actually involved. By the end, you’ll understand what APIs are, their role in personalization and scalability, and why they work best in the context of a customer-centric AI for CX platform.

How APIs Facilitate Access to AI Capabilities

Let’s start by defining an API. As the name suggests, APIs are essentially structured protocols that allow two systems (“applications”) to communicate with one another (“interface”). For instance, if you’re using a third-party CRM to track your contacts, you’ll probably update it through an API.

All the well-known foundation model providers (e.g., OpenAI, Anthropic, etc.) have a real-world AI API implementation that allows you to use their service. For an AI API practical example, let’s look at OpenAI’s documentation:
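The relevant snippet looks something like this (reproduced in spirit from OpenAI’s docs; exact endpoints and header names can change between API versions, and the ID values are placeholders):

```shell
curl https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "OpenAI-Organization: YOUR_ORG_ID" \
  -H "OpenAI-Project: $PROJECT_ID"
```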

(Let’s take a second to understand what we’re looking at. Don’t worry – we’ll break it down for you. Understanding the basics will give you a sense for what your engineers will be doing.)

The top line points us to a URL where we can access OpenAI’s models, and the next three lines require us to pass in an API key (which is kind of like a password giving access to the platform), our organization ID (a unique designator for our particular company, not unlike a username), and a project ID (a way to refer to this specific project, useful if you’re working on a few different projects at once).

This is only one example, but you can reasonably assume that most protocols built according to AI API best practices will have a similar structure.

This alone isn’t enough to support most AI API use cases, but it illustrates the key takeaway of this section: APIs are attractive because they make it easy to access the capabilities of LLMs without needing to manage them on your own infrastructure, though they’re still best when used as part of a move to a customer-centric AI orchestration platform.

How Do APIs Facilitate Customer Support AI Assistants?

It’s good to understand what APIs are used for in AI assistants. It’s pretty straightforward—here’s the bulk of it:

  • Personalizing customer communications: One of the most exciting real-world benefits of AI is that it enables personalization at scale because you can integrate an LLM with trusted systems containing customer profiles, transaction data, etc., which can be incorporated into a model’s reply. So, for example, when a customer asks for shipping information, you’re not limited to generic responses like “your item will be shipped within 3 days of your order date.” Instead, you can take a more customer-centric approach and offer specific details, such as, “The order for your new couch was placed on Monday, and will be sent out on Wednesday. According to your location, we expect that it’ll arrive by Friday. Would you like to select a delivery window or upgrade to white glove service?”
  • Improving response quality: Generative AI is plagued by a tendency to fabricate information. With an AI API, work can be decomposed into smaller, concrete tasks before being passed to an LLM, which improves performance. You can also do other things to get better outputs, such as create bespoke modifications of the prompt that change the model’s tone, the length of its reply, etc.
  • Scalability and flexibility in deployment: A good customer-centric, AI-for-CX platform will offer volume-based pricing, meaning you can scale up or down as needed. If customer issues are coming in thick and fast (such as might occur during a new product release, or over a holiday), just keep passing them to the API while paying a bit more for the increased load; if things are quiet because it’s 2 a.m., the API just sits there, waiting to spring into action when required and costing you very little.
  • Analyzing customer feedback and sentiment: Incredible insights are waiting within your spreadsheets and databases, if you only know how to find them. This, too, is something APIs help with. If, for example, you need to unify measurements across your organization to send them to a VOC (voice of customer) platform, you can do that with an API.
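To ground the personalization point, here’s a hypothetical sketch of how order details fetched from a CRM might be folded into an LLM prompt. The order record, its fields, and the lookup are all invented for illustration; a real system would make an authenticated API call to your order-management system:

```python
# Hypothetical CRM data; in production this would come from an API call
# to your order-management system.
ORDERS = {
    "A123": {"item": "couch", "placed": "Monday",
             "ships": "Wednesday", "arrives": "Friday"},
}

def order_facts(order_id: str) -> str:
    o = ORDERS[order_id]
    return (f"The order for the customer's new {o['item']} was placed on "
            f"{o['placed']}, ships on {o['ships']}, and should arrive by {o['arrives']}.")

def build_reply_prompt(order_id: str, question: str) -> str:
    # The LLM drafts the reply, but only from these trusted facts.
    return f"Facts: {order_facts(order_id)}\n\nCustomer asks: {question}"
```

This is how a reply moves from “your item will be shipped within 3 days” to the specific, customer-centric answer described above.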

Looking Beyond an API for AI Assistants

For all this, it’s worth pointing out that there are still many real-world AI API challenges. By far the quickest way to begin building an AI assistant for CX is to pair with a customer-centric AI platform that removes as much of the difficulty as possible.

The best such platforms not only allow you to utilize a bevy of underlying LLM models, they also facilitate gathering and analyzing data, monitoring and supporting your agents, and automating substantial parts of your workflow.

Crucially, almost all of these critical tasks are facilitated through APIs, but a good platform unites them under a single roof.

3 Common Misconceptions about Customer-Centric AI for CX Platforms

Now, let’s address some of the biggest myths surrounding the use of AI orchestration platforms.

Myth 1: Working with a customer-centric AI for CX Platform Will be a Hassle

Some CX leaders may worry that working with a platform will be too difficult. There are challenges, to be sure, but a well-designed platform with an intuitive user interface is easy to slip into a broader engineering project.

Such platforms are designed to support easy integration with existing systems, and they generally have ample documentation available to make this task as straightforward as possible.

Myth 2: AI Platforms Cost Too Much

Another concern CX leaders have is the cost of using an AI orchestration platform. Platform costs can add up over time, but they pale in comparison to the cost of building in-house solutions, to say nothing of the risks that come with building AI in an environment that doesn’t protect you from things like hallucinations.

When you weigh all the factors impacting your decision to use AI in your contact center, the long-run return on using an AI orchestration platform is almost always better.

Myth 3: Customer-Centric AI Platforms are Just Too Insecure

The smart CX leader always has one eye on the overall security of their enterprise, so they may be worried about vulnerabilities introduced by using an AI platform.

This is a perfectly reasonable concern. If you’re trying to choose between a few different providers, it’s worth investigating the security measures they’ve implemented. Specifically, you want to figure out what data encryption and protection protocols they use, and how they think about compliance with industry standards and regulations.

At a minimum, the provider should be taking basic steps to make sure data transmitted to the platform isn’t exposed.

Is an AI Platform Right for Me?

With a platform focused on optimizing CX outcomes, you can quickly bring the awesome power and flexibility of generative AI into your contact center – without ever spinning up a server or fretting over what “backpropagation” means. To the best of our knowledge, this is the cheapest and fastest way to demo this technology in your workflow to determine whether it warrants a deeper investment.

To parse out more generative AI facts from fiction, download our e-book on AI misconceptions and how to overcome them. If you’re concerned about hallucinations, data privacy, and similar issues, you won’t find a better one-stop read!

Going Beyond the GenAI Hype — Your Questions, Answered

We recently hosted a webinar all about how CX leaders can go beyond the hype surrounding GenAI, sift out the misinformation, and start driving real business value with AI Assistants. During the session, our speakers shared specific steps CX leaders can take to get their knowledge ready for AI, eliminate harmful hallucinations, and solve the build vs. buy dilemma.

We were overwhelmed with the number of folks who tuned in to learn more and hear real-life challenges, best practices, and success stories from Quiq’s own AI Assistant experts and customers. At the end of the webinar, we received so many amazing audience questions that we ran out of time to answer them all!

So, we asked speaker and Quiq Product Manager Max Fortis, to respond to a few of our favorites. Check out his answers in the clips below, and be sure to watch the full 35-minute webinar on-demand.

Ensuring Assistant Access to Personal and Account Information

 

 

Using a Knowledge Base Written for Internal Agents

 

 

Teaching a Voice Assistant vs. a Chat Assistant

 

 

Monitoring and Improving Assistant Performance Over Time

 

 

Watch the Full Webinar to Dive Deeper

Whether you were unable to tune in live or want to watch the rerun, this webinar is available on-demand. Give it a listen to hear Max and his Quiq colleagues offer more answers and advice around how to assess and fill critical knowledge gaps, avoid common yet lesser-known hallucination types, and partner with technical teams to get the AI tools you need.

Watch Now

How Does Data Impact Optimal AI Performance in CX? We Break It Down.

Many customer experience leaders are considering how generative AI might impact their businesses. Naturally, this has led to an explosion of related questions, such as whether it’s worth training a model in-house or working with a conversational AI platform, whether generative AI might hallucinate in harmful ways, and how generative AI can enhance agent performance.

One especially acute source of confusion centers on AI’s data reliance, or the role that data—including your internal data—plays in AI systems. This is understandable, as there remains a great deal of misunderstanding about how large language models are trained and how they can be used to create an accurate, helpful AI assistant.

If you count yourself among the confused, don’t worry. This article will provide a careful look at the relationship between AI and your CX data, equipping you to decide whether you have everything you need to support the use of generative AI, and how to efficiently gather more, if you need to.

Let’s dive in!

What’s the Role of CX Data in Teaching AI?

In our deep dive into large language models, we spent a lot of time covering how public large language models are trained to predict the end of some text. They’ll be shown many sentences with the last word or two omitted (“My order is ___”), and from this, they learn that the missing word is likely something like “missing” or “late.”
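A toy version of that training objective fits in a few lines: count which word follows a phrase in a small corpus, then predict the most frequent one. Real models learn vastly richer statistics from billions of examples, but the basic idea is the same:

```python
from collections import Counter

# Toy training corpus; real models see billions of examples.
CORPUS = [
    "my order is late",
    "my order is late",
    "my order is missing",
]

def predict_next(prefix: str) -> str:
    """Return the word that most often follows `prefix` in the corpus --
    a miniature version of next-word prediction."""
    counts = Counter()
    prefix_words = prefix.split()
    for sentence in CORPUS:
        words = sentence.split()
        if words[:len(prefix_words)] == prefix_words and len(words) > len(prefix_words):
            counts[words[len(prefix_words)]] += 1
    return counts.most_common(1)[0][0]
```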

The latest CX solutions have done an excellent job leveraging these capabilities, but the current generation of language models still tends to hallucinate (i.e., make up) information.

To get around this, savvy CX directors have begun utilizing a technique known as “retrieval augmented generation,” also known as “RAG.”

With RAG, models are given access to additional data sources that they can use when generating a reply. You could hook an AI assistant up to an order database, for example, which would allow it to accurately answer questions like “Does my order still qualify for a refund?”

RAG also plays an important part in managing language models’ well-known tendency to hallucinate. By drawing on the data contained within an authoritative source, these models become much less likely to fabricate information.

How Do I Know If I Have the Right Data for AI?

CX data tends to fall into two broad categories:

  1. Knowledge, like training manuals and PDFs
  2. Data from internal systems, like issue tickets, chats, call transcripts, etc.

Luckily for CX leaders, there’s usually enough of both lying around to meet an AI assistant’s need for data. Dozens of tools exist for tracking important information – customer profiles, information related to payment and shipping, and the like – and nearly all offer API endpoints that allow them to integrate with your existing technology stack.

What’s more, it’s best if this data looks and feels just like the data your human agents see, so you don’t need to curate a bespoke data repository. All of this is to say that you might already have everything you need for optimal AI performance, even if your sources are scattered or need to be updated.

Processing Data for Generative AI

Data processing work is far from trivial, and outsourcing it to a dedicated set of tools is often the wiser choice. A conversational AI platform built for generative AI should make it easy for you to program instructions for data processing.

That said, you might still need to work on cleaning and formatting the data, which can take some effort.

Understanding the steps involved in preparing data for AI is a big subject, but you’ll almost certainly need to do a mix of the following:

  • Extract: 80% of enterprise data exists in various unstructured formats, such as HTML pages, PDFs, CSV files, and images. This data has to be gathered, and you may have to “clean” it by removing unwanted content and irrelevant sections, just as you would for a human agent.
  • Transform: Your AI assistant will likely support answering various kinds of questions. If you’re using retrieval augmented generation, you may need to create a language “embedding” to answer those questions effectively, or you may need to prepare and enrich your answers so your assistant can find them more effectively.
  • Load: Finally, you will need to “feed” your AI assistant the answers stored in (say) a vector database.
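Put together, those extract-transform-load steps might look like this in miniature. The markup stripping and bag-of-words “embedding” are toy stand-ins for real document parsing and embedding models:

```python
import re

# Raw "documents": stand-ins for the HTML pages and PDFs most
# enterprise knowledge lives in.
RAW_DOCS = [
    "<p>Refunds are available within 30 days.</p>",
    "<p>Orders ship in 3 business days.</p>",
]

def extract(raw: str) -> str:
    # Strip markup, as you'd clean an HTML page or PDF export.
    return re.sub(r"<[^>]+>", "", raw).strip()

def transform(text: str) -> set:
    # Toy "embedding": a bag of lowercase words. A real pipeline would
    # call an embedding model and store a dense vector instead.
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def load(docs: list[str]) -> list:
    # Toy "vector store": (embedding, original text) pairs in memory.
    return [(transform(extract(d)), extract(d)) for d in docs]

STORE = load(RAW_DOCS)
```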

Remember: The GenAI data process isn’t trivial, but it’s also easier than you think, especially if you find the right partner. Quiq’s native “dataset transformation” functionality, for example, facilitates rewriting text, scrubbing unwanted characters, augmenting a dataset (by generating a summary of it), structuring it in new ways, and much more.

What Do I Need to Create Additional Data for AI?

As we said above, your existing data may already be sufficient for optimal AI performance. This isn’t always the case, however, and it’s worth saying a few words about when you will need to create a new resource for a model.

In our experience, the most common data gaps occur when common or important questions are not addressed anywhere in your documentation. Start by creating text about them that a model can use to generate replies, and then work your way out to questions that are less frequent.

One idea our clients use successfully is to ask human agents what questions they see most frequently. LOOP auto insurance, for example, turned these into an awesome, simple FAQ.

When you’re doing this, remember: it’s fine to start small. The quality of your supplementary content is more important than the quantity, and a few sentences in a single paragraph will usually do the trick.

The most important task is to make sure you have a framework to understand what data gaps you have so that you can improve. This could include analyzing previous questions or proactively labeling existing questions you don’t have answers for.

Wrapping Up

There’s no denying the significance of relevant data in AI advancements, but as we’ve hopefully made clear, you probably already have most of what you need, and the process of preparing it for AI is a lot more straightforward than many people think.

If you’re interested in learning more about optimal AI performance and how to achieve it, check out our free e-book addressing the misconceptions surrounding generative AI. Armed with the insights it contains, you can figure out how much AI could impact your contact center, and how to proceed.

Google Business Messaging is Ending – Here’s How You Should Adapt

Google Business Messaging (GBM) has long been one of the primary rich messaging channels for Android, but it’s now in the process of being phased out.

GBM is being sunsetted, but that doesn’t mean your customer experience has to suffer. This piece will walk you through the main alternatives to GBM, ensuring you have everything you need to keep your organization running smoothly.

What’s Happening with Google Business Messaging Exactly?

According to an announcement from Google, Google Business Messaging will be phased out on the following schedule:

  • July 15, 2024: GBM entry points disappear from Google Maps and Search, and it will no longer be possible to start GBM conversations from entry points on your website.
  • July 31, 2024: existing conversations end and the GBM service is shut down entirely.

What are the Alternatives to Google Business Messaging?

If you’re wondering which communication channel you should switch to now that GBM is going away, here are some you should consider, divided into two groups. The first group consists of the channels we personally recommend, based on our years of experience in customer service and contact center management. The second group covers communication channels that we still support but which, in our view, are less promising as alternatives to GBM.

Recommended Alternatives to Google Business Messaging

Here are the best channels to serve as replacements for GBM:

  • WhatsApp: WhatsApp enables text, voice, and video communications for over two billion global users. The platform includes several built-in features that appeal to businesses looking to forge deeper, more personal connections with their customers. Most importantly, it is a cross-platform messaging app, meaning it will allow you to chat with both Android and Apple users.
  • Text Messaging or Short Message Service (SMS): SMS is a long-standing staple for a reason, and with a conversational AI platform like Quiq, you can put large language models to work automating substantial parts of your SMS-based customer interactions.

Other Alternatives to Google Business Messaging

Here are the other channels you might look into.

  • Live web chat: When weighing whether to invest in live chat support, customer experience directors may encounter skepticism about how useful customers will find it. But with nearly a third of female internet users indicating that they prefer contacting support via live chat, it’s clearly worth the investment. This is especially true when live chat offers an interactive experience, readily available and helpful agents, and swift responses. There are plenty of ways to encourage your customers to actually use your live chat offering, including mentioning it during phone calls, linking to it in blog posts or emails, and promoting it on social media.
  • Apple Messages for Business: Unlike standard text messaging available on mobile phones, Apple Messages is a specialized service designed for businesses to engage with customers. It facilitates easy setup of touchpoints such as QR codes, apps, or email messages, enabling appointments, issue resolution, and payments, among other things.
  • Facebook Messenger: Facebook Messenger for Business enables brands to handle incoming queries efficiently, providing immediate responses through AI assistants or routing complex issues to human agents. Clients integrating with a tool like Quiq have seen massive ROI, including a 95% customer satisfaction (CSAT) score and a 70-80% automatic resolution rate for incoming customer inquiries. Like WhatsApp, Facebook Messenger is a cross-platform messaging app, meaning it can help you reach users on both Android and Apple devices.
  • Instagram: Instagram isn’t just for posting pictures anymore – your target audience is likely using it to discover brands, shop, and make purchases. They’re reaching out through direct messages (DMs), responding to stories, and commenting on posts. Instagram’s messaging API simplifies the handling of these customer interactions; it has automated features that help initiate conversations, such as Ice Breakers, as well as features that facilitate automated responses, such as Quick Replies. Integrating Quiq’s conversational AI with Instagram’s messaging API makes it easier to automate responses to frequently asked questions, thereby reducing the workload on your human agents.
  • X (formerly Twitter): With nearly 400 million registered users and native, secure payment options, X is not a platform you can ignore. And the data supports this – 50% of surveyed X users mentioned brands in their posts more than 15 times in seven months, 80% of surveyed X users referred to a brand in their posts, and 99% of X users encountered a brand-related post in just over a month. By utilizing X business messaging, you can connect with your customers directly, providing them with excellent service experiences. Over time, this approach helps you build strong relationships and positive brand perceptions. Remember, posts—even those related to customer service—occur publicly. Thus, a positive interaction satisfies your customer and showcases your company’s engagement quality to others. Even better, the X API enables you to send detailed messages while keeping the conversation within X’s platform. This avoids the need for customers to switch platforms, enhancing their overall satisfaction.

How to Switch Away From Google Business Messaging

Even though GBM is going the way of the Dodo, the good news is that you have tons of other options. Check out our dedicated pages to learn more about SMS, WhatsApp, and Facebook Messenger, and you’re warmly invited to consult with our team if you are currently using GBM with another managed service provider and are not sure what the best direction forward is!

Does Quiq Train Models on Your Data? No (And Here’s Why.)

Customer experience directors tend to have a lot of questions about AI, especially as it becomes more and more important to the way modern contact centers function.

These can range from “Will generative AI’s well-known tendency to hallucinate eventually hurt my brand?” to “How are large language models trained in the first place?” along with many others.

Speaking of training, one question that’s often top of mind for prospective users of Quiq’s conversational AI platform is whether we train the LLMs we use with your data. This is a perfectly reasonable question, especially given famous examples of LLMs exposing proprietary data, such as happened at Samsung. Needless to say, if you have sensitive customer information, you absolutely don’t want it getting leaked – and if you’re not clear on what is going on with an LLM, you might not have the confidence you need to use one in your contact center.

The purpose of this piece is to assure you that no, we do not train LLMs with your data. To hammer that point home, we’ll briefly cover how models are trained, then discuss the two ways that Quiq optimizes model behavior: prompt engineering and retrieval augmented generation.

How are Large Language Models Trained?

Part of the confusion stems from the fact that the term ‘training’ means different things to different people. Let’s start by clarifying what this term means, but don’t worry, we’ll go very light on technical details!

First, generative language models work with tokens, which are units of language such as a part of a word (“kitch”), a whole word (“kitchen”), or sometimes small clusters of words (“kitchen sink”). When a model is trained, it’s learning to predict the token that’s most likely to follow a string of prior tokens.

Once a model has seen a great deal of text, for example, it learns that “Mary had a little ____” probably ends with the token “lamb” rather than the token “lightbulb.”
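The idea can be illustrated with a toy next-token predictor that simply counts which token follows which. Real LLMs learn these probabilities with neural networks trained over billions of tokens, but the objective, predicting the next token, is the same:

```python
from collections import Counter, defaultdict

# Toy next-token prediction: count which token follows each token in a
# tiny corpus, then predict the most frequent follower. This is a loose
# illustration of the training objective, not how real LLMs work inside.
corpus = (
    "mary had a little lamb . "
    "mary had a little lamb whose fleece was white as snow . "
) * 3 + "a little lightbulb flickered . "

tokens = corpus.split()
follows = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    follows[prev][nxt] += 1

def predict_next(token: str) -> str:
    """Return the token most frequently observed after `token`."""
    return follows[token].most_common(1)[0][0]

print(predict_next("little"))  # prints "lamb": seen far more often than "lightbulb"
```

An actual model stores these learned statistics in its weights, which is exactly why training on sensitive text is a concern: the data becomes part of the model's internal structure.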

Crucially, this process involves changing the model’s internal weights, i.e. its internal structure. Quiq has various ways of optimizing a model to perform in settings such as contact centers (discussed in the next section), but we do not change any model’s weights.

How Does Quiq Optimize Model Behavior?

There are a few basic ways to influence a model’s output. The two used by Quiq are prompt engineering and retrieval augmented generation (RAG), neither of which does anything whatsoever to modify a model’s weights or its structure.

In the next two sections, we’ll briefly cover each so that you have a bit more context on what’s going on under the hood.

Prompt Engineering

Prompt engineering involves changing how you format the query you feed the model to elicit a slightly different response. Rather than saying, “Write me some social media copy,” for example, you might also include an example outline you want the model to follow.

Quiq uses an approach to prompt engineering called “atomic prompting,” wherein the process of generating an answer to a question is broken down into multiple subtasks. This ensures you’re instructing a Large Language Model in a smaller context with specific, relevant task information, which can help the model perform better.

This is not the same thing as training. If you were to train or fine-tune a model on company-specific data, then the model’s internal structure would change to represent that data, and it might inadvertently reveal it in a future reply. However, including the data in a prompt doesn’t carry that risk because prompt engineering doesn’t change a model’s weights.
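As a rough, hypothetical sketch of the atomic idea, here is how one answering task might be split into several small, focused prompts. The subtask names and templates below are illustrative, not Quiq's actual prompts:

```python
# Hypothetical "atomic prompting" sketch: instead of one sprawling prompt,
# each subtask gets its own small prompt containing only relevant context.
def build_prompt(task: str, context: str, question: str) -> str:
    """Compose a small, single-purpose prompt for one subtask."""
    return f"Task: {task}\nRelevant context:\n{context}\nQuestion: {question}"

# Each (task, context) pair would be sent to the model separately.
subtasks = [
    ("Classify the intent", "Supported intents: returns, shipping, billing"),
    ("Extract the order number, if any", "Order numbers look like ORD-#####"),
    ("Draft a reply using only the policy", "Returns accepted within 30 days"),
]

question = "Can I return order ORD-12345 after three weeks?"
prompts = [build_prompt(task, ctx, question) for task, ctx in subtasks]
```

None of this touches the model itself; the customer data lives only in the prompt text for the duration of the request.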

Retrieval Augmented Generation (RAG)

RAG refers to giving a language model an information source – such as a database or the Internet – that it can use to improve its output. It has emerged as the most popular technique to control the information the model needs to know when generating answers.

As before, that is not the same thing as training because it does not change the model’s weights.
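A minimal sketch of the retrieve-then-prompt pattern makes this concrete. Here retrieval is naive keyword overlap; production systems typically use vector embeddings and access controls, but the shape is the same:

```python
import re

# Minimal RAG sketch: fetch relevant text from an external source, then
# include it in the prompt. The model's weights never change; the
# knowledge lives entirely outside the model.
documents = {
    "returns": "Items can be returned within 30 days with a receipt.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

def words(text: str) -> set[str]:
    return set(re.findall(r"\w+", text.lower()))

def retrieve(question: str) -> str:
    """Return the document sharing the most words with the question."""
    return max(documents.values(), key=lambda d: len(words(d) & words(question)))

def build_grounded_prompt(question: str) -> str:
    return (
        "Answer using ONLY the source below.\n"
        f"Source: {retrieve(question)}\n"
        f"Question: {question}"
    )

prompt = build_grounded_prompt("Can items be returned after 30 days?")
```

Because the source text is supplied at query time rather than baked into the model, swapping or revoking an information source is as simple as changing what the retriever can see.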

RAG doesn’t modify the underlying model, but if you connect it to sensitive information and then ask it a question, it may very well reveal something sensitive. RAG is very powerful, but you need to use it with caution. Your AI development platform should provide ways to securely connect to APIs that can help authenticate and retrieve account information, thus allowing you to provide customers with personalized responses.

This is why you still need to think about security when using RAG. Whatever tools or information sources you give your model must meet the strictest security standards and be certified, as appropriate.

Quiq is one such platform, built from the ground up with data security (encryption in transit) and compliance (SOC 2 certified) in mind. We never store or use data without permission, and we’ve crafted our tools so it’s as easy as possible to utilize RAG on just the information stores you want to plug a model into. As a security-first company, we extend this standard to our use of large language models and our agreements with AI providers like Microsoft Azure OpenAI.

Wrapping Up on How Quiq Trains LLMs

Hopefully, you now have a much clearer picture of what Quiq does to ensure the models we use are as performant and useful as possible. With them, you can make your customers happier, improve your agents’ performance, and reduce turnover at your contact center.

If you’re interested in exploring some other common misconceptions that CX leaders face when considering incorporating generative AI into their technology stack, check out our ebook on the subject. It contains a great deal of information to help you make the best possible decision!

Request A Demo

Does GenAI Leak Your Sensitive Data? Exposing Common AI Misconceptions (Part Three)

This is the final post in a three-part series clarifying the biggest misconceptions holding CX leaders like you back from integrating GenAI into their CX strategies. Our goal? To assuage your fears and help you start getting real about adding an AI Assistant to your contact center — all in a fun “two truths and a lie” format.

There are few faux pas as damaging and embarrassing for brands as sensitive data getting into the wrong hands. So it makes sense that data security concerns are a major deterrent for CX leaders thinking about getting started with GenAI.

In the first post of our AI Misconceptions series, we discussed why your data is definitely good enough to make GenAI work for your business. Next, we explored the different types of hallucinations that CX leaders should be aware of, and how they are 100% preventable with the right guardrails in place.

Now, let’s wrap up our series by exposing the truth about GenAI potentially leaking your company or customer data.

Misconception #3: “GenAI inadvertently leaks sensitive data.”

As we discussed in part one, AI needs training data to work. One way to collect that data is from the questions users ask. For example, if a large language model (LLM) is asked to summarize a paragraph of text, that text could be stored and used to train future models.

Unfortunately, there have been some famous examples of companies’ sensitive information becoming part of datasets used to train LLMs — take Samsung, for instance. Because of this, CX leaders often fear that using GenAI will result in their company’s proprietary data being disclosed when users interact with these models.

Truth #1: Public GenAI tools use conversation data to train their models.

Tools like OpenAI’s ChatGPT and Google Gemini (formerly Bard) are public-facing and often free — and that’s because their purpose is to collect training data. This means that any information users enter while using these tools is fair game to be used for training future models.

This is precisely how the Samsung data leak happened. The company’s semiconductor division allowed its engineers to use ChatGPT to check their source code. Not only did multiple employees copy/paste confidential code into ChatGPT, but one team member even used the tool to transcribe a recording of an internal-only meeting!

Truth #2: Properly licensed GenAI is safe.

People often confuse ChatGPT, the application or web portal, with the LLM behind it. While the free version of ChatGPT collects conversation data, OpenAI offers an enterprise LLM that does not. Other LLM providers offer similar enterprise licenses that specify that all interactions with the LLM and any data provided will not be stored or used for training purposes.

When used through an enterprise license, LLMs are also Service Organization Control Type 2, or SOC 2, compliant. This means they have to undergo regular audits from third parties to prove that they have the processes and procedures in place to protect companies’ proprietary data and customers’ personally identifiable information (PII).

The Lie: Enterprises must use internally-developed models only to protect their data.

Given these concerns over data leaks and hallucinations, some organizations believe that the only safe way to use GenAI is to build their own AI models. Case in point: Samsung is now “considering building its own internal AI chatbot to prevent future embarrassing mishaps.”

However, it’s simply not feasible for companies whose core business is not AI to build AI that is as powerful as commercially available LLMs — even if the company is as big and successful as Samsung. Not to mention the opportunity cost and risk of having your technical resources tied up in AI instead of continuing to innovate on your core business.

It’s estimated that training the LLM behind ChatGPT cost upwards of $4 million. It also required specialized supercomputers and access to a dataset equivalent to nearly the entire Internet. And don’t forget about maintenance: AI startup Hugging Face recently revealed that retraining its BLOOM LLM cost around $10 million.


Using a commercially available LLM provides enterprises with the most powerful AI available without breaking the bank — and it’s perfectly safe when properly licensed. However, it’s also important to remember that building a successful AI Assistant requires much more than developing basic question/answer functionality.

Finding a Conversational CX Platform that harnesses an enterprise-licensed LLM, empowers teams to build complex conversation flows, and makes it easy to monitor and measure Assistant performance is a CX leader’s safest bet. Not to mention, your engineering team will thank you for giving them the control and visibility they want, without the risk and overhead of building it themselves!

Feel Secure About GenAI Data Security

Companies that use free, public-facing GenAI tools should be aware that any information employees enter can (and most likely will) be used for future model-training purposes.

However, properly-licensed GenAI will not collect or use your data to train the model. Building your own GenAI tools for security purposes is completely unnecessary — and very expensive!

Want to read more or revisit the first two misconceptions in our series? Check out our full guide, Two Truths and a Lie: Breaking Down the Major GenAI Misconceptions Holding CX Leaders Back.

Will GenAI Hallucinate and Hurt Your Brand? Exposing Common AI Misconceptions (Part Two)

This is the second post in a three-part series clarifying the biggest misconceptions holding CX leaders like you back from integrating GenAI into their CX strategies. Our goal? To assuage your fears and help you start getting real about adding an AI Assistant to your contact center — all in a fun “two truths and a lie” format.

Did you know that the Golden Gate Bridge was transported for the second time across Egypt in October of 2016?

Or that the world record for crossing the English Channel entirely on foot is held by Christof Wandratsch of Germany, who completed the crossing in 14 hours and 51 minutes on August 14, 2020?

Probably not, because GenAI made these “facts” up. They’re called hallucinations, and AI hallucination misconceptions are holding a lot of CX leaders back from getting started with GenAI.

In the first post of our AI Misconceptions series, we discussed why your data is definitely good enough to make GenAI work for your business. In fact, you actually need a lot less data to get started with an AI Assistant than you probably think.

Now, we’re debunking AI hallucination myths and separating some of the biggest AI hallucination facts from fiction. Could adding an AI Assistant to your contact center put your brand at risk? Let’s find out.

Misconception #2: “GenAI will hallucinate and hurt my brand.”

While the example hallucinations provided above are harmless and even a little funny, this isn’t always the case. Unfortunately, there are many examples of times chatbots have cussed out customers or made racist or sexist remarks. This causes a lot of concern among CX leaders looking to use an AI Assistant to represent their brand.

Truth #1: Hallucinations are real (no pun intended).

Understanding AI hallucinations hinges on realizing that GenAI wants to provide answers — whether or not it has the right data. Hallucinations like those in the examples above occur for two common reasons.

AI-Induced Hallucinations Explained:

  1. The large language model (LLM) simply does not have the correct information it needs to answer a given question. This is what causes GenAI to get overly creative and start making up stories that it presents as truth.
  2. The LLM has been given an overly broad and/or contradictory dataset. In other words, the model gets confused and begins to draw conclusions that are not directly supported in the data, much like a human would do if they were inundated with irrelevant and conflicting information on a particular topic.

Truth #2: There’s more than one type of hallucination.

Contrary to popular belief, hallucinations aren’t just incorrect answers: They can also be classified as correct answers to the wrong questions. And these types of hallucinations are actually more common and more difficult to control.

For example, imagine a company’s AI Assistant is asked to help troubleshoot a problem that a customer is having with their TV. The Assistant could give the customer correct troubleshooting instructions — but for the wrong television model. In this case, GenAI isn’t wrong, it just didn’t fully understand the context of the question.


The Lie: There’s no way to prevent your AI Assistant from hallucinating.

Many GenAI “bot” vendors attempt to fine-tune an LLM, connect clients’ knowledge bases, and then trust it to generate responses to their customers’ questions. This approach will always result in hallucinations. A common workaround is to pre-program “canned” responses to specific questions. However, this leads to unhelpful and unnatural-sounding answers even to basic questions, which then wind up being escalated to live agents.

In contrast, true AI Assistants powered by the latest Conversational CX Platforms leverage LLMs as a tool to understand and generate language — but there’s a lot more going on under the hood.

First of all, preventing hallucinations is not just a technical task. It requires a layer of business logic that controls the flow of the conversation by providing a framework for how the Assistant should respond to users’ questions.

This framework guides a user down a specific path that enables the Assistant to gather the information the LLM needs to give the right answer to the right question. This is very similar to how you would train a human agent to ask a specific series of questions before diagnosing an issue and offering a solution. Meanwhile, in addition to understanding what the intent of the customer’s question is, the LLM can be used to extract additional information from the question.

Referred to as “pre-generation checks,” these filters are used to determine attributes such as whether the question was from an existing customer or prospect, which of the company’s products or services the question is about, and more. These checks happen in the background in mere seconds and can be used to select the right information to answer the question. Only once the Assistant understands the context of the client’s question and knows that it’s within scope of what it’s allowed to talk about does it ask the LLM to craft a response.

But the checks and balances don’t end there: The LLM is only allowed to generate responses using information from specific, trusted sources that have been pre-approved, and not from the dataset it was trained on.

In other words, humans are responsible for providing the LLM with a source of truth that it must “ground” its response in. In technical terms, this is called Retrieval Augmented Generation, or RAG — and if you want to get nerdy, you can read all about it here!

Last but not least, once a response has been crafted, a series of “post-generation checks” happens in the background before returning it to the user. You can check out the end-to-end process in the diagram below:

[Diagram: end-to-end response flow, from pre-generation checks through RAG to post-generation checks]
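The flow described above can also be sketched in miniature. The rules, sources, and function names here are illustrative assumptions, not Quiq's implementation:

```python
from typing import Optional

# Hypothetical sketch: pre-generation checks scope the question and gather
# missing context, the reply is grounded only in pre-approved sources, and
# post-generation checks screen the draft before the user ever sees it.
APPROVED_SOURCES = {
    "model-a": "Model A: hold the power button for 5 seconds to reset.",
    "model-b": "Model B: unplug the TV for 30 seconds to reset.",
}

def pre_generation_checks(question: str, tv_model: Optional[str]) -> Optional[str]:
    """Return a clarifying or deflecting reply, or None if generation may proceed."""
    if "tv" not in question.lower():
        return "I can only help with TV support questions."   # out of scope
    if tv_model not in APPROVED_SOURCES:
        return "Which TV model do you have?"                  # gather context first
    return None

def generate_grounded_reply(tv_model: str) -> str:
    # Stand-in for an LLM call that must ground itself in the approved source.
    return f"Per our support guide: {APPROVED_SOURCES[tv_model]}"

def post_generation_checks(reply: str) -> bool:
    banned = ("guarantee", "refund")   # e.g. promises agents aren't allowed to make
    return not any(word in reply.lower() for word in banned)

def answer(question: str, tv_model: Optional[str]) -> str:
    early_reply = pre_generation_checks(question, tv_model)
    if early_reply:
        return early_reply
    draft = generate_grounded_reply(tv_model)
    return draft if post_generation_checks(draft) else "Let me connect you with an agent."
```

Note how the TV-model question from the example earlier is handled: the Assistant refuses to answer until it knows which model it is troubleshooting, which is precisely what prevents the "right answer, wrong question" class of hallucination.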

Give Hallucination Concerns the Heave-Ho

To sum it up: Yes, hallucinations happen. In fact, there’s more than one type of hallucination that CX leaders should be aware of.

However, now that you understand the reality of AI hallucination, you know that it’s totally preventable. All you need are the proper checks, balances, and guardrails in place, both from a technical and a business logic standpoint.

Now that you’ve had your biggest misconceptions about AI hallucination debunked, keep an eye out for the next blog in our series, all about GenAI data leaks. Or, learn the truth about all three of CX leaders’ biggest GenAI misconceptions now when you download our guide, Two Truths and a Lie: Breaking Down the Major GenAI Misconceptions Holding CX Leaders Back.


Is Your CX Data Good Enough for GenAI? Exposing Common AI Misconceptions (Part One)

If you’re feeling unprepared for the impact of generative artificial intelligence (GenAI), you’re not alone. In fact, nearly 85% of CX leaders feel the same way. But the truth is that the transformative nature of this technology simply can’t be ignored — and neither can your boss, who asked you to look into it.

We’ve all heard horror stories of racist chatbots and massive data leaks ruining brands’ reputations. But we’ve also seen statistics around the massive time and cost savings brands can achieve by offloading customers’ frequently asked questions to AI Assistants. So which is it?

This is the first post in a three-part series clarifying the biggest misconceptions holding CX leaders like you back from integrating GenAI into their CX strategies. Our goal? To assuage your fears and help you start getting real about adding an AI Assistant to your contact center — all in a fun “two truths and a lie” format. Prepare to have your most common AI misconceptions debunked!

Misconception #1: “My data isn’t good enough for GenAI.”

Answering customer inquiries usually requires two types of data:

  1. Knowledge (e.g. an order return policy) and
  2. Information from internal systems (e.g. the specific details of an order).

It’s easy to get caught up in overthinking the impact of data quality on AI performance and wondering whether or not your knowledge is even good enough to make an AI Assistant useful for your customers.

Updating hundreds of help desk articles is no small task, let alone building an entire knowledge base from scratch. Many CX leaders are worried about the amount of work it will take to clean up their data and whether their team has enough resources to support a GenAI initiative. And for GenAI to be as effective as a human agent, it needs the same level of access to internal systems.

Truth #1: You have to have some amount of data.

Data is necessary to make AI work — there’s no way around it. You must provide some data for the model to access in order to generate answers. This is one of the most basic AI performance factors.

But we have good news: You need a lot less data than you think.

One of the most common myths about AI and data in CX is that it’s necessary to answer every possible customer question. Instead, focus on ensuring you have the knowledge necessary to answer your most frequently asked questions. This small step forward will have a major impact for your team without requiring a ton of time and resources to get started.

Truth #2: Quality matters more than quantity.

Given the importance of relevant data in AI, a few succinct paragraphs of accurate information are better than volumes of outdated or conflicting documentation. But even then, don’t sweat the small stuff.

For example, did a product name change fail to make its way through half of your help desk articles? Are there unnecessary hyperlinks scattered throughout? Was it written for live agents versus customers?

No problem — the right Conversational CX Platform can easily address these AI data dependency concerns without requiring additional support from your team.

The Lie: Your data has to be perfectly unified and specifically formatted to train an AI Assistant.

Don’t worry if your data isn’t well-organized or perfectly formatted. The reality is that most companies have services and support materials scattered across websites, knowledge bases, PDFs, .csvs, and dozens of other places — and that’s okay!

Today, the tools and technology exist to make aggregating this fragmented data a breeze. They’re then able to cleanse and format it in a way that makes sense for a large language model (LLM) to use.

For example, if you have an agent training manual in Google Docs and a product manual as a PDF, this information can be disassembled, reformatted, and rewritten by AI-powered transformations into a format an LLM can readily use.
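A hypothetical sketch of that aggregation step, where the loader functions are stand-ins for real parsers (a PDF extractor, a help-center crawler, and so on):

```python
# Illustrative sketch of unifying fragmented material into one common
# format. Each source, whatever its original shape, becomes a list of
# uniform text chunks tagged with provenance metadata.
def load_google_doc() -> str:
    return "Agent manual: greet the customer, then verify the account."

def load_pdf_manual() -> str:
    return "Product manual: to pair the device, hold the side button."

def to_chunks(text: str, source: str, size: int = 40) -> list[dict]:
    """Normalize any source into uniform chunks tagged with their origin."""
    return [
        {"source": source, "text": text[i : i + size]}
        for i in range(0, len(text), size)
    ]

corpus = to_chunks(load_google_doc(), "google-doc") + to_chunks(load_pdf_manual(), "pdf")
```

Keeping the provenance tag on every chunk is what later lets the platform remove a source's content when the underlying document is deleted.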

What’s more, the data used by your AI Assistant should be consistent with the data you use to train your human agents. This means that building a special repository of information for your AI Assistant to learn from is not only unnecessary, it’s not recommended. The very best AI platforms take on the work of maintaining this continuity by automatically processing and formatting new information for your Assistant as it’s published, as well as removing any information that’s been deleted.

Put Those Data Doubts to Bed

Now you know that your data is definitely good enough for GenAI to work for your business. Yes, quality matters more than quantity, but it doesn’t have to be perfect.

The technology exists to unify and format your data so that it’s usable by an LLM. And providing knowledge around even a handful of frequently asked questions can give your team a major lift right out the gate.

Keep an eye out for the next blog in our series, all about GenAI hallucinations. Or, learn the truth about all three of CX leaders’ biggest GenAI misconceptions now when you download our guide, Two Truths and a Lie: Breaking Down the Major GenAI Misconceptions Holding CX Leaders Back.


9 Top Customer Service Challenges — and How to Overcome Them

It’s a shame that customer service doesn’t always get the respect and attention it deserves because it’s among the most important ingredients in any business’s success. There’s no better marketing than an enthusiastic user base, so every organization should strive to excel at making customers happy.

Alas, this is easier said than done. When someone comes to you with a problem, they can be angry, stubborn, mercurial, and—let’s be honest—extremely frustrating. Some of this just comes with the territory, but some stems from the fact that many customer service professionals simply don’t have a clear, comprehensive view of customer service challenges or how to overcome them.

That’s what we’re going to remedy in this post. Let’s jump right in!

What are The Top Customer Service Challenges?

After years of running a generative AI platform for contact centers and interacting with leaders in this space, we have discovered that the top customer service challenges are:

  1. Understanding Customer Expectations
  2. Exceeding Customer Expectations
  3. Dealing with Unreasonable Customer Demands
  4. Improving Your Internal Operations
  5. Not Offering a Preferred Communication Channel
  6. Not Offering Real-Time Options
  7. Handling Angry Customers
  8. Dealing With a Service Outage Crisis
  9. Retaining, Hiring, and Training Service Professionals

In the sections below, we’ll break each of these down and offer strategies for addressing them.

1. Understanding Customer Expectations

No matter how specialized a business is, it will inevitably cater to a wide variety of customers. Every customer has different desires, expectations, and needs regarding a product or service, which means you need to put real effort into meeting them where they are.

One of the best ways to foster this understanding is to remain in consistent contact with your customers. Deciding which communication channels to offer customers depends a great deal on the kinds of customers you’re serving. That said, in our experience, text messaging is a universally successful method of communication because it mimics how people communicate in their personal lives. The same goes for web chat and WhatsApp.

Beyond this, setting the right expectations upfront is another good way to address common customer service challenges. For example, if you are not available 24/7, only provide support via email, or don’t have dedicated account managers, you should make that clear right at the beginning.

Nothing will make a customer angrier than thinking they can text you only to realize that’s not an option in the middle of a crisis.

2. Exceeding Customer Expectations

Once you understand what your customers want and need, the next step is to go above and beyond to make them happy. Everyone wants to stand out in a fiercely competitive market, and going the extra mile is a great way to do that. One of the major customer service challenges is knowing how to do this proactively, but there are many ways you can succeed without a huge amount of effort.

Consider a few examples, such as:

  • Treating the customer as you would a friend in your personal life, i.e. by apologizing for any negative experiences and empathizing with how they feel;
  • Offering a credit or discount for a future purchase;
  • Sending them a card referencing their experience and thanking them for being a loyal customer.

The key is making sure they feel seen and heard. If you do this consistently, you’ll exceed your customers’ expectations, and the chances of them becoming active promoters of your company will increase dramatically.

3. Dealing with Unreasonable Customer Demands

Of course, sometimes a customer has expectations that simply can’t be met, and this, too, counts as one of the serious customer service challenges. Customer service professionals often find themselves in situations where someone wants a discount that can’t be given, a feature that can’t be built, or a bespoke customization that can’t be done, and they wonder what they should do.

The only thing to do in this situation is to gently let the customer down, using respectful and diplomatic language. Something like, “We’re really sorry we’re not able to fulfill your request, but we’d be happy to help you choose an option that we currently have available” should do the trick.

4. Improving Your Internal Operations

Customer service teams face constant pressure to improve efficiency, maintain high CSAT scores, drive revenue, and keep the cost of servicing customers low. This matters a lot; slow response times and being kicked from one department to another are two of the more common complaints contact centers get from irate customers, and both are fixable with appropriate changes to your procedures.

Improving contact center performance is among the thorniest customer service challenges, but there’s no reason to give up hope!

One thing you can do is gather and utilize better data regarding your internal workflows. Data has been called “the new oil,” and with good reason—when used correctly, it’s unbelievably powerful.

What might this look like?

Well, you are probably already tracking metrics like first contact resolution (FCR) and average handle time (AHT), but tracking them is easier when you have a unified, comprehensive dashboard that gives you quick insight into what’s happening across your organization.

You might also consider leveraging the power of generative AI, which has led to AI assistants that can boost agent performance in a variety of different tasks. You have to tread lightly here because too much bad automation will also drive customers away. But when you use technology like large language models according to best practices, you can get more done and make your customers happier while still reducing the burden on your agents.

5. Not Offering a Preferred Communication Channel

In general, contact centers often deal with customer service challenges stemming from new technologies. One way this can manifest is the need to cultivate new channels in line with changing patterns in the way we all communicate.

You can probably see where this is going – something like 96% of Americans have some kind of cell phone, and if you’ve looked up from your own phone recently, you’ve probably noticed everyone else glued to theirs.

It isn’t just that customers now want to be able to text you instead of calling or emailing; the ubiquity of cell phones has changed their basic expectations. They now take it for granted that your agents will be available round the clock, that they can chat with an agent asynchronously as they go about other tasks, etc.

We can’t tell you whether it’s worth investing in multiple communication channels for your industry. But based on our research, we can tell you that having multiple channels—and text messaging in particular—is something most people want and expect.

6. Not Offering Real-Time Options

When customers reach out asking for help, their problems likely feel unique to them. But since you have so much more context, you’re aware that a very high percentage of inquiries fall into a few common buckets, like “Where is my order?”, “How do I handle a return?”, “My item arrived damaged, how can I exchange it for a new one?”, etc.

These and similar inquiries can easily be resolved instantly using AI, leaving customers and agents happier and more productive.

7. Handling Angry Customers

A common story in the customer service world involves an interaction going south and a customer getting angry.

Gracefully handling angry customers is one of those perennial customer service challenges; the very first merchants had to deal with angry customers, and our robot descendants will be dealing with angry customers long after the sun has burned out.

Whenever you find yourself dealing with a customer who has become irate, there are two main things you have to do:

  1. Empathize with them
  2. Do not lose your cool

It can be hard to remember, but the customer isn’t frustrated with you, they’re frustrated with the company and products. If you always keep your responses calm and rooted in the facts of the situation, you’ll always be moving toward providing a solution.

8. Dealing With a Service Outage Crisis

Sometimes, our technology fails us. The Wi-Fi isn’t working on the airplane, a cell phone tower is down following a lightning storm, or that printer from Office Space jams so often it starts to drive people insane.

As a customer service professional, you might find yourself facing the wrath of your customers if your service is down. Unfortunately, in a situation like this, there’s not much you can do except honestly convey to your customers that your team is putting all their effort into getting things back on track. You should go into these conversations expecting frustrated customers, but make sure you avoid the temptation to overpromise.

Talk with your tech team and give customers a realistic timeline; don’t assure them it’ll be back in three hours if you have no way to back that up. Though Elon Musk seems to get away with it, the worst thing the rest of us can do is repeatedly promise unrealistic timelines and miss the mark.

9. Retaining, Hiring, and Training Service Professionals

You may have seen this famous Maya Angelou quote, which succinctly captures what the customer service business is all about:

“I’ve learned that people will forget what you said, people will forget what you did, but people will never forget how you made them feel.”

Learning how to comfort or reassure a person is high on the list of customer service challenges, and it’s something that should certainly be covered in your training for new agents.

But training is also important because it eases the strain on agents and reduces turnover. For customer service professionals, median tenure at a single company is less than a year, and every time someone leaves, that means finding a replacement, training them, and hoping they don’t head for the exits before your investment has paid off.

Keeping your agents happy will save you more money than you imagine, so invest in a proper training program. Ensure they know what’s expected of them, how to ask for help when needed, and how to handle challenging customers.

Final Thoughts on the Top Customer Service Challenges

Customer service challenges abound, but with the right approach, there’s no reason you shouldn’t be able to meet them head-on!

Check out our report for a more detailed treatment of three major customer service challenges and how to resolve them. Between the report and this post, you should be armed with enough information to identify your own internal challenges, fix them, and rise to new heights.

Request A Demo

Everything You Need To Know About The Role Of Vector Databases In AI for CX

All businesses are influenced by the emergence of new technologies, and contact centers are no different. In the constant battle to provide a better experience for agents and customers, contact center managers and their technical partners are always on the lookout for new tools that will make everyone’s lives easier.

We’ve talked a lot about this subject, and today we’re going to continue this streak by diving into the fundamentals of vector databases. If you’re researching the potential of generative AI for your CX teams, vector databases and their role in AI for customer experience is a key strategic component to understand.

Why You Should Care About Vector Databases

Vector databases matter because, amongst many other things, they help you understand how your AI experience is working and where you can improve. If you pick a vendor that has an integrated vector database, you’ll want to make sure that the toolkit gives you visibility into how your data is stored.

AI is impacting use cases across the enterprise. Organizations are therefore identifying which use cases are core to their differentiation and where they have unique data.

Most enterprises choose to buy CX solutions since the industry is so well-developed and mature. With this next generation of AI, vector databases are a critical part of the stack — and we will explain why in this article.

We’ll also touch on why you should choose an AI software vendor with an integrated vector database offering (Pro tip: This is how you get all the benefits with none of the risks).

Why Are Vector Databases Useful in Building an AI Assistant for CX?

As you may know, databases are essentially like warehouses where various kinds of information can be stored, and a vector database is just a warehouse whose function is to store vectors.

A vector is essentially a high-dimensional mathematical representation of something like an image or a word. There are many ways of generating vectors, but at the end of the process, what you’ll have is an array of floating-point (i.e., decimal) numbers that looks like this:

[0.8, 1.1, -0.4, 21.3, …, 17.8]

A vector embedding for a word might contain thousands of these floating-point numbers, and a corpus of text might contain thousands of words that need to be embedded. This is far too much information to store in a spreadsheet or .txt file, so vector databases were invented to hold these data structures and make them easy to access. In addition, a dedicated vector database will have all sorts of special functions that allow you to calculate the similarity of different vectors, search over them with a query, and do myriad other things people do with data.
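To make the similarity idea concrete, here is a minimal Python sketch of cosine similarity, a standard way of comparing embedding vectors. The three-element vectors and the word labels are hand-picked illustrations, not real model output; real embeddings have hundreds or thousands of dimensions.

```python
import math

def cosine_similarity(a, b):
    """Score how similar two vectors are (1.0 = pointing the same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Illustrative vectors only: pretend an embedding model produced these.
jacket = [0.8, 1.1, -0.4]
coat = [0.7, 1.0, -0.5]
invoice = [-2.1, 0.3, 4.0]

print(cosine_similarity(jacket, coat))     # high: semantically close words
print(cosine_similarity(jacket, invoice))  # low: unrelated concepts
```

A dedicated vector database performs comparisons like this at scale, using specialized indexes so it doesn’t have to score every stored vector one by one.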

The reason this impacts building AI assistants for CX use cases is that much of the power of these tools comes from the underlying vectors. If you build an application that’s able to dynamically answer user questions based on your internal documentation, then it will almost certainly be working with vector embeddings of those documents.

You might wonder why traditional relational databases or NoSQL databases couldn’t be used for this purpose. It’s possible that they could, but different kinds of databases are optimized for different use cases. Relational databases, for example, are excellent at storing structured data, such as customer IDs, purchase histories, etc.

How Does a Vector Database Work for AI Assistants?

There are really only a few things happening inside a vector database when we focus on the main concepts.

First, you have your content, which is whatever you want to vectorize. This content is passed into an embedding model, and that model generates the embeddings we discussed above. Those embeddings are stored in the vector database where an AI assistant can use them, and there’s always some pointer tying each vector to the content that was used to generate it.

When your AI assistant needs to use these embeddings, it does so with a query. This query is vectorized using the same embedding model that generated the vectors in the database, and any vectors that are similar to the query can, therefore, be located quickly and efficiently. Because each vector remains tied to its originating content, that content can be returned to the application.
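The store-then-query flow above can be sketched with a toy in-memory store. The `ToyVectorStore` class and the hand-picked three-element vectors are purely illustrative stand-ins: a real vector database uses approximate-nearest-neighbor indexes, and the vectors would come from an actual embedding model.

```python
import math

def cosine(a, b):
    """Similarity score between two vectors (higher = more alike)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

class ToyVectorStore:
    """Minimal in-memory stand-in for a vector database."""
    def __init__(self):
        self.entries = []  # (vector, original content) pairs

    def add(self, vector, content):
        # Each vector stays tied to the content that produced it.
        self.entries.append((vector, content))

    def query(self, query_vector, k=1):
        # Rank stored vectors by similarity to the query, return the tied content.
        ranked = sorted(self.entries, key=lambda e: cosine(e[0], query_vector), reverse=True)
        return [content for _, content in ranked[:k]]

# Pretend these vectors all came from the same embedding model.
store = ToyVectorStore()
store.add([0.9, 0.1, 0.0], "How to exchange a coat for a different size")
store.add([0.0, 0.2, 0.9], "Updating your billing address")

# "Can I swap my jacket for a small?" embedded with that same model.
print(store.query([0.8, 0.2, 0.1], k=1))
# → ['How to exchange a coat for a different size']
```

The key detail is that the query is embedded with the same model as the stored content; that shared vector space is what makes the similarity ranking meaningful.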

To concretize this, suppose you had a vector database containing a lot of content related to retail, and your AI assistant submits a query like: “My new jacket arrived in a medium. Can I exchange it for a small?” The database will be able to locate articles containing relevant information based on the similarity between the vectors for the query and the vectors in the database.

Importantly, this is not a simple keyword search. The vector database will return useful results even if there are no strict word matches at all. So, if the retail content says “coat” instead of jacket and “return” instead of exchange, it’ll still match the content to the query and give you something worthwhile.

How Vector Databases Supercharge AI Assistants

What would you be able to do if you took all of your FAQs, product catalogs, documentation, past conversations, etc., and created embeddings from them?

Well, suppose a customer shows up and asks a fairly basic question about your product. You could vectorize their question and match it against your database, returning relevant material even if the query is phrased in different words (or even an entirely different language).

Or suppose an agent wants to see if the thorny issue they’re dealing with relates to anything other agents have had to tackle in the past. As in the previous example, the agent can submit their conversation to the vector database and turn up similar interactions that have taken place, even if the language is different.

Advantages of AI Vector Databases

Vector databases have many compelling properties that make them popular for working with diverse data types.

First, this data tends to be “high-dimensional,” which is a more precise way of saying “big and complicated.” The way vector databases store and index high-dimensional data means that they operate with a speed and efficiency that would be hard to achieve if you stored the same data in a traditional database.

Then, it turns out that a lot of data can be vectorized. We already mentioned words and images, but you can also turn audio, connected graphs (such as those used to represent social networks), and many other kinds of data into embeddings. Even better, it’s often possible to create “multi-modal embeddings” to simultaneously represent a video’s audio, images, and text. This means you could use simple, textual queries to search over hundreds of hours of audio conversations with customers and textual transcripts, for example.

Finally, vector databases offer support for many complex analytics and machine-learning tasks. They can be used to build recommendation systems, perform sentiment analysis, or power generative AI applications.

As impressive as all this is, you probably don’t want to spend too much time thinking about the intricacies of a specialized database.

Managing a vector database is resource-intensive and can be complicated. So, one option we offer at Quiq is a straightforward GUI (graphical user interface) called AI Studio that allows you to load your data into a vector database that’s integrated directly into our platform.

Challenges and Considerations of Vector Databases

For all this, vector databases do, of course, have their drawbacks.

To begin with, vector databases are very specialized tools. While they are wonderful for working with the high-dimensional data that will power AI assistants in a contact center, they are not well-suited to storing tabular data. This means you’ll probably need to accommodate a traditional database and its vector-optimized counterpart – unless you work with a conversational AI vendor that has one built in.

There’s also a lot to think about regarding how it integrates with your existing data infrastructure. These days, most vector database companies consider this problem carefully and try to design their systems so that they’re easy to integrate with the rest of your stack.

But, as with everything else, actually going through the steps will require time and energy from your engineers. That said, there are many options for getting the job done. For example, if you partner with Quiq, we enable teams to build out AI assistants in an environment created specifically for this purpose: AI Studio.

Why does any of this matter when you’re exploring the options of introducing generative AI?

In a nutshell: vector databases are critical to safely and effectively using an AI assistant in your organization. But working with such a specialized technology is far from trivial, which is why so many teams choose instead to partner with a vendor that can handle vector management for them, or that can provide a tool making it easier to handle on their own.

If you have already decided to move forward with a vector database and don’t have multiple engineers to throw at the problem, this is what you should be looking for. Get in touch if you want to talk over your options.

Future Trends and Developments for Vector Databases

In this penultimate section, we’ll speculate a bit about where vector databases are heading.

Let’s begin with an easy prediction: vector databases will become more widely used and important. As generative AI continues to rise, there will be more places to utilize vectors, and as such, more companies will turn to them to store embeddings of their datasets.

But, we also think that many of these companies will then have to take a sober look at their cost structure. Vectors are flexible data structures that are uniquely able to power applications like search based on retrieval augmented generation (RAG), but they’re not equally applicable to every problem.

Finally, the trends indicate that the vector databases of the future will have a wider range of capabilities. As things stand, they’re mostly built around doing various kinds of search based on the similarity of the underlying vectors. But there’s no reason they couldn’t handle exact matches, too. Together, these would allow you to get a broad, contextual overview and a precise, targeted result.

In the same vein, vector databases will eventually support other vector-based tasks, like classifying vectors or creating vector clusters. This would make it easier to do anomaly detection and similar kinds of unsupervised learning work.

Final Thoughts on Vector Databases

Vector databases are a remarkable technology that is especially important in the age of generative AI, and their rise is part of a bigger shift toward leveraging AI for many tasks.

That said, for contact center teams that are thinking about building a homegrown AI solution for CX, it’s critical to be realistic about the role that vector databases play in building a solution. It’s equally important to plan ahead to mitigate the risks by bringing on support to help make the project successful.

Quiq’s AI offering features an integrated vector database, and partnering with us means one less thing to worry about. Reach out if you’d like to learn more.

Request A Demo

5 Tips for Coaching Your Contact Center Agents to Work with AI

Generative AI has enormous potential to change the work done at places like contact centers. For this reason, we’ve spent a lot of energy covering it, from deep dives into the nuts and bolts of large language models to detailed advice for managers considering adopting it.

Here, we will provide tips on using AI tools to coach, manage, and improve your agents.

How Will AI Make My Agents More Productive?

Contact centers can be stressful places to work, but much of that stems from a paucity of good training and feedback. If an agent doesn’t feel confident in assuming their responsibilities or doesn’t know how to handle a tricky situation, that will cause stress.

Tip #1: Make Collaboration Easier

With the right AI tools for coaching agents, you can get state-of-the-art collaboration tools that allow agents to invite their managers or colleagues to silently appear in the background of a challenging issue. The customer never knows there’s a team operating on their behalf, but the agent won’t feel as overwhelmed. These same tools also let managers dynamically monitor all their agents’ ongoing conversations, intervening directly if a situation gets out of hand.

Agents can learn from these experiences and improve their performance over time.

Tip #2: Use Data-Driven Management

Speaking of improvement, a good AI platform will have resources that help managers get the most out of their agents in a rigorous, data-driven way. Of course, you’re probably already monitoring contact center metrics, such as CSAT and FCR scores, but this barely scratches the surface.

What you really need is a granular look into agent interactions and their long-term trends. This will let you answer questions like “Am I overstaffed?” and “Who are my top performers?” This is the only way to run a tight ship and keep all the pieces moving effectively.

Tip #3: Use AI To Supercharge Your Agents

As its name implies, generative AI excels at generating text, and there are several ways this can improve your contact center’s performance.

To start, these systems can sometimes answer simple questions directly, which reduces the demands on your team. Even when that’s not the case, however, they can help agents draft replies, or clean up already-drafted replies to correct errors in spelling and grammar. This, too, reduces their stress, but it also contributes to customers having a smooth, consistent, high-quality experience.

Tip #4: Use AI to Power Your Workflows

A related (but distinct) point concerns how AI can be used to structure the broader work your agents are engaged in.

Let’s illustrate using sentiment analysis, which makes it possible to assess the emotional state of a person doing something like filing a complaint. This can form part of a pipeline that sorts and routes tickets based on their priority, and it can also detect when an issue needs to be escalated to a skilled human professional.
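As an illustration, the routing step of such a pipeline might look like the following sketch. The thresholds and queue names here are invented for the example; in a real pipeline, the score would come from an upstream sentiment model and the queues would be your own.

```python
def route_ticket(ticket):
    """Route a ticket using a sentiment score in [-1, 1] from an upstream model.

    Thresholds and queue names are illustrative, not a production policy.
    """
    score = ticket["sentiment"]
    if score < -0.6:
        return "escalate_to_human"  # very negative: skilled agent, top priority
    if score < 0.0:
        return "priority_queue"     # mildly negative: handle soon
    return "standard_queue"         # neutral or positive: normal handling

tickets = [
    {"id": 1, "sentiment": -0.8},
    {"id": 2, "sentiment": -0.2},
    {"id": 3, "sentiment": 0.5},
]
print([route_ticket(t) for t in tickets])
# → ['escalate_to_human', 'priority_queue', 'standard_queue']
```

The same pattern extends naturally: you could add more signals (customer tier, issue category) as extra fields on the ticket and fold them into the routing decision.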

Tip #5: Train Your Agents to Use AI Effectively

It’s easy to get excited about what AI can do to increase your efficiency, but you mustn’t lose sight of the fact that it’s a complex tool your team needs to be trained to use. Otherwise, it’s just going to be one more source of stress.

You need to have policies around the situations in which it’s appropriate to use AI and the situations in which it’s not. These policies should address how agents should deal with phenomena like “hallucination,” in which a language model will fabricate information.

They should also contain procedures for monitoring the performance of the model over time. Because these models are stochastic, they can generate surprising output, and their behavior can change.

You need to know what your model is doing to intervene appropriately.

Wrapping Up

Hopefully, you’re more optimistic about what AI can do for your contact center, and this has helped you understand how to make the most out of it.

If there’s anything else you’d like to go over, you’re always welcome to request a demo of the Quiq platform. Since we focus on contact centers, we take customer service pretty seriously ourselves, and we’d love to give you the context you need to make the best possible decision!

Request A Demo

4 Reasons Why Every Hotel Needs an AI Assistant

Artificial intelligence (AI) has been all the rage for the past year, owing to its remarkable abilities to generate convincing text (and video!), automate major parts of different jobs, and boost the productivity of everyone using it.

Naturally, this has sparked the interest of professionals in the hospitality sector, which will be our focus today. We’ll talk about how AI assistants can be used in hotels, the size of the relevant market, and some potential issues you should look out for.

It’s an exciting topic, so let’s dive right in!

What is an AI Assistant for a Hotel?

Leaving aside a bit of nuance, the phrase “AI assistant” broadly covers using algorithmic technologies such as large language models to “assist” in various aspects of your work. A very basic example is the bundle of spell checkers, suggested edits, and autocomplete that is all but ubiquitous in text editors, email clients, and blogging platforms; a more involved example would be carefully crafting a prompt to generate convincing copy to sell a product or service.

If you’re interested in digging in further, check out some of our earlier posts for more details.

What is the Importance of Artificial Intelligence in the Hotel Industry?

In the next section, we cover the nuts and bolts of what AI assistants can do to streamline your operations, reduce the burden on your (human) staff, and improve the experience of guests staying at your hotel.

But in this one, we’re just going to talk dollars and cents. And to be clear, there are a lot of dollars and cents on the table. Experts who’ve studied the potential market for AI assistants in hospitality believe that it was worth something like $90 million in 2022, and this figure is expected to climb to an eye-watering $8 billion over the next decade.

“Hang on,” you’re thinking to yourself. “That’s great for the investors who fund these companies and the early employees that work in them, but the fact that a market is worth a lot of money doesn’t mean it’s actually going to have much impact on day-to-day hospitality.”

We admire your skeptical mind, and this is indeed a worthwhile concern. AI, after all, is renowned for its ups and downs; there’ll be years of frenzied excitement and near-delirious predictions that entire segments of the economy are poised for complete automation, followed by “AI winters” so deep even Ned Stark can’t get warm behind the walls of Winterfell.

Making the case that AI in hospitality will, in fact, be a trend worth thinking about is our next task.

The 4 Reasons Every Hotel Should be Using an AI Assistant

As promised, we’ll now cover all the reasons why you should seriously investigate the potential of AI assistants in your hotel. To paraphrase a famous saying, “Fortune favors the innovative,” and you can’t afford to ignore such a transformative technology.

#1 AI Assistants Can Help Drive Bookings and Sales

There are many ways in which AI will change the hotel booking process because it can act as a dynamic tool for enhancing guest interactions and driving sales directly through your hotel’s website. To start, AI assistants can significantly reduce the likelihood of potential guests abandoning their bookings midway by providing real-time answers to their questions, alleviating doubts about the details of a stay, and offering instant booking confirmations. Not only do such seamless experiences simplify the booking process, they also contribute to an increase in direct bookings – a crucial advantage for hotels, as it eliminates commission payouts and boosts profitability.

But that’s not all. These assistants are increasingly being integrated into social media and instant messaging platforms, enabling guests to start the booking process through their preferred channel or, failing that, redirecting them to the main hotel booking system. Throughout, they can proactively gather information about the guests’ preferences and budget, making tailored recommendations that increase the likelihood of conversion.

As you’re no doubt aware, a hotel doesn’t just make its money from bookings – there are also many opportunities for upselling and cross-selling hotel services. This, too, is a place where AI assistants can help. While interacting with a potential customer, they can suggest additional breakfast options, spa appointments, room upgrades, etc., based on the customer’s current selection and previous interactions with you.

Moreover, an AI assistant can modernize hotel marketing strategies, which have traditionally relied on relatively static methods like email campaigns. Properly tuned language models are capable of engaging in personalized, two-way conversations via social media or on your website, allowing them to deliver more effective promotional messages and alerts about special events or loyalty programs. All of this makes your messaging more likely to resonate with guests, ultimately boosting the all-important bottom line.

#2 AI Assistants Can Help Reduce Burnout and Turnover

About a year ago, we covered a landmark study from economists Erik Brynjolfsson, Danielle Li, and Lindsey R. Raymond that examined how generative AI was changing contact centers. Though there were (and are) many concerns about automation taking jobs, the study concluded that this new technology was helping newer agents onboard more quickly, was making mid-tier agents perform better, and was overall reducing burnout and turnover by lessening each agent’s burden.

Most of these factors also apply to your hospitality staff. Let’s see how.

Algorithms offer the distinct advantage of providing continuous service, operating around the clock without needing breaks or sleep. This ensures that guests receive immediate assistance whenever needed, which will go a long way to cementing their perception of your commitment to exceptional service.

Furthermore, these assistants contribute to the efficiency of face-to-face customer interactions, particularly during routine processes like check-ins and check-outs. This dynamic becomes even more powerful when you integrate conversational AI into mobile apps: guests can complete these procedures directly from their smartphones, bypassing the front desk and avoiding any wait.

Hospitality teams often face high workloads, managing in-person guest interactions, responding to digital communications across multiple platforms, and analyzing feedback from customer surveys. A good AI assistant can substantially reduce this burden by handling routine inquiries and requests. Your human staff can then be left to focus on more complex issues, thereby preventing burnout and improving their capacity to deliver quality service via the fabled “human touch.”

#3 AI Assistants Can Help Improve the Guest Experience

Let’s drill a little bit more into how AI assistants can improve your guest’s stay at your hotel.

We’ve already mentioned some of this. If a customer’s booking goes smoothly, changes are handled promptly, their 2-a.m. questions have been answered, and their stay is replete with little personalized touches, they’re probably going to reflect on it fondly.

But this is hardly everything that can be said about how AI assistants will improve the hotel experience. Consider the fact that today’s language models are almost unbelievably good at translating between languages – especially when those are “high resource” languages, such as Mandarin, Russian, and Spanish.

If you’re a monolingual native English speaker, it can be easy to forget how much cognitive effort is involved in speaking a language in which you’re not fluent. But imagine for a moment that you’re a foreign traveler whose flight was delayed and whose kids never once stopped crying. Wouldn’t you appreciate being greeted with a friendly “欢迎” or “Добро пожаловать”, rather than needing to immediately fumble around in English?

Another subject that is slightly off-topic but is nevertheless worth discussing in this context is trust. People have long known that the internet is hardly a shining example of forthrightness and rectitude, but with the rise of generative AI, it has become even harder to believe what you read online.

We’ve discussed how much AI assistants can do for your hotel, but it’s important to use them judiciously, with appropriate guardrails in place, to reap the most benefit. If one of your language models offers up bad information or harasses a guest, that will reflect negatively on you. This is too big a topic for us to cover in this article, but you can check out earlier posts for more information.

A related issue is the collection of data. Upselling customers or personalizing their room can only be done by gathering data about their preferences. This, too, is something people are gradually becoming more aware of (and worried about), so it’s worth proactively crafting a data collection policy that’s available if anyone asks for it.

#4 AI Assistants Can Help Keep Your Operations Running Smoothly

Finally, we’ll finish by considering how AI can be used to streamline your hotel’s basic operations – making sure everything is in stock, that items make it to the right room, etc.

One significant benefit (which is becoming a more important distinguishing feature) is improving energy efficiency. You’re probably already familiar with smart room technologies, such as thermostats that reduce energy consumption by automatically adjusting themselves based on occupancy. But consider how implementing AI to manage HVAC systems for an entire building could not only optimize energy use and save significant costs, but also make guests more comfortable throughout their stay.

Similarly, AI can revolutionize waste management by employing systems that detect when trash receptacles need servicing. This would reduce the time staff spend checking and clearing bins, allowing them to focus on more valuable tasks.

Beyond these sustainability-focused applications, AI’s role in automating routine hospitality operations is vast. A fun example comes from Silicon Valley, where the Crowne Plaza hotel employs a robotic system named “Dash” to deliver snacks and towels directly to guests.

Even if you’re not particularly interested in having robots wandering your halls, it should hopefully be clear that many parts of running a hotel can be outsourced to machines, freeing you and your staff up to focus on more pressing matters.

Riding the AI Wave with Quiq

After decades of false starts and false promises, it looks like AI is finally having a measurable impact on the hospitality sector.

If you want to leverage this remarkable technology to the fullest but aren’t sure where to start, set up a time to talk with us. Quiq is an industry-leading conversational AI platform that makes deploying and monitoring AI systems for hotels much easier. Let’s explore opportunities to work together!

Request A Demo

The Ultimate Guide to RCS Business Messaging

From chiseling words into stone to typing them directly on our screens, changes in technology can bring profound changes to the way we communicate. Rich Communication Services (RCS) Business Messaging is one such technological change, and it offers the forward-looking contact center a sophisticated upgrade over traditional SMS.

In this piece, we’ll discuss RCS Business Messaging, illustrating its significance, its inner workings, and how it can be leveraged as part of a broader customer service strategy. This context will equip you to understand RCS and determine whether and how to invest in it.

Let’s get going!

What is RCS Business Messaging?

Smartphones have become enormously popular for surfing the internet, shopping, connecting with friends, and conducting many other aspects of our daily lives. One consequence of this development is that it’s much more common for contact centers to interact with customers through text messaging.

Once text messaging began to replace phone calls, emails, and in-person visits as the go-to communication channel, it was clear that it required an upgrade. The old Short Messaging Service (SMS) was replaced with Rich Communication Services (RCS), which supports audio messages, video, high-quality photos, group chats, encryption, and everything else we’ve come to expect from our messaging experience.

And, on the whole, the data indicate that this is a favorable trend:

  • More than 70% of people report feeling inclined to make an online purchase when they have the ability to get timely answers to questions;
  • Almost three-quarters indicated that they were more likely to interact with a brand when they have the option of doing so through RCS;
  • Messages sent through RCS are a staggering 35 times more likely to be read than an equivalent email.

For all these reasons, your contact center needs to be thinking about how RCS fits into your overall customer service strategy–it’s simply not a channel you can afford to ignore any longer.

How is RCS Business Messaging Different from Google Business Messages?

Distinguishing between Google’s Rich Communication Services (RCS) and Google Business Messages can be tricky because they’re similar in many ways. That said, keeping their differences in mind is crucial.

You may not remember this if you’re young enough, but text messaging was once much more limited. Texts could not be very long, and were unable to accommodate modern staples like GIFs, videos, and emojis. However, as reliance on text messaging grew, there was a clear need to enhance the basic protocol to include these and other multimedia elements.

Since this enhancement enriched the basic functionality of text messaging, it is known as “rich” communication. Beyond adding emojis and the like, RCS is becoming essential for businesses looking to engage in more dynamic interactions with customers. It supports features such as custom logos, collecting data for analytics, adding QR codes, and links to calendars or maps, and enhancing the messaging experience all around.

Google Business Messages, on the other hand, is a mobile messaging channel that seamlessly integrates with Google Maps and Search to deliver high-quality, asynchronous communication between your customers and your contact center agents.

This service is not only a boon to your satisfaction ratings, it can also support other business objectives by reducing the volume of calls and enhancing conversion rates.

While Google Business Messages and RCS have a lot in common, there are two key differences worth highlighting: RCS is not universally available across all Android devices (whereas Business Messages is), and Business Messages does not require a user to install a messaging app (whereas RCS does).

Learn More About the End of Google Business Messages

 

How Does RCS Business Messaging Work?

Okay, now that we’ve convinced you that RCS Business Messaging is worth the effort to cultivate, let’s examine how it works.

Once you set up your account and complete the registration process, you’ll need to create an “agent,” which is the basic interface connecting your contact center to your customers. Agents are quite flexible and able to handle very simple workflows (such as sending a notification) as well as much more complicated sequences of tasks (such as those required to help book a reservation).

From the customer’s side, communicating with an agent is more or less indistinguishable from having a standard conversation. Each participant will speak in turn, waiting for the other to respond.

Agents can be configured to initiate a conversation under a wide variety of external circumstances. They could reach out when a user’s order has been shipped, for example, or when a new sushi restaurant has opened and is offering discounts. Since we’re focused on contact centers, our agent configurations will likely revolve around events like “the customer reached out for support,” “there’s been an update on an outstanding ticket,” or “the issue has been resolved.”

However you’ve chosen to set up your agent, when it is supposed to initiate a conversation, it will use the RCS Business Messaging API to send a message. These messages are always sent as standard HTTP requests with a corresponding JSON payload (if you’re curious about the technical underpinnings), but the most important thing to know is that the message ultimately ends up in front of the user, where they can respond.
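As a rough sketch of what assembling one of these JSON payloads might look like, here is a minimal example. The field names (`contentMessage`, `suggestions`, `reply`, `postbackData`) follow the shapes documented for the RBM API, but treat the exact structure, and anything around authentication and endpoints, as illustrative rather than authoritative:

```python
import json

def build_agent_message(text, quick_replies=None):
    """Assemble a simplified RBM-style agent message payload.

    The structure mirrors the documented RBM message shape, but the exact
    fields your integration needs may differ; check the API reference.
    """
    content = {"text": text}
    if quick_replies:
        # Suggested replies render as tappable chips beneath the message.
        content["suggestions"] = [
            {"reply": {"text": r, "postbackData": r.upper().replace(" ", "_")}}
            for r in quick_replies
        ]
    return json.dumps({"contentMessage": content})

payload = build_agent_message(
    "Your order has shipped!",
    quick_replies=["Track package", "Contact support"],
)
```

The payload would then travel in the body of an HTTP POST to the RBM endpoint for the user's phone number.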

Unless, that is, their device doesn’t support RCS. RCS has become popular and prominent enough that we’d be surprised if you ran into this situation very often. Just in case, you should have your messaging set up such that you can default to something like SMS.

Any subsequent messages between the agent and the customer are also sent as JSON. Herein lies the enormous potential for customization, because you can utilize powerful technologies like natural language understanding to have your agent dynamically generate different responses in different contexts. This not only makes it feel more lifelike, it also means that it can solve a much broader range of problems.

If you don’t want to roll up your sleeves and do this yourself, you always have the option of partnering with a good conversational AI platform. Ideally, you’d want to use one that makes integrating generative AI painless, and which has a robust set of features that make it easy to monitor the quality of agent interactions, collect data, and make decisions quickly.

Best Practices for Using RCS Business Messaging

By now, you should hopefully understand RCS Business Messaging, why it’s exciting, and the many ways in which you can use it to take your contact center to new heights. In this penultimate section, we’ll discuss some of Google’s best practices for RCS.

RCS is not a General-Purpose User Interface

Tools are incredibly powerful ways of extending basic human abilities, but only if you understand when and how to use them. Hammers are great for carpentry, but they’re worse than useless when making pancakes (trust us on this–we’ve tried, and it went poorly).

The same goes for Google’s RCS Business Messaging, which is a conversational interface. Your RCS agents are great at resolving queries, directing customers to information, executing tasks, and (failing that) escalating to a human being. But in order to do all of this, you should try to make sure they speak in a way that is natural, restricted to the question at hand, and easy for the customer to follow.

For this same reason, your agents shouldn’t be seen as a simple replacement for a phone tree, requiring the user to tediously input numbers to navigate a menu of delimited options. Part of the reason agents are a step forward in contact center management is precisely because they eliminate the need to lean on such an approach.

Check Device Compatibility Beforehand

Above, we pointed out that some devices don’t support RCS, and that you should therefore have a failsafe in place when you message one. That’s sage advice, but you can also send a “capability request” ahead of a message, which tells you what kind of device the user has and which messaging features it supports.

This will allow you to configure your agent in advance so that it stays within the limits of a given device.
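The routing decision that follows a capability check can be sketched in a few lines. This assumes the capability response arrives as a dictionary with a `features` list for RCS-reachable devices; the actual capability call and its response shape should be confirmed against the RBM API documentation:

```python
def choose_channel(capability_response):
    """Pick a delivery channel from a capability-check result.

    capability_response: a dict like {"features": ["RICHCARD_STANDALONE"]}
    for an RCS-reachable device, or None/empty when the check fails.
    """
    if capability_response and capability_response.get("features"):
        return "rcs"
    return "sms"  # fall back to plain SMS so the message still arrives
```

An RCS-capable device gets the rich channel; everyone else silently falls back to SMS, so no customer is left without a reply.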

Begin at the Beginning

As you’ve undoubtedly heard from marketing experts, first impressions matter a lot. The way your agent initiates a conversation will determine the user’s experience, and thereby figure prominently in how successful you are in making them happy.

In general, it’s a good idea to have the initial message be friendly, warm, and human, to contain some of the information the user is likely to want, and to list out a few of the things the agent is capable of. This way, the person who reached out to you with a problem immediately feels more at ease, knowing they’ll be able to reach a speedy resolution.

Be Mindful of Technical Constraints

There are a few low-level facts about RCS that could bear on the end user’s experience, and you should know about them as you integrate RCS into your text messaging strategy.

To take one example, messages containing media may process more slowly than text-only messages. This means that you could end up with messages getting out of order if you send several of them in a row.

For this reason, you should wait for the RBM platform to return a 200 OK response for each message before proceeding to send the next. This response indicates the platform has received the message, ensuring users receive them as intended.
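This send-and-wait pattern can be sketched as follows, with the actual HTTP call abstracted behind a `send_fn` you would supply (for example, a wrapper around your HTTP client that returns the response status code):

```python
def send_in_order(messages, send_fn):
    """Send messages strictly one at a time.

    Only proceed to the next message after the platform acknowledges the
    previous one with a 200 OK, so that slower media-heavy messages
    can't be leapfrogged by faster text-only ones.
    """
    for i, msg in enumerate(messages):
        status = send_fn(msg)
        if status != 200:
            raise RuntimeError(f"message {i} not accepted (HTTP {status})")
```

The same structure works whether `send_fn` posts to the RBM platform directly or goes through a middleware layer.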

Additionally, it’s important to be on the lookout for duplicate incoming messages. When receiving messages from users, always check the `messageId` to confirm that the message hasn’t been processed before. By keeping track of `messageId` strings, duplicate messages can be easily identified and disregarded, ensuring efficient and accurate communication.
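In practice, that bookkeeping can be as simple as a set of seen IDs. This is only a sketch; a production webhook handler would persist the set (in a database or cache) so it survives restarts:

```python
class MessageDeduplicator:
    """Ignore webhook deliveries whose messageId has been seen before."""

    def __init__(self):
        self._seen = set()

    def is_new(self, message_id):
        """Return True the first time an ID appears, False on repeats."""
        if message_id in self._seen:
            return False
        self._seen.add(message_id)
        return True
```

Your webhook handler would call `is_new()` on each incoming `messageId` and simply return early for duplicates.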

Integrate with Quiq

RCS is the next step in text messaging, opening up many more ways of interacting with the people reaching out to you for help.

There are many ways to leverage RCS, one of which is turbo-charging your agents with the power of large language models. The easiest way to do this is to team up with a conversational AI platform that does the technical heavy lifting for you.

Quiq is one such platform. Reach out to schedule a demo with us today!

Request A Demo

AI Gold Rush: How Quiq Won the Land Grab for AI Contact Centers (& How You Can Benefit)

There have been many transformational moments throughout the history of the United States, going back all the way to its unique founding.

Take for instance the year 1849.

For all of you San Francisco 49ers fans (sorry, maybe next year), you are very well aware of the land grab that was the birth of the state of California. That year, tens of thousands of people from the Eastern United States flocked to the California Territory hoping to strike it rich in a placer gold strike.

A lesser-known fact of that moment in history is that the gold strike in California was actually in 1848. And while all of those easterners were lining up for the rush, a small number of people from Latin America and Hawaii were already in production, stuffing their pockets full of nuggets.

176 years later, AI is the new gold rush.

Fast forward to 2024, a new crowd is forming, working toward the land grab once again. Only this time, it’s not physical.

It’s AI in the contact center.

Companies are building infrastructure, hiring engineers, inventing tools, and trying to figure out how to build a wagon that won’t disintegrate on the trail (AKA hallucinate).

While many of those companies are going to make it to the gold fields, one has been there since 2023, and that is Quiq.

Yes, we’ve been mining LLM gold in the contact center since July of 2023 when we released our first customer-facing Generative AI assistant for Loop Insurance. Since then, we have released over a dozen more and have dozens more under construction. More about the quality of that gold in a bit.

This new gold rush in the AI space is becoming more crowded every day.

Everyone is saying they do Generative AI in one way, shape, or form. Most are offering some form of Agent Assist using LLM technologies, keeping that human in the loop and relying on small increments of improvement in AHT (Average Handle Time) and FCR (First Contact Resolution).

However, there is a difference when it comes to how platforms are approaching customer-facing AI Assistants.

Actually, there are a lot of differences. That’s a big reason we invented AI Studio.

AI Studio: Get your shovels and pick axes.

Since we’ve been on the bleeding edge of generative AI CX deployments, we created a toolkit called AI Studio. We saw a gap for CX teams, who would otherwise have had to stitch together a myriad of tools while trying to stay focused on business outcomes.

AI Studio is a complete toolkit to empower companies to explore nuances in their AI use within a conversational development environment that’s tailored for customer-facing CX.

That last part is important: Customer-facing AI assistants, which teams can create together using AI Studio. Going back to our gold rush comparison, AI Studio is akin to the pick axes and shovels you need.

Only this time, success is far more attainable, and the proverbial gold at the end of the journey is much, much more enticing, precisely because customer-facing AI applications tend to move the needle dramatically further than simpler Agent Assist LLM builds.

That brings me to the results.

So how good is our gold?

Early results are showing that our LLM implementations are increasing resolution rates 50% to 100% above what was achieved using legacy NLU intent-based models, with resolution rates north of 60% in some FAQ-heavy assistants.

Loop Insurance saw a 55% reduction in email tickets in their contact center.

Second, intent matching has more than doubled, meaning intents (especially when several appear in one message) are being correctly recognized and responded to far more often, which directly translates into correct answers, fewer agent contacts, and satisfied customers.

That’s just the start though. Molekule hit a 60% resolution rate with a Quiq-built LLM-powered AI assistant. You can read all about that in our case study here.

And then there’s Accor, whose AI assistant across four Rixos properties has doubled (yes, 2X’ed) click-outs on booking links. Check out that case study here.

What’s next?

Like the miners in 1848, digging as much gold out of the ground as possible before the land rush, Quiq sits alone, out in front of a crowd lining up for a land grab.

With a dozen customer-facing LLM-powered AI assistants already living in the market producing incredible results, we have pioneered a space that will be remembered in history as a new day in Customer Experience.

Interested in harnessing Quiq’s power for your CX or contact center? Send us a demo request or get in touch another way and let’s talk.

Request A Demo

Google Business Messages: Meet Your Customers Where They’re At

The world is a distracted and distracting place; between all the alerts, the celebrity drama on Twitter, and the fact that there are more hilarious animal videos on YouTube than you could ever hope to watch even if it were your full-time job, it takes a lot to break through the noise.

That’s one reason customer service-oriented businesses like contact centers are increasingly turning to text messaging. Not only are cell phones all but ubiquitous, but many people have begun to prefer text-message-based interactions to calls, emails, or in-person visits.

In this article, we’ll cover one of the biggest text-messaging channels: Google Business Messages. We’ll discuss what it is, what features it offers, and various ways of leveraging it to the fullest.

Let’s get going!

Learn More About the End of Google Business Messages

 

What is Google Business Messages?

Given that more than nine out of ten online searches go through Google, we will go out on a limb and assume you’ve heard of the Mountain View behemoth. But you may not be aware that Google has a Business Message service that is very popular among companies, like contact centers, that understand the advantages of texting their customers.

Business Messages allows you to create a “messaging surface” on Android or Apple devices. In practice, this essentially means that you can create a little “chat” button that your customers can use to reach out to you.

Behind the scenes, you will have to register for Business Messages, creating an “agent” that your customers will interact with. You have many configuration options for your Business Messages workflows; it’s possible to dynamically route a given message to contact center agents at a specific location, have an AI assistant powered by large language models generate a reply (more on this later), etc.

Regardless of how the reply is generated, it is then routed through the API to your agent, which is what actually interacts with the customer. A conversation is considered over when both the customer and your agent cease replying, but you can resume a conversation up to 30 days later.

What’s the Difference Between Google RCS and Google Business Messages?

It’s easy to confuse Google’s Rich Communication Services (RCS) and Google Business Messages. Although the two are similar, it’s nevertheless worth remembering their differences.

Long ago, text messages had to be short, sweet, and contain nothing but words. But as we all began to lean more on text messaging to communicate, it became necessary to upgrade the basic underlying protocol. This way, we could also use video, images, GIFs, etc., in our conversations.

“Rich” communication is this upgrade, but it’s not relegated to emojis and such. RCS is also quickly becoming a staple for businesses that want to invest in livelier exchanges with their customers. RCS allows for custom logos and consistent branding, for example; it also makes it easier to collect analytics, insert QR codes, link out to calendars or Maps, etc.

As discussed above, Business Messages is a mobile messaging channel that integrates with Google Maps, Search, and brand websites, offering rich, asynchronous communication experiences. This platform not only makes customers happy but also contributes to your business’s bottom line through reduced call volumes, improved CSAT, and better conversion rates.

Importantly, Business Messages is sometimes also prominently featured in Google search results, in places such as answer cards, place cards, and sitelinks.

In short, there is a great deal of overlap between Google Business Messages and Google RCS. But two major distinctions are that RCS is not available on all Android devices (where Business Messages is), and Business Messages doesn’t require you to have a messaging app installed (where RCS does).

The Advantages of Google Business Messaging

Google Business Messaging has many distinct advantages to offer the contact center entrepreneur. In the next few sections, we’ll discuss some of the biggest.

It Supports Robust Encryption

A key feature of Business Messages is its commitment to security and privacy, embodied in powerful end-to-end encryption.

What exactly does end-to-end encryption entail? In short, it ensures that a message remains secure and unreadable from the moment the sender types it to whenever the recipient opens it, even if it’s intercepted in transit. This level of security is baked in, requiring no additional setup or adjustments to security settings by the user.

The significance of this feature cannot be overstated. Today, it’s not at all uncommon to read about yet another multi-million-dollar ransomware attack or a data breach of staggering proportions. This has engendered a growing awareness of (and concern for) data security, meaning that present and future customers will value those platforms that make it a central priority of their offering.

By our estimates, this will only become more important with the rise of generative AI, which has made it increasingly difficult to trust text, images, and even movies seen online—none of which was particularly trustworthy even before it became possible to mass-produce them.

If you successfully position yourself as a pillar your customers can lean on, that will go a long way toward making you stand out in a crowded market.

It Makes Connecting With Customers Easier

Another advantage of Google Business Messages is that it makes it much easier to meet customers where they are. And where we are is “on our phones.”

Now, this may seem too obvious to need pointing out. After all, if your customers are texting all day and you’re launching a text-messaging channel of communication, then of course you’ll be more accessible.

But there’s more to this story. Google Business Messaging allows you to seamlessly integrate with other Google services, like Google Maps. If a customer is trying to find the number for your contact center, therefore, they could instead get in touch simply by clicking the “CHAT” button.

This, too, may seem rather uninspiring because it’s not as though it’s difficult to grab the number and call. But even leaving aside the rising generations’ aversion to making phone calls, there’s a concept known as “trivial inconvenience” that’s worth discussing in this context.

Here’s an example: if you want to stop yourself from snacking on cookies throughout the day, you don’t have to put them on the moon (though that would help). Usually, it’s enough to put them in the next room or downstairs.

Though this only slightly increases the difficulty of accessing your cookie supply, in most cases, it introduces just enough friction to substantially reduce the number of cookies you eat (depending on the severity of your Oreo addiction, of course).

Well, the exact same dynamic works in reverse. Though grabbing your contact center’s phone number from Google and calling you requires only one or two additional steps, that added work will be sufficient to deter some fraction of customers from reaching out. If you want to make yourself easy to contact, there’s no substitute for a clean integration directly into the applications your customers are using, and that’s something Google Business Messages can do extremely well.

It’s Scalable and Supports Integrations

According to legend, the name “Google” originally came from a play on the word “Googol,” which is a “1” followed by 100 “0”s. Google, in other words, has always been about scale, and that is reflected in the way its software operates today. For our purposes, the most important manifestation of this is the scalability of their API. Though you may currently be operating at a few hundred or a few thousand messages per day, if you plan on growing, you’ll want to invest early in communication channels that can grow along with you.

But this is hardly the end of what integrations can do for you. If you’re in the contact center business, there’s a strong possibility that you’ll eventually end up using a large language model like ChatGPT to answer questions more quickly, offload routine tasks, and so on. Unless you plan on dropping millions of dollars to build one in-house, you’ll want to partner with an AI-powered conversational platform. As you go about finding a good vendor, make sure to assess the features they support. The best platforms have many options for increasing the efficiency of your agents, such as reusable snippets, auto-generated suggestions that clean up language and tone, and dashboarding tools that help you track your operation in detail.

Best Practices for Using Google Business Messages

Here, in the penultimate section, we’ll cover a few optimal ways of utilizing Google Business Messages.

Reply in a Timely Fashion

First, it’s important that you get back to customers as quickly as you’re able to. As we noted in the introduction, today’s consumers are perpetually drinking from a firehose of digital information. If it takes you a while to respond to their query, there’s a good chance they’ll either forget they reached out (if you’re lucky) or perceive it as an unpardonable affront and leave you a bad review (if you’re not).

An obvious way to answer immediately is with an automated message that says something like, “Thanks for your question. We’ll respond to you soon!” But you can’t just leave things there, especially if the question requires a human agent to intervene.

Whatever automated system you implement, you need to monitor how well your filters identify and escalate the most urgent queries. Remember that an agent might need a few hours to answer a tricky question, so factor that into your procedures.

This isn’t just something Google suggests; it’s codified in its policies. If you leave a Business Messages chat unanswered for 24 hours, Google might actually deactivate your company’s ability to use chat features.

Don’t Ask for Personal Information

As hackers have gotten more sophisticated, everyday consumers have responded by raising their guard.

On the whole, this is a good thing and will lead to a safer and more secure world. But it also means that you need to be extremely careful not to ask for anything like a social security number or a confirmation code via a service like Business Messages. What’s more, many companies are opting to include a disclaimer to this effect near the beginning of any interactions with customers.

Earlier, we pointed out that Business Messages supports end-to-end encryption, and having a clear, consistent policy about not collecting sensitive information fits into this broader picture. People will trust you more if they know you take their privacy seriously.

Make Business Messages Part of Your Overall Vision

Google Business Messages is a great service, but you’ll get the most out of it if you consider how it is part of a more far-reaching strategy.

At a minimum, this should include investing in other good communication channels, like Apple Messages and WhatsApp. People have had bitter, decades-long battles with each other over which code editor or word processor is best, so we know that they have strong opinions about the technology that they use. If you have many options for customers wanting to contact you, that’ll boost their satisfaction and their overall impression of your contact center.

The prior discussion of trivial inconveniences is also relevant here. It’s not hard to open a different messaging app under most circumstances, but if you don’t force a person to do that, they’re more likely to interact with you.

Schedule a Demo with Quiq

Google has been so monumentally successful its name is now synonymous with “online search.” Even leaving aside rich messaging, encryption, and everything else we covered in this article, you can’t afford to ignore Business Messages for this reason alone.

But setting up an account is only the first step in the process, and it’s much easier when you have ready-made tools that you can integrate on day one. The Quiq conversational AI platform is one such tool, and it has a bevy of features that’ll allow you to reduce the workloads on your agents while making your customers even happier. Check us out or schedule a demo to see what we can do for you!

Request A Demo

6 Amazing Examples of how AI is Changing Hospitality

Recent advances in AI are poised to bring many changes. Though we’re still in the early days of seeing how all this plays out, there’s already clear evidence that generative AI is having a measurable impact in places like contact centers. Looking into the future a bit, multiple reports indicate that AI could add trillions of dollars to the economy before the close of the 2020s, and lead to as much as a doubling in rates of yearly economic growth over the next decade.

The hospitality industry has always been forward-looking, eager to adopt new best practices and technologies. If you’re working in hospitality now, therefore, you might be wondering what AI will mean for you, and what the benefits of AI will be.

That’s exactly what we’re setting out to answer in this article! Below, we’ve collected several of our favorite use cases of AI assistants in both hospitality and travel. Throughout, we’ve tried to anchor the discussion to real-world examples. We hope that, by the end, you’ll feel much better equipped to evaluate whether and how to use AI assistants in your own operations.

Let’s get going!

What is AI in Hospitality and Travel?

The term “artificial intelligence” covers a huge number of topics, approaches, and subdomains, most of which we won’t be able to cover here. But broadly, you can think of AI as being any attempt to train a machine to do useful work.

Two of the more popular methods for accomplishing this task are machine learning and generative AI, the latter of which has become famous due to the recent spectacular successes of large language models.

These are also the methods we’ll be focused on because they’re the ones most commonly used in hospitality. Machine learning, for example, will pop up in examples of dynamic pricing and demand forecasting, while generative AI is a key engine driving advances in automated concierge services.

6 Ways AI Assistants are Transforming Hospitality and Travel

Below, we’ve collected some of the most compelling use cases of AI assistants in the hospitality and travel industry. We’ll begin with their use in educating the rising generation of hospitality professionals, then move on to HR, operations, revenue, and all the other things that go into keeping guests happy!

Use Case #1 – Educating Future Hospitality Professionals

From personalized lesson plans to software-based tutors, applying artificial intelligence to education has long been a dream. This is no different for hospitality, where rising students are using the latest and greatest tools to accelerate their learning.

Students have to figure out how to comport themselves in a variety of challenging circumstances, from interactions at the front desk to ensuring the room service makes it to the right guest. When augmented with artificial intelligence, simulations can help students gain exposure to many of the issues they’ll face in their day-to-day work.

Generative AI, for example, can be used to practice and internalize strategies for dealing with guests who are distraught or downright rude. It can also be used as a general learning tool, helping to break down complex concepts, structure study routines, and more.

Use Case #2 – Hiring and Staffing

Like all businesses, hotels, resorts, and other hospitality staples have to deal with hiring. Talent acquisition is a major unsolved challenge; it can take a long time to find a good hire for a position, and mistakes can cost a lot in terms of time, energy, and money.

This, too, is a place where machine learning can help. A prominent example is Hilton, which has begun using bespoke algorithms to fill its positions. These algorithms can ingest a huge amount of information on the skills and experiences of a set of potential candidates, build profiles for them, and then measure this against the profiles of employees who have been successful in the past. This allows Hilton to better gauge how well these candidates will ultimately be able to live up to the rigors of different roles.
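At a high level, this kind of candidate matching amounts to comparing a candidate’s skill profile against the profiles of employees who previously succeeded in the role. The sketch below is a minimal illustration of that idea using cosine similarity; the skill vocabulary and scores are invented for the example and are not Hilton’s actual system.

```python
import math

# Toy skill vocabulary; a real system would derive features from resumes,
# assessments, interview notes, and so on.
SKILLS = ["front_desk", "languages", "conflict_resolution", "scheduling"]

def cosine(a, b):
    """Cosine similarity between two equal-length score vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def match_score(candidate, successful_profiles):
    """Score a candidate against the average profile of past successful hires."""
    avg = [sum(p[i] for p in successful_profiles) / len(successful_profiles)
           for i in range(len(SKILLS))]
    return cosine(candidate, avg)

# Profiles of employees who thrived in a front-desk role (scores from 0 to 1)
successful = [[0.9, 0.6, 0.8, 0.5], [0.8, 0.7, 0.9, 0.4]]

print(round(match_score([0.85, 0.65, 0.85, 0.45], successful), 2))  # strong fit
print(round(match_score([0.10, 0.20, 0.10, 0.90], successful), 2))  # weaker fit
```

In practice the feature set and the definition of a “successful” profile matter far more than the similarity metric, but the basic shape of the comparison is the same.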

With this approach, Hilton has been able to fill empty positions in as little as a week, all while cutting its turnover in half. Not only does this save a great deal of time for hiring managers and recruiters, it also reduces delays and helps to build a more robust company culture.

This last point warrants a little elaboration. When employees stay with a company for a long time, they gain a very intuitive grasp of its internal workings. When they leave, they take this knowledge with them, and it can take a long time to rebuild. If AI is able to more efficiently find and place candidates, it means that an organization will function better in a thousand little ways, leading to an improved guest experience and more success in the long term.

Use Case #3 – Hotel Operations Management

Hotels have many moving parts. Keeping all the proverbial plates spinning is known as “operations,” and can involve anything from changing a reservation to fielding questions to making sure all the thermostats are functional.

Though much of this still requires the human touch, artificial intelligence can do a lot to lighten the load by automating routine parts of the job. Take booking, for example. It can be complicated, but in many cases, today’s AI assistants are more than capable of helping.

What might that look like? Consider an example of a potential guest who has questions about your amenities. They might want to know whether you have any special programs for kids, whether you have pool-side food service, etc. These are all things that a question-answering AI assistant could help with.

If we assume the guest has decided to book with you, they may later want to change their reservation by a few days. Or, after their stay, they may run into billing issues that need to be reconciled. These are both tasks that are often within the capacity of today’s systems.

This is appealing because it’ll save you time, yes, but there are more opportunities here than may be apparent at first. The Maison Mere hotel in Paris, for example, made the decision to use a contactless check-in service that allowed them to collect little details about their guests before they arrived. Afterward, they used that information to create custom touches in those guests’ rooms, such as personalized greetings and flowers. What’s more, it gave Maison Mere a chance to take advantage of targeted upselling opportunities: guests traveling with pets were offered pet kits, and promotions through the platform led to a boost in reservations at the hotel’s attached restaurant, to name just a couple of examples.

Returning to amenities, if you’ve worked in hospitality before, you’ve probably dealt with snack requests, towel deliveries, etc. In Silicon Valley, Crowne Plaza has begun rolling out a robotic system called “Dash” to outsource exactly these kinds of low-level tasks. Dash uses Wi-Fi to move around the hotel, locate guests, and deliver the requested items. It’s even able to check its own battery level and recharge when it starts running low.

Use Case #4 – Hotel Revenue Management

Like all businesses, hotels exist to make money, and they therefore tend to keep a pretty close eye on their revenue. This might be one of the responsibilities you assume as a hospitality specialist, so it’s worth understanding how AI assistants will impact hotel revenue management.

Some of these developments have been in motion for a while. One tried-and-true technique for maximizing revenue is to better forecast future demand. Most hotels are not booked solid year-round; there’ll be periods of extremely high activity and periods of relatively low activity. But these fluctuations aren’t random, and with the right machine learning algorithms, historical data can be mined to arrive at a pretty accurate picture of when you’re going to be full. This allows you to better plan your inventory, for example, and have all the staff required to ensure everyone enjoys their stay.

For the same reason, many hotels choose to vary their prices based on demand. Premium suites might go for $500 a night in the busy season while commanding a much more affordable $200 a night when no one is visiting.

There exist many AI tools to help with this work, and they’re getting good results. In Thailand, the Narai Hospitality Group utilized a pricing and forecasting platform to grow their average daily rate by more than a quarter, even tripling the rates charged on some rooms during peak traffic months. Grand America Hotels & Resorts was similarly able to keep their revenue management lean and effective as they navigated the post-COVID travel boom using automation-powered software.

Use Case #5 – Marketing and Sales

Another thing the hospitality industry has in common with other industries is that it has to market its services—after all, no one can stay in a hotel they haven’t heard of. Using AI assistants for marketing purposes is hardly new, but there are some exciting developments where hospitality is concerned.

By using an AI-powered marketing intelligence service that dynamically personalizes offerings with real-time data, the U.K.’s Cheval Collection achieved 82% revenue growth in 2023 relative to just three years prior.

Use Case #6 – Hotel Guest Experience in the AI Age

Above, we’ve discussed operations, revenue, hiring, and all the myriad aspects of running a successful hospitality enterprise. But perhaps the most important part of this process is the one we’ve saved for last: how much people enjoy actually staying with you.

This is generally known as “guest experience,” and it, too, is likely to be disrupted by the widespread use of AI assistants. Consider the example of “Rose,” an AI concierge used by Las Vegas’s Cosmopolitan hotel. When a guest checks in to the Cosmopolitan, they are given a number where they can contact Rose. They can text her if they have requests or call and talk to her if they prefer a voice interface.

Of course, it’s not hard to forecast some of the other ways AI could power an enhanced guest experience. Continuing with the concierge example, imagine smart AI assistants in each guest’s room, offering up recommendations for local restaurants or fun excursions. Since AI has made great strides in personalization, these assistants would be far from generic; they’d be able to utilize information about a guest’s preferences, prior experiences, online profiles or reviews, etc., to offer nuanced, highly-tailored advice.

If you have such a system operational in your hotel, it’s unlikely to be a thing your guests will forget.

Exploring AI in Hospitality: Industry Examples Unveiled

From large language models to machine learning to agentic systems, we’re living in something of a turning point for artificial intelligence. Today’s systems are far from perfect, but they’re clearly capable of doing economically useful work, in the hospitality industry and elsewhere.

But there remain many challenges, not least of which is working with an AI assistant platform you can trust. Quiq is a leader in the conversational AI space, and can help you integrate this cutting-edge technology into your business. Get in touch today to schedule a demo and see how we can help!

Request A Demo

Why Your Business Should Use Rich Messaging

A Brief Overview of Rich Messaging

Along with Eminem and Auto-Tune, text messaging was just becoming really popular back in the halcyon days of the early 2000s. It was a simpler time, and all our texts were sent via “short message service” (SMS), which was mostly confined to words. This was cutting-edge technology back then, and since we weren’t yet expressing ourselves with walls of hieroglyphic emojis or GIFs from Schitt’s Creek, it was all we needed.

Today, this is no longer the case. We’re spending much more time communicating with each other through text messaging and sending much more complicated information, to boot. In response, rich messaging was developed.

Technically known as “rich communication services” (RCS), rich messaging is the next step in the evolution of text messaging. It allows for better use of interactive media, such as high-resolution photos, messaging cards, audio messages, emojis, and GIFs. Even more important for those of us in the contact center industry, it also facilitates an enhanced customer experience, with things like sensory-rich service interactions.

Capabilities of Rich Messaging

Having covered what rich messaging is, let’s explore some of its capabilities. While this is not a comprehensive list, it reflects what we believe to be some of RCS’s most important properties (especially from the perspective of those looking to leverage text messaging for contact centers).

Integrating with Other Services

Today, the rise of generative AI is changing how contact centers work, which also presents an opportunity for integration.

If your business is looking to integrate AI assistants to automate substantial parts of its customer service workflow, you’re almost certainly going to have to do that through rich messaging.

Secure Messaging and Transactions Processing

Big data and AI have both raised serious concerns over privacy. A decade ago, people wouldn’t have thought twice about sharing their location or putting pictures of their kids online. These days, however, more of us are privacy- and security-conscious, so the fact that rich messaging supports end-to-end encryption is important.

People are much more likely to talk to your customer service agents directly if they can rest easy knowing their data isn’t going to be exposed to malicious actors.

Better Analytics

Speaking of big data, rich messaging makes it possible to gather and conduct fairly sophisticated customer service data analysis. You can gather statistics about obvious metrics like reply rates or feed conversations into a sentiment analysis system to determine how people feel about you, your company, and your service.

This allows you to identify patterns in customer behavior, optimize your use of AI, and generally start tinkering with your procedures to better serve customer needs.
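As a toy illustration of the kind of analysis involved, the sketch below computes a reply rate and a crude keyword-based sentiment score over a handful of conversation messages. A production system would use a trained sentiment model rather than hand-picked word lists; the data here is invented for the example.

```python
# Tiny hand-picked lexicons, purely for illustration.
POSITIVE = {"great", "thanks", "helpful", "love"}
NEGATIVE = {"frustrated", "broken", "slow", "unacceptable"}

def sentiment(message):
    """Crude lexicon score: +1 per positive word, -1 per negative word."""
    words = message.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def reply_rate(conversations):
    """Fraction of outreach messages that received a customer reply."""
    replied = sum(1 for c in conversations if c["customer_replied"])
    return replied / len(conversations)

msgs = ["thanks so helpful", "this is broken and slow"]
convos = [{"customer_replied": True}, {"customer_replied": False}]

print([sentiment(m) for m in msgs])  # -> [2, -2]
print(reply_rate(convos))            # -> 0.5
```

Even metrics this simple, tracked over time, can surface trends worth acting on; a real analytics pipeline just replaces each piece with something more robust.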

It also goes the other way, inasmuch as you can send real-time alerts confirming an issue was received, updating a customer on the status of a ticket, etc. Sure, this isn’t technically “analysis,” but it’ll help people feel more at ease when interacting with your customer service agents, so it’s worth bearing in mind.

Rich Channels of Communication

Where can you use rich messaging? In the sections that follow, we’ll answer this exact question!

WhatsApp

WhatsApp is a platform overseen by Meta (formerly Facebook) that supports rich text messaging, voice messaging, and video calling. With more than two billion users, it’s incredibly popular. A key reason for this is that all this data is sent over the internet, obviating the SMS fees that used to keep us all up at night. And it has a business API that will allow you to scale up with increased demand.

Apple Rich Messaging

Apple’s rich messaging service is called Apple Messages for Business. It offers potential and existing customers a way to communicate with your agents directly via their Apple devices.

This is a market you can’t afford to ignore; with nearly two billion Apple devices, the reach of iOS is simply gigantic, and it’s a communication channel you should be cultivating.

Google Rich Messaging

More than nine out of ten searches happen on Google, meaning that it has become the powerhouse when it comes to finding information online. And if that’s not enough to convince you, consider that the phrase “Google it” is now just what people say when they’re talking about looking something up.

However, you may not be aware that Google offers a Business Messages service that should be part of your overall customer strategy.

Building Trust through Rich Communication Service Messages

Being successful in business requires many things, but one of the more important ones is trust. This has always been true, of course, but it’s only become more so with the rise of artificial intelligence.

We’ve been singing the praises of generative AI for a while, and firmly believe that it will have a huge positive impact on the contact center industry. But there’s a downside to the fact that it’s now trivial to crank out industrial quantities of text, video, and images.

There’s always been plenty of nonsense online, but once upon a time, the ability to create such content was limited by the fact that someone, somewhere, had to actually sit down and make it. That’s no longer the case, which means that users are more eager than ever for signs that they’re dealing with customer service they can rely on.

Rich messaging has a part to play in that, and in the next few sections, we’ll explain why.

High-Quality Interactions Will Have Customers Coming Back

Rich messaging has many tools that make it easier to ensure that customers have a first-class experience interacting with your contact center. The rich messaging services described above have APIs, for example, that allow you to better organize conversations. This means agents can stay on top of their workloads, leading to less of the kind of frustration and distress that might negatively color their replies.

These services can also be integrated with high-quality conversational AI platforms. When agents can outsource simple, standard queries to algorithms or reuse snippets, they have more time to focus on solving trickier problems.

The net result is that agents feel less burned out, and customers get better help, faster.

Consistency in Experiences

Another way to build trust is to ensure your style is consistent across channels. Just as you wouldn’t use a different logo on Facebook and Instagram, you shouldn’t use a dramatically different tone of voice on one platform than you use on another.

When people know what to expect from you, they’re more likely to trust you. Because rich messaging supports many different kinds of media, you can ensure that customer experiences remain consistent.

This is also a place where generative AI comes in handy. The best conversational AI platforms train models on the conversations of senior agents and make this available to everyone in the contact center. This means that each agent can format replies with the same empathy, patience, and understanding as their very best peers.

Verified Business Profiles

Finally, using a verified account is a basic step you can take to increase trust. If you thought getting junk mail was bad, pause for a moment and consider the absolute barrage of text messages, bogus phone calls, and DMs most of us get every single day. There is a never-ending sea of bot accounts on Twitter and other platforms trying to dupe everyone into one crypto scam or another, and this has substantially eroded people’s trust in online interactions.

The rich messaging services offered by Google, WhatsApp, and Apple all have a fairly lengthy process for verifying the authenticity of your profile. By itself, this isn’t going to ensure that customers trust you, but it helps. People want to know that they’re talking to a real business, not an imposter; the “proof of work” (speaking of crypto) required to verify a rich messaging account is a crucial part of establishing that rapport.

Rich Messaging is the Future of Text

The world today looks very different from the world of the early 2000s. Our technologies, including our text messaging, have evolved along with it, and businesses have to keep up if they want to remain relevant.

Rich messaging is a great way to build trust and loyalty, and it opens up many new opportunities. But to get the most out of rich messaging, it really helps to work with a platform that offers robust tooling, language models, analytics, and so on.

Quiq is one such platform. Reach out to us to schedule a demo, and see how we can ensure your text-messaging outreach is profitable, productive, and easy!

Request A Demo
