
Does Quiq Train Models on Your Data? No (And Here’s Why.)

Customer experience directors tend to have a lot of questions about AI, especially as it becomes more and more important to the way modern contact centers function.

These can range from “Will generative AI’s well-known tendency to hallucinate eventually hurt my brand?” to “How are large language models trained in the first place?” along with many others.

Speaking of training, one question that's often top of mind for prospective users of Quiq's conversational AI platform is whether we train the LLMs we use on your data. It's a perfectly reasonable question, especially given famous examples of LLMs exposing proprietary data, as happened at Samsung. Needless to say, if you handle sensitive customer information, you absolutely don't want it getting leaked – and if you're not clear on what an LLM is doing with your data, you might not have the confidence you need to use one in your contact center.

The purpose of this piece is to assure you that no, we do not train LLMs with your data. To hammer that point home, we’ll briefly cover how models are trained, then discuss the two ways that Quiq optimizes model behavior: prompt engineering and retrieval augmented generation.

How are Large Language Models Trained?

Part of the confusion stems from the fact that the term 'training' means different things to different people. Let's start by clarifying what this term means, but don't worry – we'll go very light on technical details!

First, generative language models work with tokens, which are units of language such as a part of a word (“kitch”), a whole word (“kitchen”), or sometimes small clusters of words (“kitchen sink”). When a model is trained, it’s learning to predict the token that’s most likely to follow a string of prior tokens.

Once a model has seen a great deal of text, for example, it learns that “Mary had a little ____” probably ends with the token “lamb” rather than the token “lightbulb.”
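The next-token objective can be illustrated with a toy model that simply counts which token follows each word in a tiny corpus. Real LLMs encode these statistics in billions of learned weights rather than a lookup table, but the training objective, predicting the next token, is the same:

```python
from collections import Counter, defaultdict

# Toy illustration of next-token prediction: count which token follows
# each token in a tiny "training" corpus, then predict the most frequent.
corpus = ("mary had a little lamb mary had a little lamb "
          "mary had a little lightbulb").split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(token):
    """Return the token most likely to follow, per the training counts."""
    return follows[token].most_common(1)[0][0]

print(predict_next("little"))  # "lamb" was seen twice, "lightbulb" once
```

Everything the toy model "knows" lives in those counts; in a real LLM, the equivalent knowledge lives in the weights, which is exactly why changing the weights (training) and leaving them alone (prompting) are such different things.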

Crucially, this process involves changing the model’s internal weights, i.e. its internal structure. Quiq has various ways of optimizing a model to perform in settings such as contact centers (discussed in the next section), but we do not change any model’s weights.

How Does Quiq Optimize Model Behavior?

There are a few basic ways to influence a model’s output. The two used by Quiq are prompt engineering and retrieval augmented generation (RAG), neither of which does anything whatsoever to modify a model’s weights or its structure.

In the next two sections, we’ll briefly cover each so that you have a bit more context on what’s going on under the hood.

Prompt Engineering

Prompt engineering involves changing how you format the query you feed the model to elicit a slightly different response. Rather than saying, “Write me some social media copy,” for example, you might also include an example outline you want the model to follow.

Quiq uses an approach to prompt engineering called “atomic prompting,” wherein the process of generating an answer to a question is broken down into multiple subtasks. This ensures you’re instructing a Large Language Model in a smaller context with specific, relevant task information, which can help the model perform better.
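As a rough illustration of the idea (Quiq's actual prompts aren't public, so the helper function and prompt wording below are hypothetical), breaking a request into atomic subtasks might look like this:

```python
# Hypothetical sketch of "atomic prompting": instead of one broad prompt,
# each subtask gets its own small, focused prompt with only the context
# it needs. None of this touches the model's weights.
def build_prompt(subtask_instruction, context, question):
    return (
        f"Task: {subtask_instruction}\n"
        f"Relevant context:\n{context}\n"
        f"Customer question: {question}\n"
    )

subtasks = [
    "Classify which product the question is about.",
    "Extract the customer's order number, if present.",
    "Draft a concise answer using only the context above.",
]

prompts = [
    build_prompt(task, "Order #123 shipped on May 2.", "Where is my order?")
    for task in subtasks
]
# Each prompt is sent to the LLM separately, keeping every step small
# and specific rather than asking one giant question.
```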

This is not the same thing as training. If you were to train or fine-tune a model on company-specific data, then the model’s internal structure would change to represent that data, and it might inadvertently reveal it in a future reply. However, including the data in a prompt doesn’t carry that risk because prompt engineering doesn’t change a model’s weights.

Retrieval Augmented Generation (RAG)

RAG refers to giving a language model an information source – such as a database or the Internet – that it can use to improve its output. It has emerged as the most popular technique to control the information the model needs to know when generating answers.

As before, that is not the same thing as training because it does not change the model’s weights.
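To make the pattern concrete, here is a minimal RAG sketch, with a naive keyword lookup standing in for a real retriever (production systems typically use vector search) and the final LLM call left as a prompt string. All names and data here are illustrative:

```python
# Minimal sketch of retrieval augmented generation (RAG): fetch relevant
# text at question time and include it in the prompt. The model's weights
# never change; the knowledge lives in the prompt, not in the model.
documents = {
    "returns": "Items can be returned within 30 days with a receipt.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

def retrieve(question):
    """Naive keyword retrieval; real systems use vector similarity search."""
    for topic, text in documents.items():
        if topic in question.lower():
            return text
    return ""

def build_grounded_prompt(question):
    context = retrieve(question)
    # In a real system this prompt would be sent to an LLM for generation.
    return f"Answer using only this context: {context}\nQuestion: {question}"

prompt = build_grounded_prompt("What is your shipping policy?")
```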

RAG doesn’t modify the underlying model, but if you connect it to sensitive information and then ask it a question, it may very well reveal something sensitive. RAG is very powerful, but you need to use it with caution. Your AI development platform should provide ways to securely connect to APIs that can help authenticate and retrieve account information, thus allowing you to provide customers with personalized responses.

This is why you still need to think about security when using RAG. Whatever tools or information sources you give your model must meet the strictest security standards and be certified, as appropriate.

Quiq is one such platform, built from the ground up with data security (encryption in transit) and compliance (SOC 2 certified) in mind. We never store or use data without permission, and we've crafted our tools so it's as easy as possible to use RAG with just the information stores you want to plug a model into. As a security-first company, we extend this to our use of Large Language Models and our agreements with AI providers like Microsoft Azure OpenAI.

Wrapping Up on How Quiq Trains LLMs

Hopefully, you now have a much clearer picture of what Quiq does to ensure the models we use are as performant and useful as possible. With them, you can make your customers happier, improve your agents’ performance, and reduce turnover at your contact center.

If you’re interested in exploring some other common misconceptions that CX leaders face when considering incorporating generative AI into their technology stack, check out our ebook on the subject. It contains a great deal of information to help you make the best possible decision!


Does GenAI Leak Your Sensitive Data? Exposing Common AI Misconceptions (Part Three)

This is the final post in a three-part series clarifying the biggest misconceptions holding CX leaders like you back from integrating GenAI into their CX strategies. Our goal? To assuage your fears and help you start getting real about adding an AI Assistant to your contact center — all in a fun “two truths and a lie” format.

There are few faux pas as damaging and embarrassing for brands as sensitive data getting into the wrong hands. So it makes sense that data security concerns are a major deterrent for CX leaders thinking about getting started with GenAI.

In the first post of our AI Misconceptions series, we discussed why your data is definitely good enough to make GenAI work for your business. Next, we explored the different types of hallucinations that CX leaders should be aware of, and how they are 100% preventable with the right guardrails in place.

Now, let’s wrap up our series by exposing the truth about GenAI potentially leaking your company or customer data.

Misconception #3: “GenAI inadvertently leaks sensitive data.”

As we discussed in part one, AI needs training data to work. One way to collect that data is from the questions users ask. For example, if a large language model (LLM) is asked to summarize a paragraph of text, that text could be stored and used to train future models.

Unfortunately, there have been some famous examples of companies’ sensitive information becoming part of datasets used to train LLMs — take Samsung, for instance. Because of this, CX leaders often fear that using GenAI will result in their company’s proprietary data being disclosed when users interact with these models.

Truth #1: Public GenAI tools use conversation data to train their models.

Tools like OpenAI’s ChatGPT and Google Gemini (formerly Bard) are public-facing and often free — and that’s because their purpose is to collect training data. This means that any information users enter while using these tools is fair game to be used for training future models.

This is precisely how the Samsung data leak happened. The company’s semiconductor division allowed its engineers to use ChatGPT to check their source code. Not only did multiple employees copy/paste confidential code into ChatGPT, but one team member even used the tool to transcribe a recording of an internal-only meeting!

Truth #2: Properly licensed GenAI is safe.

People often confuse ChatGPT, the application or web portal, with the LLM behind it. While the free version of ChatGPT collects conversation data, OpenAI offers an enterprise LLM that does not. Other LLM providers offer similar enterprise licenses that specify that all interactions with the LLM and any data provided will not be stored or used for training purposes.

When used through an enterprise license, LLM providers are also Service Organization Control 2, or SOC 2, compliant. This means they must undergo regular audits from third parties to prove that they have the processes and procedures in place to protect companies’ proprietary data and customers’ personally identifiable information (PII).

The Lie: Enterprises must use internally-developed models only to protect their data.

Given these concerns over data leaks and hallucinations, some organizations believe that the only safe way to use GenAI is to build their own AI models. Case in point: Samsung is now “considering building its own internal AI chatbot to prevent future embarrassing mishaps.”

However, it’s simply not feasible for companies whose core business is not AI to build AI that is as powerful as commercially available LLMs — even if the company is as big and successful as Samsung. Not to mention the opportunity cost and risk of having your technical resources tied up in AI instead of continuing to innovate on your core business.

It’s estimated that training the LLM behind ChatGPT cost upwards of $4 million. It also required specialized supercomputers and access to a dataset equivalent to nearly the entire Internet. And don’t forget about maintenance: AI startup Hugging Face revealed that retraining its BLOOM LLM cost around $10 million.


Using a commercially available LLM provides enterprises with the most powerful AI available without breaking the bank — and it’s perfectly safe when properly licensed. However, it’s also important to remember that building a successful AI Assistant requires much more than developing basic question/answer functionality.

Finding a Conversational CX Platform that harnesses an enterprise-licensed LLM, empowers teams to build complex conversation flows, and makes it easy to monitor and measure Assistant performance is a CX leader’s safest bet. Not to mention, your engineering team will thank you for giving them the control and visibility they want — without the risk and overhead of building it themselves!

Feel Secure About GenAI Data Security

Companies that use free, public-facing GenAI tools should be aware that any information employees enter can (and most likely will) be used for future model-training purposes.

However, properly-licensed GenAI will not collect or use your data to train the model. Building your own GenAI tools for security purposes is completely unnecessary — and very expensive!

Want to read more or revisit the first two misconceptions in our series? Check out our full guide, Two Truths and a Lie: Breaking Down the Major GenAI Misconceptions Holding CX Leaders Back.

A Deep Dive into Asynchronous vs. Synchronous Messaging — and the Pros and Cons of Each

Text messaging has become more and more important with each successive generation of customers, and CX directors have responded by gradually making it an ever-higher priority.

But text messaging isn’t a one-size-fits-all solution; there are different ways to approach messaging interactions, and they each have their own use cases.

We’ve talked a lot, for example, about the distinction between rich messaging and plain text messaging, but another key divide is around “synchronous” and “asynchronous” messaging.

That will be our focus today. We’ll define synchronous and asynchronous messaging, explain how each applies to your messaging strategy, and provide the information you need to decide when to use one or the other.

Let’s get going!

What’s the Difference Between Asynchronous and Synchronous Messaging?

In this section, we’ll define synchronous messaging and asynchronous messaging in simple and clear terms before we discuss their differences.

What is Synchronous Messaging?

Synchronous messaging is part of a real-time conversation with a clearly defined beginning and end. Both parties must actively engage in the conversation at the same time, whether on their phones or on their keyboards.

You’ve no doubt heard of synchronized swimming or synchronized skating, and the principle is the same with synchronous messaging — everyone must participate at the same time.

What is Asynchronous Messaging?

Asynchronous messaging occurs when two parties have a conversation but don’t have to be present at the same time; what’s more, with asynchronous messaging, there’s generally not a clearly defined end to the conversation.

If you’re like many of us, text messaging with your friends and family occurs asynchronously. When both of you are available, the conversation might go back and forth seamlessly, but you could also have the same conversation over a longer period of time while you’re both working or running errands.

An Overview of Synchronous Messaging

Above, we took time to define synchronous messaging as any interaction that occurs in real-time when two or more participants are actively engaged. All of today’s channels support synchronous messaging, but web chat and many kinds of in-app messaging are particularly associated with this style of communication.

Pros of Synchronous Messaging.

For a number of reasons, synchronous messaging has an important place in customer service. A non-exhaustive list of its benefits includes the fact that:

  • Customers feel more connected: Since conversations are happening in real-time, customers instantly feel more engaged and connected to your contact center agents. They know there’s a real person on the other side of the screen helping them at this very moment, and that can change how they perceive the whole conversation.
  • It’s easy to track performance: Since synchronous messages have a defined beginning and end, it’s easier to track metrics like average resolution time to see whether your performance is trending up or down.
  • Resolutions are faster: Simple problems can be resolved faster over synchronous messaging. Customers are able to immediately get answers to their questions, so small issues don’t get dragged out.

Cons of Synchronous Messaging.

Despite this, synchronous messaging nevertheless has challenges. Here are some problems your team can face when relying solely on this type of messaging.

  • Customers spend more time waiting: During busy periods, agents cannot handle multiple conversations simultaneously, and wait times can increase.
  • Agents can only handle one conversation at a time: The key factor in a synchronous conversation is that both parties are there chatting at the same time. But doing so means your agents won’t be able to juggle multiple conversations at once, making them slower overall. The alternative, of course, is to help your agents quickly serve customers by equipping them with an AI-powered agent response tool, helping them handle more conversations in the same amount of time.
  • It’s harder to solve complex problems: Synchronous messaging may be less than ideal for situations where your agents don’t have the expertise to solve a particular problem. Customers may have to repeat themselves if they’re being passed from one agent to the next, and they’ll likely also spend more time on hold, none of which is optimal.
  • Customers can’t get answers outside of business hours: Customers are used to getting what they want when they want it. Since agents must be present for synchronous conversations, customers can only chat during business hours. The alternative, of course, is to hire more agents to work shifts throughout the day or to invest in an AI assistant that is always present.
  • It can cost more money: Since agents can’t handle as many conversations at once, you’ll likely need to hire more agents to cover the same amount of calls.

An Overview of Asynchronous Messaging

All of today’s messaging options allow for asynchronous messaging, which means that the parties involved don’t have to be present at the same time to hold a conversation.

Interactions that occur over basic SMS, Apple Messages for Business, Instagram, WhatsApp, Twitter Direct Messages, and RCS Business Messaging can be dropped and picked back up again when convenient.

Pros of Asynchronous Messaging.

When compared to synchronous messaging, asynchronous messaging comes out ahead by providing benefits to your customers and your contact center team.

Here are some of the benefits for your customers:

  • They can multitask: Since conversations happen at the customer’s convenience, customers can go about their lives while receiving help from your team. They’re not locked into a phone conversation or waiting on hold while your agents find answers, making the experience much more pleasurable.
  • Customers don’t have to repeat information: The big draw of asynchronous messaging is that it creates an ongoing conversation, meaning that your agents will have access to the conversation history. For this reason, customers won’t have to repeat themselves every time they contact customer service because their information is already there.

Here are just a few ways it improves your contact center teams’ workflows over synchronous messaging:

  • Agents can manage several conversations at once: Since conversations happen at a slower pace, agents can engage in more than one at a time – up to eight at once with a conversational AI platform like Quiq.
  • Agents show improved efficiency: Since agents can manage multiple simultaneous conversations, they can move between customers and improve their overall efficiency by a considerable amount.
  • Lower costs for your customer service center: Since agents are working faster and helping multiple customers at once, you need fewer agents. Instead, you can spend money on better training, higher quality tools, or expanding services.
  • It’s friendly to AI assistants: With asynchronous messaging, it’s relatively easy to integrate AI assistants powered by large language models. These assistants can welcome customers, gather information, and answer many basic queries, streamlining workflows and freeing agents up to focus on higher-priority tasks.

Cons of Asynchronous Messaging.

That said, asynchronous messaging does come with a few challenges:

  • It can turn short conversations into long ones: There can be situations in which a customer reaches out with a simple question, your agent has their own follow-up question, and the customer responds hours or even days later. One of the traps of asynchronous messaging is that people tend to be less urgent in replying, which could be reflected in longer resolution times and an increase in the number of open tickets your agents have on their dockets.
  • It can be harder to track: Asynchronous messaging often doesn’t have a clear beginning or end, making it difficult to measure. This issue is ameliorated to a considerable extent if you partner with a purpose-built conversational AI platform able to measure tricky, nebulous metrics like concurrent average handle time.
  • Agents have to be able to multitask: Having multiple conversations at the same time, and switching seamlessly between them, is a skill. If not trained properly, agents can get overwhelmed, which can show in their customer communications.

Implementing Synchronous and Asynchronous Messaging.

Despite their differences (or because of them), both synchronous and asynchronous messaging have a place in your customer service strategy.

When to use Synchronous Messaging.

We’ve already answered “what is synchronous messaging,” now let’s look at the situations in which synchronous messaging is the better approach, including:

  • When customers need quick answers: There’s no better reason to use synchronous messaging than when customers need quick, immediate answers. In such cases, it will often not be worth stretching a conversation out over an asynchronous communication.
  • When defusing difficult situations: As much effort as we expend trying to address customer service challenges, they inevitably happen. Upset customers don’t want to wait for replies while they go about their day; they want immediate responses so they can get their needs met, and that requires synchronous messaging.
  • When troubleshooting issues with customers: It’s much easier to walk customers through troubleshooting in real-time, instead of stretching out the conversation over hours or days.

When to Use Asynchronous Messaging.

Asynchronous messaging is best used when customer issues aren’t immediate, such as:

  • When resolving (certain) complex issues: When customers come to your service team with complex issues that can be solved more slowly, asynchronous messaging really shines. It enables multiple agents and experts to jump in and out of the chat without requiring customers to wait on hold or repeat their information. (Note, however, that there’s a tension between this point and the last point from the previous section, which counseled using synchronous messaging for exactly this purpose. To clarify, urgent issues should probably be handled with synchronous messaging; but if an issue is complex, it’s a good candidate for asynchronous communication, especially if it’s relatively non-urgent and only resolvable with help from experts in multiple areas. Use your judgment.)
  • When building relationships: Asynchronous messaging is a great way to build customer relationships. Since there’s no clear ending, customers can continue to go back to the same chat and have conversations throughout their customer journey.
  • When work is especially busy: When your customer service team is overwhelmed, asynchronous messaging allows them to prioritize customer issues and handle the most timely ones first. The tools provided by a conversational AI platform like Quiq can help by, for example, gauging customer sentiment to determine who needs immediate attention and who can wait for a response.

Embrace Asynchronous and Synchronous Messaging.

We covered a lot of ground in this article! We defined synchronous and asynchronous messaging, discussed the pros and cons of each, and provided invaluable guidance into when to utilize one over the other.

Another subject we’ve touched on repeatedly is the value that generative AI can bring to organizations focused on customer experience. Check out this whitepaper for more details. Our research has convinced us that generative AI is one of the next big trends shaping our industry, and you don’t want to be left behind.

Will GenAI Hallucinate and Hurt Your Brand? Exposing Common AI Misconceptions (Part Two)

This is the second post in a three-part series clarifying the biggest misconceptions holding CX leaders like you back from integrating GenAI into their CX strategies. Our goal? To assuage your fears and help you start getting real about adding an AI Assistant to your contact center — all in a fun “two truths and a lie” format.

Did you know that the Golden Gate Bridge was transported for the second time across Egypt in October of 2016?

Or that the world record for crossing the English Channel entirely on foot is held by Christof Wandratsch of Germany, who completed the crossing in 14 hours and 51 minutes on August 14, 2020?

Probably not, because GenAI made these “facts” up. They’re called hallucinations, and AI hallucination misconceptions are holding a lot of CX leaders back from getting started with GenAI.

In the first post of our AI Misconceptions series, we discussed why your data is definitely good enough to make GenAI work for your business. In fact, you actually need a lot less data to get started with an AI Assistant than you probably think.

Now, we’re debunking AI hallucination myths and separating some of the biggest AI hallucination facts from fiction. Could adding an AI Assistant to your contact center put your brand at risk? Let’s find out.

Misconception #2: “GenAI will hallucinate and hurt my brand.”

While the example hallucinations provided above are harmless and even a little funny, this isn’t always the case. Unfortunately, there are many examples of times chatbots have cussed out customers or made racist or sexist remarks. This causes a lot of concern among CX leaders looking to use an AI Assistant to represent their brand.

Truth #1: Hallucinations are real (no pun intended).

Understanding AI hallucinations hinges on realizing that GenAI wants to provide answers — whether or not it has the right data. Hallucinations like those in the examples above occur for two common reasons.

AI-Induced Hallucinations Explained:

  1. The large language model (LLM) simply does not have the correct information it needs to answer a given question. This is what causes GenAI to get overly creative and start making up stories that it presents as truth.
  2. The LLM has been given an overly broad and/or contradictory dataset. In other words, the model gets confused and begins to draw conclusions that are not directly supported in the data, much like a human would do if they were inundated with irrelevant and conflicting information on a particular topic.

Truth #2: There’s more than one type of hallucination.

Contrary to popular belief, hallucinations aren’t just incorrect answers: They can also be classified as correct answers to the wrong questions. And these types of hallucinations are actually more common and more difficult to control.

For example, imagine a company’s AI Assistant is asked to help troubleshoot a problem that a customer is having with their TV. The Assistant could give the customer correct troubleshooting instructions — but for the wrong television model. In this case, GenAI isn’t wrong, it just didn’t fully understand the context of the question.


The Lie: There’s no way to prevent your AI Assistant from hallucinating.

Many GenAI “bot” vendors attempt to fine-tune an LLM, connect clients’ knowledge bases, and then trust it to generate responses to their customers’ questions. This approach will always result in hallucinations. A common workaround is to pre-program “canned” responses to specific questions. However, this leads to unhelpful and unnatural-sounding answers even to basic questions, which then wind up being escalated to live agents.

In contrast, true AI Assistants powered by the latest Conversational CX Platforms leverage LLMs as a tool to understand and generate language — but there’s a lot more going on under the hood.

First of all, preventing hallucinations is not just a technical task. It requires a layer of business logic that controls the flow of the conversation by providing a framework for how the Assistant should respond to users’ questions.

This framework guides a user down a specific path that enables the Assistant to gather the information the LLM needs to give the right answer to the right question. This is very similar to how you would train a human agent to ask a specific series of questions before diagnosing an issue and offering a solution. Meanwhile, in addition to understanding what the intent of the customer’s question is, the LLM can be used to extract additional information from the question.

Referred to as “pre-generation checks,” these filters are used to determine attributes such as whether the question was from an existing customer or prospect, which of the company’s products or services the question is about, and more. These checks happen in the background in mere seconds and can be used to select the right information to answer the question. Only once the Assistant understands the context of the client’s question and knows that it’s within scope of what it’s allowed to talk about does it ask the LLM to craft a response.
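As a rough illustration of this layered approach (the check names and logic below are hypothetical, not Quiq's actual implementation), pre- and post-generation checks around an LLM call might look like this:

```python
# Hypothetical sketch of the checks that wrap an LLM call: confirm the
# question is in scope before generating, and confirm the response is
# grounded in an approved source before releasing it.
IN_SCOPE_TOPICS = {"billing", "shipping", "returns"}

def pre_generation_checks(question):
    """Classify the question and confirm it is in scope before any LLM call."""
    topic = next((t for t in IN_SCOPE_TOPICS if t in question.lower()), None)
    return {"in_scope": topic is not None, "topic": topic}

def post_generation_checks(response, approved_sources):
    """Only release a response that quotes a pre-approved source of truth."""
    return any(source in response for source in approved_sources)

result = pre_generation_checks("Can I get a refund on my returns?")
# Only if result["in_scope"] is True would the LLM be asked for a response,
# and that response would still have to pass post_generation_checks.
```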

But the checks and balances don’t end there: The LLM is only allowed to generate responses using information from specific, trusted sources that have been pre-approved, and not from the dataset it was trained on.

In other words, humans are responsible for providing the LLM with a source of truth that it must “ground” its response in. In technical terms, this is called Retrieval Augmented Generation, or RAG — and if you want to get nerdy, you can read all about it here!

Last but not least, once a response has been crafted, a series of “post-generation checks” happens in the background before returning it to the user. You can check out the end-to-end process in the diagram below:

[Diagram: the end-to-end RAG response pipeline]

Give Hallucination Concerns the Heave-Ho

To sum it up: Yes, hallucinations happen. In fact, there’s more than one type of hallucination that CX leaders should be aware of.

However, now that you understand the reality of AI hallucination, you know that it’s totally preventable. All you need are the proper checks, balances, and guardrails in place, both from a technical and a business logic standpoint.

Now that you’ve had your biggest misconceptions about AI hallucination debunked, keep an eye out for the next blog in our series, all about GenAI data leaks. Or, learn the truth about all three of CX leaders’ biggest GenAI misconceptions now when you download our guide, Two Truths and a Lie: Breaking Down the Major GenAI Misconceptions Holding CX Leaders Back.


What is Rich Messaging? – A Guide for CX Leaders Weighing the Benefits

If by chance you haven’t heard of this new frontier in text-based customer communication, your first question is probably, “what is rich messaging?”

Well, you’re in luck! We wrote this piece specifically to get to the bottom of the subject. Here, we offer a deep dive into rich messaging, the capabilities it unlocks, and its implications for CX. By the time you’re done, you’ll better understand why rich messaging should be central to your customer outreach strategy, and the many ways in which it can make your job easier.

What Is Rich Messaging?

Rich messaging aims to support person-to-person or business-to-person communication with upgraded, interactive messages. Senders can attach high-resolution photos, videos, audio messages, GIFs, and an array of other media to enhance the receiver’s experience while conveying a lot more information with each message.

Before going any further, we should clarify that “rich messaging” is an umbrella term covering modern interactive messaging, not the name of any single protocol. Google’s Rich Communication Services (RCS), for example, is one approach to rich messaging, but it is not the same thing as rich messaging in general.

That said, you might still be wondering: what is a rich communication service message, exactly? As you can guess, it’s simply a rich message sent over an appropriate protocol, with all the advantages that protocol offers.

For a number of reasons, rich messaging applications have supplanted SMS in both personal and professional outreach. SMS messages simply do not support many staples of modern communication, such as group chats or “read” receipts. What’s more, the reach of SMS will remain limited because it requires a cellular connection, whereas rich messages can be sent over the internet.

Though SMS will probably be around for a while, rich messaging is becoming increasingly popular as companies have been trending toward greater use of applications like WhatsApp. Armed with these and similar channels, CX directors can now:

  • More easily capture new customers with compelling outreach;
  • Resolve customer issues directly via text, chat, or social media messaging (a huge advantage given how attached we’ve all become to our phones);
  • Interact with customers in real time, a capability more and more people are looking for when seeking help;
  • Gather and act on analytics;
  • Scale their communications while simultaneously reducing the burden on contact center agents.

Given these facts, it’s no surprise that more and more CX leaders are making texting a key component of building lasting customer relationships.

Why is Expanding Communication Channels Important?

Expanding communication channels matters for the same reason the Internet and the fax machine once mattered: businesses must stay relevant, so they are constantly looking for the next technology that improves and expands customer interactions and provides an edge over the competition. Let’s drill into this a little more.

As you know, customer expectations change over time. It is startling to think that the first text message was sent nearly 30 years ago, but in that time, texting has become the default way of carrying on many different kinds of interactions. This is why it is crucial to develop communication channels that meet your customers where they are.

Modern consumers are incredibly busy, tech-savvy, and have a massive amount of information at their fingertips. They are not interested in calling a company for help unless there is absolutely no other choice, which is why they’ve been gravitating more toward text messaging for a while.

In the ongoing battle for limited customer attention, therefore, the forward-thinking CX director would be wise to put resources into this channel – along with any platforms that make that channel more fertile.

What is Rich Messaging on Different Platforms?

Now that you have more perspective on what rich messaging is and what it offers, let’s spend some time talking about which platforms you should focus on.

There are a few major providers of rich messaging, but we’ll focus on Apple and WhatsApp. Apple has long been a communication giant, but with billions of users worldwide, Facebook’s WhatsApp has certainly earned its spot at the table.

The sections below provide more details about how rich messaging works on each.

What is Rich Messaging on Apple?

Through Apple Messages for Business, contact centers can offer their customers a direct line of communication. This allows for far greater speed and convenience, to say nothing of the personalization opportunities opened up by artificial intelligence (more on this shortly).

For more information, check out our dedicated article on rich communication with Apple Messages for Business.

What is Rich Messaging on WhatsApp?

WhatsApp is a widely used application that brings rich messaging, including texts, voice messages, and video calls, to over two billion users worldwide. Because it runs over a simple internet connection, WhatsApp allows users to bypass the traditional costs associated with global communication, making it a cost-effective choice.

Given its vast user base, many international brands are adopting WhatsApp to connect with their customers. WhatsApp Business is an extension of WhatsApp, and it offers enhanced features tailored for business use.

It supports integration with tools like the Quiq conversational AI platform, which can automatically transcribe voice messages and allows for the export of these conversations for analysis using technologies like natural language processing.

For more information, check out our dedicated article on WhatsApp Business.

The Benefits of Rich Messages for Businesses

Engaging with consumers in more meaningful ways is one of the keys to driving sales and repeat purchases. Whether on Apple, WhatsApp, or another channel, rich messaging is one of the best ways of interacting with customers; it’s convenient and powerful enough to help a CX leader rise above their competition.

Below, we will get into more specifics about the advantages to be had from using rich messaging.

1. Cost-Effectiveness

It may be called “the bottom line,” but let’s face it: your budget probably ranks pretty highly on your list of priorities. Because it works over the internet, rich messaging is a great way for CX directors to connect with customers without breaking the bank.

But it can also help your organization save money by reducing customer support costs. When consumers need to talk to someone at your business, they can speak to knowledgeable agents (or a large language model trained on those agents’ output) through your rich messaging platform. You will reduce the need to provide the hardware and staffing required to run a full contact center, and you will be able to use those savings to invest in other areas of your business.

In this same vein, rich messaging makes it far easier to engage in asynchronous communications. This means agents are able to handle multiple conversations at the same time, resulting in further savings.

Finally, rich messaging is far more scalable than almost any other approach to customer outreach, especially when you effectively leverage AI. Once you’ve figured out what you want your message to be, communicating it to ten times as many people is relatively straightforward with rich messaging.

2. Real-Time Insights

When companies integrate rich messaging with a platform that offers strong real-time analytics, they gain access to conversation-level insights they can use to improve contact center performance.

They can generate reports on click rates and other helpful interaction metrics, for instance, giving CX leaders a feedback loop they can use to test changes and see what improves customer satisfaction, loyalty, and lifetime value.
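To make the feedback loop concrete, here is a minimal sketch of computing open and click rates from per-message interaction logs. The record structure and field names ("delivered", "opened", "clicked") are hypothetical placeholders, not any particular platform’s export format:

```python
# Hypothetical sketch: basic engagement metrics from message event logs.
# Field names are illustrative, not a real platform's schema.

def engagement_rates(events: list[dict]) -> dict:
    """Summarize open and click rates across delivered messages."""
    delivered = sum(1 for e in events if e.get("delivered"))
    opened = sum(1 for e in events if e.get("opened"))
    clicked = sum(1 for e in events if e.get("clicked"))
    if delivered == 0:
        return {"open_rate": 0.0, "click_rate": 0.0}
    return {
        "open_rate": opened / delivered,
        "click_rate": clicked / delivered,
    }

events = [
    {"delivered": True, "opened": True, "clicked": True},
    {"delivered": True, "opened": True, "clicked": False},
    {"delivered": True, "opened": False, "clicked": False},
    {"delivered": True, "opened": True, "clicked": False},
]
print(engagement_rates(events))  # {'open_rate': 0.75, 'click_rate': 0.25}
```

Tracked over successive campaigns, metrics like these are what let a CX leader test a change and see whether satisfaction and engagement actually move.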

3. Rich Messaging is Native to the Devices Customers are Already Using

You could pay for the most compelling billboard in the history of marketing, but if it’s on the moon where no one will see it, it’s not going to do you much good. For this reason, we’ve long pointed out that it’s important to meet your customers where they are – and these days, they’re on their phones.

When combined with the statistics in the following section, we think that the case for rich messaging as a central pillar in the CX director’s communications strategy is very strong.

4. Increased Engagement

When developing a customer communication strategy, it’s important to evaluate the potential engagement level of various channels. As it turns out, text messaging consistently achieves higher open and response rates compared to other methods.

The data supporting this is quite strong: in a 2018 survey, fully three-quarters of respondents indicated that they’d prefer to interact with brands through rich messages.

This high level of engagement demonstrates the significant potential of text messaging as a communication strategy. Considering that only about 25% of emails are opened and read, it becomes clear that investing in text messaging as a primary communication channel is a wise decision for effectively reaching and engaging customers.

5. The Human Touch (but with AI!)

Customers expect more personalization these days, and rich messaging gives businesses a way to customize communication with unprecedented scale and sophistication.

This customization is facilitated by machine learning, a technology at the cutting edge of automated content customization. A familiar example is Netflix, which uses algorithms to detect viewer preferences and recommend corresponding shows. Now, thanks to advancements in generative AI, this same technology is being integrated into text messaging.

Previously, language models lacked the flexibility needed for personalized customer interactions, often sounding mechanical and inauthentic. Today’s models, however, have greatly enhanced agents’ ability to adapt their conversations to fit specific contexts. While these models haven’t replaced the unique qualities of human interaction, they mark a significant improvement for CX directors aiming to elevate the customer experience, keep customers loyal, and boost their lifetime value. What’s more, used consistently over time, these innovations will help a CX leader stand out in a crowded marketplace while making better decisions.

To make use of this, though, it helps to partner with a platform that offers this functionality out of the box.

6. Security

In addition to streamlining connections between your organization and its consumers, rich messaging may offer dependable security and peace of mind.

Trust and transparency have always been important, but with deep fakes and data breaches on the rise, they’re more crucial than ever. Some rich messaging applications, like WhatsApp, support end-to-end encryption, meaning your customers can interact with you knowing full well that their information is safe.

But, to reiterate, this is not the case for all rich messaging services, so be sure to do your own research first.

What is Rich Messaging? It’s the Future!

Businesses across every industry need to update their approach to messaging to remain relevant with consumers, but that’s especially true for CX leaders. Significant data shows that traditional customer communication channels, like phone, email, and web chat, have already fallen to the bottom of the preference list, and you need a plan in place that allows you to react to changes in customer desires.

Rich messaging is the technology that makes this possible, and it’s even more impactful when you partner with a platform like Quiq that enables personalization, analytics, and better engagement with your customers. Read more here to learn about the communication channels we support!

Request A Demo

Is Your CX Data Good Enough for GenAI? Exposing Common AI Misconceptions (Part One)

If you’re feeling unprepared for the impact of generative artificial intelligence (GenAI), you’re not alone. In fact, nearly 85% of CX leaders feel the same way. But the truth is that the transformative nature of this technology simply can’t be ignored — and neither can your boss, who asked you to look into it.

We’ve all heard horror stories of racist chatbots and massive data leaks ruining brands’ reputations. But we’ve also seen statistics around the massive time and cost savings brands can achieve by offloading customers’ frequently asked questions to AI Assistants. So which is it?

This is the first post in a three-part series clarifying the biggest misconceptions holding CX leaders like you back from integrating GenAI into their CX strategies. Our goal? To assuage your fears and help you start getting real about adding an AI Assistant to your contact center — all in a fun “two truths and a lie” format. Prepare to have your most common AI misconceptions debunked!

Misconception #1: “My data isn’t good enough for GenAI.”

Answering customer inquiries usually requires two types of data:

  1. Knowledge (e.g. an order return policy) and
  2. Information from internal systems (e.g. the specific details of an order).

It’s easy to get caught up in overthinking the impact of data quality on AI performance and wondering whether your knowledge is even good enough to make an AI Assistant useful for your customers.

Updating hundreds of help desk articles is no small task, let alone building an entire knowledge base from scratch. Many CX leaders worry about how much work it will take to clean up their data and whether their team has enough resources to support a GenAI initiative. And for GenAI to be as effective as a human agent, it needs the same level of access to internal systems that human agents have.

Truth #1: You have to have some amount of data.

Data is necessary to make AI work — there’s no way around it. You must provide some data for the model to access in order to generate answers. This is one of the most basic AI performance factors.

But we have good news: You need a lot less data than you think.

One of the most common myths about AI and data in CX is that it’s necessary to answer every possible customer question. Instead, focus on ensuring you have the knowledge necessary to answer your most frequently asked questions. This small step forward will have a major impact for your team without requiring a ton of time and resources to get started.

Truth #2: Quality matters more than quantity.

Given the importance of relevant data in AI, a few succinct paragraphs of accurate information are better than volumes of outdated or conflicting documentation. But even then, don’t sweat the small stuff.

For example, did a product name change fail to make its way through half of your help desk articles? Are there unnecessary hyperlinks scattered throughout? Was it written for live agents versus customers?

No problem — the right Conversational CX Platform can easily address these AI data dependency concerns without requiring additional support from your team.

The Lie: Your data has to be perfectly unified and specifically formatted to train an AI Assistant.

Don’t worry if your data isn’t well-organized or perfectly formatted. The reality is that most companies have services and support materials scattered across websites, knowledge bases, PDFs, .csvs, and dozens of other places — and that’s okay!

Today, the tools and technology exist to make aggregating this fragmented data a breeze. They can then cleanse and format it in a way that makes sense for a large language model (LLM) to use.

For example, if you have an agent training manual in Google Docs and a product manual in PDF, this information can be disassembled, reformatted, and rewritten by an AI-powered transformation process that makes it usable by the model.
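To illustrate the general idea, here is a minimal sketch of one way fragmented documents might be normalized into uniform text chunks an LLM can retrieve over. The cleaning rules, chunk size, and sample documents are all illustrative assumptions, not any real ingestion pipeline:

```python
# Illustrative sketch: normalizing fragmented source docs into uniform
# text chunks for an LLM knowledge base. All details here are assumed
# for demonstration, not a real platform's ingestion API.
import re

def clean(text: str) -> str:
    """Strip stray markup and collapse whitespace."""
    text = re.sub(r"<[^>]+>", " ", text)   # drop leftover HTML tags
    return re.sub(r"\s+", " ", text).strip()

def chunk(text: str, max_chars: int = 500) -> list[str]:
    """Split cleaned text into retrieval-sized chunks at word boundaries."""
    chunks, current = [], ""
    for word in text.split():
        if len(current) + len(word) + 1 > max_chars:
            chunks.append(current)
            current = word
        else:
            current = f"{current} {word}".strip()
    if current:
        chunks.append(current)
    return chunks

# Documents might arrive from help desk exports, PDFs, CSVs, etc.
raw_docs = [
    "<p>Returns are accepted   within 30 days of purchase.</p>",
    "Agents should verify the order number before issuing a refund.",
]
knowledge_base = [c for doc in raw_docs for c in chunk(clean(doc))]
print(knowledge_base)
```

A production pipeline would add format-specific loaders (PDF, Google Docs, HTML) and smarter rewriting, but the shape is the same: ingest, clean, and chunk everything into one consistent store.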

What’s more, the data used by your AI Assistant should be consistent with the data you use to train your human agents. This means that building a special repository of information for your AI Assistant to learn from is not only unnecessary, it’s not recommended. The very best AI platforms maintain this continuity automatically, processing and formatting new information for your Assistant as it’s published and removing any information that’s been deleted.

Put Those Data Doubts to Bed

Now you know that your data is definitely good enough for GenAI to work for your business. Yes, quality matters more than quantity, but it doesn’t have to be perfect.

The technology exists to unify and format your data so that it’s usable by an LLM. And providing knowledge around even a handful of frequently asked questions can give your team a major lift right out the gate.

Keep an eye out for the next blog in our series, all about GenAI hallucinations. Or, learn the truth about all three of CX leaders’ biggest GenAI misconceptions now when you download our guide, Two Truths and a Lie: Breaking Down the Major GenAI Misconceptions Holding CX Leaders Back.
