• Don't miss our webinar: Take Your Omnichannel CX to New Heights: How Spirit Airlines Is Upgrading Self-Service with Agentic AI  Watch now -->

How to Get the Most out of Your NLP Models with Preprocessing

Along with computer vision, natural language processing (NLP) is one of the great triumphs of modern machine learning. While ChatGPT is all the rage and large language models (LLMs) are drawing everyone’s attention, that doesn’t mean that the rest of the NLP field just goes away.

NLP endeavors to apply computation to human-generated language, whether that be the spoken word or text existing in places like Wikipedia. There are any number of ways in which this would be relevant to customer experience and service leaders, including:

Today, we’re going to briefly touch on what NLP is, but we’ll spend the bulk of our time discussing how textual training data can be preprocessed to get the most out of an NLP system. There are a few branches of NLP, like speech synthesis and text-to-speech, which we’ll be omitting.

Armed with this context, you’ll be better prepared to evaluate using NLP in your business (though if you’re building customer-facing chatbots, you can also let the Quiq platform do the heavy lifting for you).

What is Natural Language Processing?

In the past, we’ve jokingly referred to NLP as “doing computer stuff with words after you’ve tricked them into being math.” This is meant to be humorous, but it does capture the basic essence.

Remember, your computer doesn’t know what words are, all it does is move 1’s and 0’s around. A crucial step in most NLP applications, therefore, is creating a numerical representation out of the words in your training corpus.

There are many ways of doing this, but today a popular method is using word vector embeddings. Also known simply as “embeddings”, these are vectors of real numbers. They come from a neural network or a statistical algorithm like word2vec and stand in for particular words.

The technical details of this process don’t concern us in this post, what’s important is that you end up with vectors that capture a remarkable amount of semantic information. Words with similar meanings also have similar vectors, for example, so you can do things like find synonyms for a word by finding vectors that are mathematically close to it.

These embeddings are the basic data structures used across most of NLP. They power sentiment analysis, topic modeling, and many other applications.

For most projects it’s enough to use pre-existing word vector embeddings without going through the trouble of generating them yourself.

Are Large Language Models Natural Language Processing?

Large language models (LLMs) are a subset of natural language processing. Training an LLM draws on many of the same techniques and best practices as the rest of NLP, but NLP also addresses a wide variety of other language-based tasks.

Conversational AI is a great case in point. One way of building a conversational agent is by hooking your application up to an LLM like ChatGPT, but you can also do it with a rules-based approach, through grounded learning, or with an ensemble that weaves together several methods.

Getting the Most out of Your NLP Models with Preprocessing

Data Preprocessing for NLP

If you’ve ever sent a well-meaning text that was misinterpreted, you know that language is messy. For this reason, NLP places special demands on the data engineers and data scientists who must transform text in various ways before machine learning algorithms can be trained on it.

In the next few sections, we’ll offer a fairly comprehensive overview of data preprocessing for NLP. This will not cover everything you might encounter in the course of preparing data for your NLP application, but it should be more than enough to get started.

Why is Data Preprocessing Important?

They say that data is the new oil, and just as you can’t put oil directly in your gas tank and expect your car to run, you can’t plow a bunch of garbled, poorly-formatted language data into your algorithms and expect magic to come out the other side.

But what, precisely, counts as preprocessing will depend on your goals. You might choose to omit or include emojis, for example, depending on whether you’re training a model to summarize academic papers or write tweets for you.

That having been said, there are certain steps you can almost always expect to take, including standardizing the case of your language data, removing punctuation, white spaces and stop words, segmenting and tokenizing, etc.

We treat each of these common techniques below.

Segmentation and Tokenization

An NLP model is always trained on some consistent chunk of the full data. When ChatGPT was trained, for example, they didn’t put the entire internet in a big truck and back it up to a server farm, they used self-supervised learning.

Simplifying greatly, this means that the underlying algorithm would take, say, the first few three sentences of a paragraph and then try to predict the remaining sentence on the basis of the text that came before. Over time it sees enough language to guess that “to be or not to be, that is ___ ________” ends with “the question.”

But how was ChatGPT shown the first three sentences? How does that process even work?

A big part of the answer is segmentation and tokenization.

With segmentation, we’re breaking a full corpus of training text – which might contain hundreds of books and millions of words – down into units like words or sentences.

This is far from trivial. In English, sentences end with a period, but words like “Mr.” and “etc.” also contain them. It can be a real challenge to divide text into sentences without also breaking “Mr. Smith is cooking the steak.” into “Mr.” and “Smith is cooking the steak.”

Tokenization is a related process of breaking a corpus down into tokens. Tokens are sometimes described as words, but in truth they can be words, short clusters of a few words, sub-words, or even individual characters.

This matters a lot to the training of your NLP model. You could train a generative language model to predict the next sentence based on the preceding sentences, the next word based on the preceding words, or the next character based on the preceding characters.

Regardless, in both segmentation and tokenization, you’re decomposing a whole bunch of text down into individual units that your algorithm can work with.

Making the Case Consistent

It’s standard practice to make the case of your text consistent throughout, as this makes training simpler. This is usually done by lowercasing all the text, though we suppose if you’re feeling rebellious there’s no reason you couldn’t uppercase it (but the NLP engineers might not invite you to their fun Natural Language Parties if you do.)

Fixing Misspellings

NLP, like machine learning more generally, is only as good as its data. If you feed it text with a lot of errors in spelling, it will learn those errors and they’ll show up again later.

This probably isn’t something you’ll want to do manually, and if you’re using a popular language there’s likely a module you can use to do this for you. Python, for example, has TextBlob, Autocorrect, and Pyspellchecker libraries that can handle spelling errors.

Getting Rid of the Punctuation Marks

Natural language tends to have a lot of punctuation, with English utilizing dozens of marks such as ‘!’ and ‘;’ for emphasis and clarification. These are usually removed as part of preprocessing.

This task is something that can be handled with regular expressions (if you have the patience for it…), or you can do it with an NLP library like Natural Language Toolkit (NLTK).

Expanding the Contractions

Contractions are shortened versions of words, like turning “do not” into “don’t” or “would not” into “wouldn’t”. These, too, can be problematic for NLP algorithms and are usually removed during preprocessing.

Stemming

In linguistics, the stem of a word is its root. The words “runs”, “ran”, and “running” all have the word “run” as their base.

Stemming is one of two approaches for reducing the myriad tenses of a word down into a single basic representation. The other is lemmatization, which we’ll discuss in the next section.

Stemming is the cruder of the two, and is usually done with an algorithm known as Porter’s Stemmer. This stemmer doesn’t always produce the stem you’d expect. “Cats” becomes “cat” while “ponies” becomes “poni”, for example. Nevertheless, this is probably sufficient for basic NLP tasks.

Lemmatization

A more sophisticated version of stemming is lemmatization. A stemmer wouldn’t know the difference between the word “left” in “cookies are ahead and to the left” and “he left the book on the table”, whereas a lemmatizer would.

More generally, a lemmatizer uses language-specific context to handle very subtle distinctions between words, and this means it will usually take longer to run than a stemmer.

Whether it makes sense to use a stemmer or a lemmatizer will depend on the use case you’re interested in. Under most circumstances, lemmatizers are more accurate, and stemmers are faster.

Removing Extra White Spaces

It’ll often be the case that a corpus will have an inconsistent set of spacing conventions. This, too, is something algorithm will learn unless it’s remedied during preprocessing.
Removing Stopwords

This is a big one. “Stopwords” are words like “the” or “is” are all stopwords, and they’re almost always removed before training begins because they don’t add much in the way of useful information.

Because this is done so commonly, you can assume that the NLP library you’re using will have some easy way of doing it. NLTK, for example, has a native list of stopwords that can simply be imported:

from nltk.corpus import stopwords

With this, you can simply exclude the stopwords from the corpus.

Ditching the Digits

If you’re building an NLP application that processes data containing numbers, you’ll probably want to remove that as the training algorithm might end up inserting random digits here and there.

This, alas, is something that will probably need to be done with regular expressions.

Part of Speech Tagging

Part of speech tagging refers to the process of automatically tagging a word with extra grammatical information about whether it’s a noun, verb, etc.

This is certainly not something that you always have to do (we’ve completed a number of NLP projects where it never came up), but it’s still worth understanding what it is.

Supercharging Your NLP Applications

Natural language processing is an enormously powerful constellation of techniques that allow computers to do worthwhile work on text data. It can be used to build question-answering systems, tutors, chatbots, and much more.

But to get the most out of it, you’ll need to preprocess the data. No matter how much computing you have access to, machine learning isn’t of much use with bad data. Techniques like removing stopwords, expanding contractions, and lemmatization create corpora of text that can then be fed to NLP algorithms.

Of course, there’s always an easier way. If you’d rather skip straight to the part where cutting-edge conversational AI directly adds value to your business, you can also reach out to see what the Quiq platform can do.

What Is Transfer Learning? – The Role of Transfer Learning in Building Powerful Generative AI Models

Machine learning is hard work. Sure, it only takes a few minutes to knock out a simple tutorial where you’re training an image classifier on the famous iris dataset, but training a big model to do something truly valuable – like interacting with customers over a chat interface – is a much greater challenge.

Transfer learning offers one possible solution to this problem. By making it possible to train a model in one domain and reuse it in another, transfer learning can reduce demands on your engineering team by a substantial amount.

Today, we’re going to get into transfer learning, defining what it is, how it works, where it can be applied, and the advantages it offers.

Let’s get going!

What is Transfer Learning in AI?

In the abstract, transfer learning refers to any situation in which knowledge from one task, problem, or domain is transferred to another. If you learn how to play the guitar well and then successfully use those same skills to pick up a mandolin, that’s an example of transfer learning.

Speaking specifically about machine learning and artificial intelligence, the idea is very similar. Transfer learning is when you pre-train a model on one task or dataset and then figure out a way to reuse it for another (we’ll talk about methods later).

If you train an image model, for example, it will tend to learn certain low-level features (like curves, edges, and lines) that show up in pretty much all images. This means you could fine-tune the pre-trained model to do something more specialized, like recognizing faces.

Why Transfer Learning is Important in Deep Learning Models

Building a deep neural network requires serious expertise, especially if you’re doing something truly novel or untried.

Transfer learning, while far from trivial, is simply not as taxing. GPT-4 is the kind of project that could only have been tackled by some of Earth’s best engineers, but setting up a fine-tuning pipeline to get it to do good sentiment analysis is a much simpler job.

By lowering the barrier to entry, transfer learning brings advanced AI into reach for a much broader swath of people. For this reason alone, it’s an important development.

Transfer Learning vs. Fine-Tuning

And speaking of fine-tuning, it’s natural to wonder how it’s different from transfer learning.

The simple answer is that fine-tuning is a kind of transfer learning. Transfer learning is a broader concept, and there are other ways to approach it besides fine-tuning.

What are the 5 Types of Transfer Learning?

Broadly speaking, there are five major types of transfer learning, which we’ll discuss in the following sections.

Domain Adaptation

Under the hood, most modern machine learning is really just an application of statistics to particular datasets.

The distribution of the data a particular model sees, therefore, matters a lot. Domain adaptation refers to a family of transfer learning techniques in which a model is (hopefully) trained such that it’s able to handle a shift in distributions from one domain to another (see section 5 of this paper for more technical details).

Domain Confusion

Earlier, we referenced the fact that the layers of a neural network can learn representations of particular features – one layer might be good at detecting curves in images, for example.

It’s possible to structure our training such that a model learns more domain invariant features, i.e. features that are likely to show up across multiple domains of interest. This is known as domain confusion because, in effect, we’re making the domains as similar as possible.

Multitask Learning

Multitask learning is arguably not even a type of transfer learning, but it came up repeatedly in our research, so we’re adding a section about it here.

Multitask learning is what it sounds like; rather than simply training a model on a single task (i.e. detecting humans in images), you attempt to train it to do several things at once.

The debate about whether multitask learning is really transfer learning stems from the fact that transfer learning generally revolves around adapting a pre-trained model to a new task, rather than having it learn to do more than one thing at a time.

One-Shot Learning

One thing that distinguishes machine learning from human learning is that the former requires much more data. A human child will probably only need to see two or three apples before they learn to tell apples from oranges, but an ML model might need to see thousands of examples of each.

But what if that weren’t necessary? The field of one-shot learning addresses itself to the task of learning e.g. object categories from either one example or a small number of them. This idea was pioneered in “One-Shot Learning of Object Categories”, a watershed paper co-authored by Fei-Fei Li and her collaborators. Their Bayesian one-shot learner was able to “…to incorporate prior knowledge of the object world into the learning scheme”, and it outperformed a variety of other models in object recognition tasks.

Zero-Shot Learning

Of course, there might be other tasks (like translating a rare or endangered language), for which it is effectively impossible to have any labeled data for a model to train on. In such a case, you’d want to use zero-shot learning, which is a type of transfer learning.

With zero-shot learning, the basic idea is to learn features in one data set (like images of cats) that allow successful performance on a different data set (like images of dogs). Humans have little problem with this, because we’re able to rapidly learn similarities between types of entities. We can see that dogs and cats both have tails, both have fur, etc. Machines can perform the same feat if the data is structured correctly.

How Does Transfer Learning Work?

There are a few different ways you can go about utilizing transfer learning processes in your own projects.

Perhaps the most basic is to use a good pre-trained model off the shelf as a feature extractor. This would mean keeping the pre-trained model in place, but then replacing its final layer with a layer custom-built for your purposes. You could take the famous AlexNet image classifier, remove its last classification layer, and replace it with your own, for example.

Or, you could fine-tune the pre-trained model instead. This is a more involved engineering task and requires that the pre-trained model be modified internally to be better suited to a narrower application. This will often mean that you have to freeze certain layers in your model so that the weights don’t change, while simultaneously allowing the weights in other layers to change.

What are the Applications of Transfer Learning?

As machine learning and deep learning have grown in importance, so too has transfer learning become more crucial. It now shows up in a variety of different industries. The following are some high-level indications of where you might see transfer learning being applied.

Speech recognition across languages: Teaching machines to recognize and process spoken language is an important area of AI research and will be of special interest to those who operate contact centers. Transfer learning can be used to take a model trained in a language like French and repurpose it for Spanish.

Training general-purpose game engines: If you’ve spent any time playing games like chess or go, you know that they’re fairly different. But, at a high enough level of abstraction, they still share many features in common. That’s why transfer learning can be used to train up a model on one game and, under certain conditions, use it in another.

Object recognition and segmentation: Our Jetsons-like future will take a lot longer to get here if our robots can’t learn to distinguish between basic objects. This is why object recognition and object segmentation are both such important areas of research. Transfer learning is one way of speeding up this process. If models can learn to recognize dogs and then quickly be re-purposed for recognizing muffins, then we’ll soon be able to outsource both pet care and cooking breakfast.

transfer_learning_chihuahua
In fairness to the AI, it’s not like we can really tell them apart!

Applying Natural Language Processing: For a long time, computer vision was the major use case of high-end, high-performance AI. But with the release of ChatGPT and other large language models, NLP has taken center stage. Because much of the modern NLP pipeline involves word vector embeddings, it’s often possible to use a baseline, pre-trained NLP model in applications like topic modeling, document classification, or spicing up your chatbot so it doesn’t sound so much like a machine.

What are the Benefits of Transfer Learning?

Transfer learning has become so popular precisely because it offers so many advantages.

For one thing, it can dramatically reduce the amount of time it takes to train a new model. Because you’re using a pre-trained model as the foundation for a new, task-specific model, far fewer engineering hours have to be spent to get good results.

There are also a variety of situations in which transfer learning can actually improve performance. If you’re using a good pre-trained model that was trained on a general enough dataset, many of the features it learned will carry over to the new task.

This is especially true if you’re working in a domain where there is relatively little data to work with. It might simply not be possible to train a big, cutting-edge model on a limited dataset, but it will often be possible to use a pre-trained model that is fine-tuned on that limited dataset.

What’s more, transfer learning can work to prevent the ever-present problem of overfitting. Overfitting has several definitions depending on what resource you consult, but a common way of thinking about it is when the model is complex enough relative to the data that it begins learning noise instead of just signal.

That means that it may do spectacularly well in training only to generalize poorly when it’s shown fresh data. Transfer learning doesn’t completely rule out this possibility, but it makes it less likely to happen.

Transfer learning also has the advantage of being quite flexible. You can use transfer learning for everything from computer vision to natural language processing, and many domains besides.

Relatedly, transfer learning makes it possible for your model to expand into new frontiers. When done correctly, a pre-trained model can be deployed to solve an entirely new problem, even when the underlying data is very different from what it was shown before.

When To Use Transfer Learning

The list of benefits we just enumerated also offers a clue as to when it makes sense to use transfer learning.

Basically, you should consider using transfer learning whenever you have limited data, limited computing resources, or limited engineering brain cycles you can throw at a problem. This will often wind up being the case, so whenever you’re setting your sights on a new goal, it can make sense to spend some time seeing if you can’t get there more quickly by simply using transfer learning instead of training a bespoke model from scratch.

Check out the second video in Quiq’s LLM Intuitions series—created by our Head of AI, Kyle McIntyre—to learn about one of the oldest forms of transfer learning: Word embeddings.

Transfer Learning and You

In the contact center space, we understand how difficult it can be to effectively apply new technologies to solve our problems. It’s one thing to put together a model for a school project, and quite another to have it tactfully respond to customers who might be frustrated or confused.

Transfer learning is one way that you can get more bang for your engineering buck. By training a model on one task or dataset and using it on another, you can reduce your technical budget while still getting great results.

You could also just rely on us to transfer our decades of learning on your behalf (see what we did there). We’ve built an industry-leading conversational AI chat platform that is changing the game in contact centers. Reach out today to see how Quiq can help you leverage the latest advances in AI, without the hassle.

How Generative AI is Supercharging Contact Center Agents

If you’re reading this, you’ve probably had a chance to play around with ChatGPT or one of the other large language models (LLMs) that have been making waves and headlines in recent months.

Concerns around automation go back a long way, but there’s long been extra worry about the possibility that machines will make human labor redundant. If you’ve used generative AI to draft blog posts or answer technical questions, it’s natural to wonder if perhaps algorithms will soon be poised to replace humans in places like contact centers.

Given how new these LLMs are there has been little scholarship on how they’ve changed the way contact centers function. But “Generative AI at Work” by Erik Brynjolfsson, Danielle Li, and Lindsey R. Raymond took aim at exactly this question.

The results are remarkable. They found that access to tools like ChatGPT not only led to a marked increase in productivity among the lowest-skilled workers, it also had positive impacts on other organizational metrics, like reducing turnover.

Today, we’re going to break this economic study down, examining its methods, its conclusions, and what they mean for the contact centers of the future.

Let’s dig in!

A Look At “Generative AI At Work”

The paper studies data from the use of a conversational AI assistant by a little over 5,000 agents working in customer support.

It contains several major sections, beginning with a technical primer on what generative AI is and how it works before moving on to a discussion of the study’s methods and results.

What is Generative AI?

Covering the technical fundamentals of generative AI will inform our efforts to understand the ways in which this AI technology affected work in the study, as well as how it is likely to do so in future deployments.

A good way to do this is to first grasp how traditional, rules-based programming works, then contrast this with generative AI.

When you write a computer program, you’re essentially creating a logical structure that furnishes instructions the computer can execute.

To take a simple case, you might try to reverse a string such as “Hello world”. One way to do this explicitly is to write code in a language like Python which essentially says:

“Create a new, empty list, then start at the end of the string we gave you and work forward, successively adding each character you encounter to that list before joining all the characters into a reversed string”:

Python code demonstrating a reverse string.

Despite the fact that these are fairly basic instructions, it’s possible to weave them into software that can steer satellites and run banking infrastructure.

But this approach is not suitable for every kind of problem. If you’re trying to programmatically identify pictures of roses, for example, it’s effectively impossible to do this with rules like the ones we used to reverse the string.

Machine learning, however, doesn’t even try to explicitly define any such rules. It works instead by feeding a model many pictures of roses, and “training” it to learn a function that lets it identify new pictures of roses it has never seen before.

Generative AI is a kind of machine learning in which gargantuan models are trained on mind-boggling amounts of text data until they’re able to produce their own, new text. Generative AI is a distinct sub-branch of ML because its purpose is generation, while other kinds of models might be aimed at tasks like classification and prediction.

Is Generative AI The Same Thing As Large Language Models?

At this point, you might be wondering how whether generative AI is the same thing as LLMs. With all the hype and movement in the space, it’s easy to lose track of the terminology.

LLMs are a subset of the broader category of generative AI. All LLMs are generative AI, but there are generative algorithms that work with images, music, chess moves, and other things besides natural language.

How Did The Researchers Study the Effects of Generative AI on Work?

Now we understand that ML learns to recognize patterns, how this is different from classical computer programming, and how generative AI fits into the whole picture.

We can now get to the meat of the study, beginning with how Brynjolfsson, Li, and Raymond actually studied the use of generative AI by workers at a contact center.

The firm from which they drew their data is a Fortune 500 company that creates enterprise software. Its support agents are located mainly in the Phillippines (with a smaller number in the U.S.) to resolve customer issues via a chat interface.

Most of the agent’s job boils down to answering questions from the owners of small businesses that use the firm’s software. Their productivity is assessed via how long it takes them to resolve a given issue (“average handle time”), the fraction of total issues a given agent is able to resolve to the customer’s satisfaction (“resolution rate”), and the net number of customers who would recommend the agent (“net promoter score.”)

Line graphs showing handle time, resolution rate and customer satisfaction using AI.

The AI used by the firm is a version of GPT which has received additional training on conversations between customers and agents. It is mostly used for two things: generating appropriate responses to customers in real-time and surfacing links to the firm’s technical documentation to help answer specific questions about the software.

Bear in mind that this generative AI system is meant to help the agents in performing their jobs. It is not intended to – and is not being trained to – completely replace them. They maintain autonomy in deciding whether and how much of the AI’s suggestions to take.

How Did Generative AI Change Work?

Next, we’ll look at what the study actually uncovered.

There were four main findings, touching on how total worker productivity was impacted, whether productivity gains accrued mainly to low-skill or high-skill workers, how access to an AI tool changed learning on the job, and how the organization changed as a result.

1. Access to Generative AI Boosted Worker Productivity

First, being able to use the firm’s AI tool increased worker productivity by almost 14%. This came from three sources: a reduction in how long it took any given agent to resolve a particular issue, an expansion in the total number of resolutions an agent was able to work on in an hour, and a small jump in the fraction of chats that were completed successfully.

The firm's AI tool increased worker productivity by almost 14%

This boost happened very quickly, showing up in the first month after deployment, growing a little in the second month, and then remaining at roughly that level for the duration of the study.

2. Access to Generative AI Was Most Helpful for Lower-Skilled Agents

Intriguingly, the greatest productivity gains were seen among agents that were relatively low-skill, such as those that were new to the job, with longer-serving, higher-skilled agents seeing virtually none.

The agents in the very bottom quintile for skill level, in fact, were able to resolve 35% more calls per hour—a substantial jump.

The agents in the very bottom quintile for skill level were able to resolve more calls per hour 35%.

With the benefit of hindsight it’s tempting to see these results as obvious, but they’re not. Earlier studies have usually found that the benefits of new computing technologies accrued to the ablest workers, or led firms to raise the bar on skill requirements for different positions.

If it’s true that generative AI is primarily going to benefit less able employees, this fact alone will distinguish it from prior waves of innovation. [1]

3. Access To Generative AI Helps New Workers “Move Down the Learning Curve”

Perhaps the most philosophically interesting conclusion drawn by the study’s authors relates to how generative AI is able to partially learn the tacit knowledge of more skilled workers.

The term “tacit knowledge” refers to the hard-to-articulate behaviors you pick up as you get good at something.

Imagine trying to teach a person how to ride a bike. It’s easy enough to give broad instructions (“check your shoelaces”, “don’t brake too hard”), but there ends up being a billion little subtleties related to foot placement, posture, etc. that are difficult to get into words.

This is true for everything, and it’s part of what distinguishes masters from novices. It’s also a major reason for the fact that many professions have been resistant to full automation.

Remember our discussion of how rule-based programming is poorly suited to tasks where the rules are hard to state? Well, that applies to tasks involving a lot of tacit knowledge. If no one, not even an expert, can tell you precisely what steps to take to replicate their results, then no one is going to be able to program a computer to do it either.

But ML and generative AI don’t face this restriction. With data sets that are big enough and rich enough, the algorithms might be able to capture some of the tacit knowledge expert contact center agents have, e.g. how they phrase replies to customers.

This is suggested by the study’s results. By analyzing the text of customer-agent interactions, the authors found that novice agents using generative AI were able to sound more like experienced agents, which contributed to their success.

4. Access to Generative AI Changed the Way the Organization Functioned

Organizations are profoundly shaped by their workers, and we should expect to see organization-level changes when a new technology dramatically changes how employees operate.

Two major findings from the study were that employee turnover was markedly reduced and there were far fewer customers “escalating” an issue by asking to speak to a supervisor. This could be because agents using generative AI were overall treated much better by customers (who have been known to become frustrated and irate), leading to less stress.

The Contact Center of the Future

Generative AI has already impacted many domains, and this trend will likely only continue going forward. “Generative AI At Work” provides a fascinating glimpse into the way that this technology changed a large contact center by boosting productivity among the least-skilled agents, helping disseminate the hard-won experience of the most-skilled agents, and overall reducing turnover and dissatisfaction.

If this piece has piqued your curiosity about how you can use advanced AI tools for customer-facing applications, schedule a demo of the Quiq conversational CX platform today.

From resolving customer complaints with chatbots to automated text-message follow-ups, we’ve worked hard to build a best-in-class solution for businesses that want to scale with AI.

Let’s see what we can do for you!

[1] See e.g. this quote: “Our paper is related to a large literature on the impact of various forms of technological adoption on worker productivity and the organization of work (e.g. Rosen, 1981; Autor et al., 1998; Athey and Stern, 2002; Bresnahan et al., 2002; Bartel et al., 2007; Acemoglu et al., 2007; Hoffman et al., 2017; Bloom et al., 2014; Michaels et al., 2014; Garicano and Rossi-Hansberg, 2015; Acemoglu and Restrepo, 2020). Many of these studies, particularly those focused on information technologies, find evidence that IT complements higher-skill workers (Akerman et al., 2015; Taniguchi and Yamada, 2022). Bartel et al. (2007) shows that firms that adopt IT tend to use more skilled labor and increase skill requirements for their workers. Acemoglu and Restrepo (2020) study the diffusion of robots and find that the negative employment effects of robots are most pronounced for workers in blue-collar occupations and those with less than a college education. In contrast, we study a different type of technology—generative AI—and find evidence that it most effectively augments lower-skill workers.”

A Guide to Fine-Tuning Pretrained Language Models for Specific Use Cases

Over the past half-year, large language models (LLMs) like ChatGPT have proven remarkably useful for a wide range of tasks, including machine translation, code analysis, and customer interactions in places like contact centers.

For all this power and flexibility, however, it is often still necessary to use fine-tuning to get an LLM to generate high-quality output for specific use cases.

Today, we’re going to do a deep dive into this process, understanding how these models work, what fine-tuning is, and how you can leverage it for your business.

What is a Pretrained Language Model?

First, let’s establish some background context by tackling the question of what pretrained models are and how they work.

The “GPT” in ChatGPT stands for “generative pretrained transformer”, and this gives us a clue as to what’s going on under the hood. ChatGPT is a generative model, meaning its purpose is to create new output; it’s pretrained, meaning that it has already seen a vast amount of text data by the time end users like us get our hands on it; and it’s a transformer, which refers to the fact that it’s built out of billions of transformer modules stacked into layers.

If you’re not conversant in the history of machine learning it can be difficult to see what the big deal is, but pretrained models are a relatively new development. Once upon a time in the ancient past (i.e. 15 or 20 years ago), it was an open question as to whether engineers would be able to pretrain a single model on a dataset and then fine-tune its performance, or whether they would need to approach each new problem by training a model from scratch.

This question was largely resolved around 2013, when image models trained on the ImageNet dataset began sweeping competitions left and right. Since then it has become more common to use pretrained models as a starting point, but we want to emphasize that this approach does not always work. There remain a vast number of important projects for which building a bespoke model is the only way to go.

What is Transfer Learning?

Transfer learning refers to when an agent or system figures out how to solve one kind of problem and then uses this knowledge to solve a different kind of problem. It’s a term that shows up all over artificial intelligence, cognitive psychology, and education theory.

Author, chess master, and martial artist Josh Waitzkin captures the idea nicely in the following passage from his blockbuster book, The Art of Learning:

“Since childhood I had treasured the sublime study of chess, the swim through ever-deepening layers of complexity. I could spend hours at a chessboard and stand up from the experience on fire with insight about chess, basketball, the ocean, psychology, love, art.”

Transfer learning is a broader concept than pretraining, but the two ideas are closely related. In machine learning, competence can be transferred from one domain (generating text) to another (translating between natural languages or creating Python code) by pretraining a sufficiently large model.

What is Fine-Tuning A Pretrained Language Model?

Fine-tuning a pretrained language model occurs when the model is repurposed for a particular task by being shown illustrations of the correct behavior.

If you’re in a whimsical mood, for example, you might give ChatGPT a few dozen limericks so that its future output always has that form.

It’s easy to confuse fine-tuning with a few other techniques for getting optimum performance out of LLMs, so it’s worth getting clear on terminology before we attempt to give a precise definition of fine-tuning.

Fine-Tuning a Language Model v.s. Zero-Shot Learning

Zero-shot learning is whatever you get out of a language model when you feed it a prompt without making any special effort to show it what you want. It’s not technically a form of fine-tuning at all, but it comes up in a lot of these conversations so it needs to be mentioned.

(NOTE: It is sometimes claimed that prompt engineering counts as zero-shot learning, and we’ll have more to say about that shortly.)

Fine-Tuning a Language Model v.s. One-Shot Learning

One-shot learning is showing a language model a single example of what you want it to do. Continuing our limerick example, one-shot learning would be giving the model one limerick and instructing it to format its replies with the same structure.

Fine-Tuning a Language Model v.s. Few-Shot Learning

Few-shot learning is more or less the same thing as one-shot learning, but you give the model several examples of how you want it to act.

How many counts as “several”? There’s no agreed-upon number that we know about, but probably 3 to 5, or perhaps as many as 10. More than this and you’re arguably not doing “few”-shot learning anymore.

Fine-Tuning a Language Model v.s. Prompt Engineering

Large language models like ChatGPT are stochastic and incredibly sensitive to the phrasing of the prompts they’re given. For this reason, it can take a while to develop a sense of how to feed the model instructions such that you get what you’re looking for.

The emerging discipline of prompt engineering is focused on cultivating this intuitive feel. Minor tweaks in word choice, sentence structure, etc. can have an enormous impact on the final output, and prompt engineers are those who have spent the time to learn how to make the most effective prompts (or are willing to just keep tinkering until the output is correct).

Does prompt engineering count as fine-tuning? We would argue that it doesn’t, primarily because we want to reserve the term “fine-tuning” for the more extensive process we describe in the next few sections.

Still, none of this is set in stone, and others might take the opposite view.

Distinguishing Fine-Tuning From Other Approaches

Having discussed prompt engineering and zero-, one-, and few-shot learning, we can give a fuller definition of fine-tuning.

Fine-tuning is taking a pretrained language model and optimizing it for a particular use case by giving it many examples to learn from. How many you ultimately need will depend a lot on your task – particularly how different the task is from the model’s training data and how strict your requirements for its output are – but you should expect it to take on the order of a few dozen or a few hundred examples.

Though it bears an obvious similarity to one-shot and few-shot learning, fine-tuning will generally require more work to come up with enough examples, and you might have to build a rudimentary pipeline that feeds the examples in through the API. It’s almost certainly not something you’ll be doing directly in the ChatGPT web interface.

Contact Us

How Can I Fine-Tune a Pretrained Language Model?

Having gotten this far, we can now turn our attention to what the fine-tuning procedure actually consists in. The basic steps are: deciding what you’re wanting to accomplish, gather the requisite data (and formatting it correctly), feeding it to your model, and evaluating the results.

Let’s discuss each, in turn.

Deciding on Your Use Case

The obvious place to begin is figuring out exactly what it is you want to fine-tune a pretrained model to do.

It may seem as though this is too obvious to be included as its own standalone step, but we’ve singled it out is because you need to think through the specifics of what you’re trying to accomplish. It’s not enough to say “We want to fine-tune this model to write tweets for us”, you have to consider questions like “Should the tone by formal or informal” and “Are we okay with it adding emojis”.

This matters because it informs the data that you gather for fine-tuning. If you feed the model irreverent tweets with lots of s and s, you can expect that to show up in the tweets it generates for you.

Picking a Base Model

Next, you have to figure out which model you want to fine-tune. This is not the place to get into a discussion on all the “foundation” models that are available today, but suffice it to say that HuggingFace, OpenAI, Anthropic, and the other companies in the pretrained model space offer a variety of different options. BigScience has BLOOM and T0pp models, OpenAI has Ada, Babbage, Da Vinci, and Curie, and so on.

Different models have different fine-tuning requirements, so this choice will impact how you structure your pipeline later.

Gathering and Formatting the Fine-Tuning Data

The heart of fine-tuning a pretrained model is gathering the samples from which it will learn. This is more or less exactly what it sounds like: you compile a list of prompts and desired responses. This is the data that will adjust the model’s internal parameters, optimizing it for your use case.

AI educator Dave Shapiro has a tutorial on fine-tuning GPT-3 to be a tutor for younger students, and here is one of the training samples he used (“TIM” is the name he’s using for the tutor):

User: Hey TIM, I’m interested in learning more about Charles Darwin. Can you tell me a bit about him?

TIM: Charles Darwin was an English naturalist and geologist who developed the theory of evolution by natural selection. He was born into a wealthy family and was able to attend the prestigious Shrewsbury School. After graduation, he enrolled at the University of Edinburgh to study medicine, but he soon dropped out and enrolled at Christ’s College, Cambridge.

User: Interesting! I didn’t know he was born into a wealthy family.

TIM: Yes, his family was very wealthy and his father was a doctor. Darwin’s grandfather, Erasmus Darwin, was a well-known scientist and wrote several books on botany and evolution.

User: Wow, I had no idea. What made Darwin decide to drop out of medical school and study at Cambridge?

TIM: Darwin was interested in natural history and had developed a keen interest in taxidermy. After dropping out of medical school, he traveled to Wales with some friends to study geology. It was during this time that he began to develop his theory of evolution.

One thing you’ll note is that this sample is pretty long. It could take a while to come up with hundreds of these to fine-tune your model, but that’s actually something a generative pretrained language model like ChatGPT can help with. Just be sure you have a human go over the samples and check them for accuracy, or you risk compromising the quality of your outputs.

Another thing to think about is how you’ll handle adversarial behavior and edge cases. If you’re training a conversational AI chatbot for a contact center, for example, you’ll want to include plenty of instances of the model calmly and politely responding to an irate customer. That way, your output will be similarly calm and polite.

Lastly, you’ll have to format the fine-tuning data according to whatever specifications are required by the base model you’re using. It’ll probably be something similar to JSON, but check the documentation to be sure.

Feeding it to Your Model

Now that you’ve got your samples ready, you’ll have to give them to the model for fine-tuning. This will involve you feeding the examples to the model via its API and waiting until the process has finished.

What is the Difference Between Fine-Tuning and a Pretrained Model?

A pretrained model is one that has been previously trained on a particular dataset or task, and fine-tuning is getting that model to do well on a new task by showing it examples of the output you want to see.

Pretrained models like ChatGPT are often pretty good out of the box, but if you’re wanting it to create legal contracts or work with highly-specialized scientific vocabulary, you’ll likely need to fine-tune it.

Should You Fine-Tune a Pretrained Model For Your Business?

Generative pretrained language models like ChatGPT and Bard have already begun to change the way businesses like contact centers function, and we think this is a trend that is likely to accelerate in the years ahead.

If you’ve been intrigued by the possibility of fine-tuning a pretrained model to supercharge your enterprise, then hopefully the information contained in this article gives you some ideas on how to begin.

Another option is to leverage the power of the Quiq platform. We’ve built a best-in-class conversational AI system that can automate substantial parts of your customer interactions (without you needing to run your own models or set up a fine-tuning pipeline.)

To see how we can help, schedule a demo with us today!

Request A Demo

Brand Voice And Tone Building With Prompt Engineering

Key Takeaways

  • Prompt engineering is essential for shaping AI output. Small changes in wording, context, or tone in a prompt can produce vastly different results, making prompt design a core skill for guiding generative models.
  • Prompts have distinct components that improve reliability. Effective prompts include: background context, clear instructions, example data, and output constraints to steer the model’s behavior.
  • There are risks and challenges tied to brand-level content. LLMs can hallucinate, stray in tone or style, or violate compliance constraints. Mitigations include instructing “what not to do,” iterative refinement, and human oversight.
  • Prompt engineering supports marketing tasks, but isn’t a silver bullet. You can use AI for idea generation, background research, and drafting, but models are best used as aids, not full replacements for skilled human writers.

Artificial intelligence tools like ChatGPT are changing the way strategists are building their brands.

But with the staggering rate of change in the field, it can be hard to know how to utilize its full potential. Should you hire an engineering team? Pay for a subscription and do it yourself?

The truth is, it depends. But one thing you can try is prompt engineering, a term that refers to carefully crafting the instructions you give to the AI to get the best possible results.

In this piece, we’ll cover the basics of prompt engineering and discuss the many ways in which you can build your brand voice with generative AI.

What is Prompt Engineering?

As the name implies, generative AI refers to any machine learning (ML) model whose primary purpose is to generate some output. There are generative AI applications for creating new images, text, code, and music.

There are also ongoing efforts to expand the range of outputs generative models can handle, such as a fascinating project to build a high-level programming language for creating new protein structures.

The way you get output from a generative AI model is by prompting it. Just as you could prompt a friend by asking “How was your vacation in Japan,” you can prompt a generative model by asking it questions and giving it instructions. Here’s an example:

“I’m working on learning Java, and I want you to act as though you’re an experienced Java teacher. I keep seeing terms like `public class` and `public static void`. Can you explain to me the different types of Java classes, giving an example and explanation of each?”

When we tried this prompt with GPT-4, it responded with a lucid breakdown of different Java classes (i.e., static, inner, abstract, final, etc.), complete with code snippets for each one.

When Small Changes Aren’t So Small

Mapping the relationship between human-generated inputs and machine-generated outputs is what the emerging field of “prompt engineering” is all about.

Prompt engineering only entered popular awareness in the past few years, as a direct consequence of the meteoric rise of large language models (LLMs). It rapidly became obvious that GPT-3.5 was vastly better than pretty much anything that had come before, and there arose a concomitant interest in the best ways of crafting prompts to maximize the effectiveness of these (and similar) tools.

At first glance, it may not be obvious why prompt engineering is a standalone profession. After all, how difficult could it be to simply ask the computer to teach you Chinese or explain a coding concept? Why have a “prompt engineer” instead of a regular engineer who sometimes uses GPT-4 for a particular task?

A lot could be said in reply, but the big complication is the fact that a generative AI’s output is extremely dependent upon the input it receives.

An example pulled from common experience will make this clearer. You’ve no doubt noticed that when you ask people different kinds of questions you elicit different kinds of responses. “What’s up?” won’t get the same reply as “I notice you’ve been distant recently, does that have anything to do with losing your job last month?”

The same basic dynamic applies to LLMs. Just as subtleties in word choice and tone will impact the kind of interaction you have with a person, they’ll impact the kind of interaction you have with a generative model.

All this nuance means that conversing with your fellow human beings is a skill that takes a while to develop, and that also holds in trying to productively using LLMs. You must learn to phrase your queries in a way that gives the model good context, includes specific criteria as to what you’re looking for in a reply, etc.

Honestly, it can feel a little like teaching a bright, eager intern who has almost no initial understanding of the problem you’re trying to get them to solve. If you give them clear instructions with a few examples they’ll probably do alright, but you can’t just point them at a task and set them loose.

We’ll have much more to say about crafting the kinds of prompts that help you build your brand voice in upcoming sections, but first, let’s spend some time breaking down the anatomy of a prompt.

This context will come in handy later.

What’s In A Prompt?

In truth, there are very few real restrictions on how you use an LLM. If you ask it to do something immoral or illegal it’ll probably respond along the lines of “I’m sorry Dave, but as a large language model I can’t let you do that,” otherwise you can just start feeding it text and seeing how it responds.

That having been said, prompt engineers have identified some basic constituent parts that go into useful prompts. They’re worth understanding as you go about using prompt engineering to build your brand voice.

Context

First, it helps to offer the LLM some context for the task you want done. Under most circumstances, it’s enough to give it a sentence or two, though there can be instances in which it makes sense to give it a whole paragraph.

Here’s an example prompt without good context:

“Can you write me a title for a blog post?”

Most human beings wouldn’t be able to do a whole lot with this, and neither can an LLM. Here’s an example prompt with better context:

“I’ve just finished a blog post for a client that makes legal software. It’s about how they have the best payments integrations, and the tone is punchy, irreverent, and fun. Could you write me a title for the post that has the same tone?”

To get exactly what you’re looking for you may need to tinker a bit with this prompt, but you’ll have much better chances with the additional context.

Instructions

Of course, the heart of the matter is the actual instructions you give the LLM. Here’s the context-added prompt from the previous section, whose instructions are just okay:

“I’ve just finished a blog post for a client that makes legal software. It’s about how they have the best payments integrations, and the tone is punchy, irreverent, and fun. Could you write me a title for the post that has the same tone?”

A better way to format the instructions is to ask for several alternatives to choose from:

“I’ve just finished a blog post for a client that makes legal software. It’s about how they have the best payments integrations, and the tone is punchy, irreverent, and fun. Could you give me 2-3 titles for the blog post that have the same tone?”

Here again, it’ll often pay to go through a couple of iterations. You might find – as we did when we tested this prompt – that GPT-4 is just a little too irreverent (it used profanity in one of its titles.) If you feel like this doesn’t strike the right tone for your brand identity you can fix it by asking the LLM to be a bit more serious, or rework the titles to remove the profanity, etc.

You may have noticed that “keep iterating and testing” is a common theme here.

Example Data

Though you won’t always need to get the LLM input data, it is sometimes required (as when you need it to summarize or critique an argument) and is often helpful (as when you give it a few examples of titles you like.)

Here’s the reworked prompt from above, with input data:

“I’ve just finished a blog post for a client that makes legal software. It’s about how they have the best payments integrations, and the tone is punchy, irreverent, and fun. Could you give me 2-3 titles for the blog post that have the same tone?

Here’s a list of two titles that strike the right tone:
When software goes hard: dominating the legal payments game.
Put the ‘prudence’ back in ‘jurisprudence’ by streamlining your payment collections.”

Remember, LLMs are highly sensitive to what you give them as input, and they’ll key off your tone and style. Showing them what you want dramatically boosts the chances that you’ll be able to quickly get what you need.

Output Indicators

An output indicator is essentially any concrete metric you use to specify how you want the output to be structured. Our existing prompt already has one, and we’ve added another (both are bolded):

“I’ve just finished a blog post for a client that makes legal software. It’s about how they have the best payments integrations, and the tone is punchy, irreverent, and fun. Could you give me 2-3 titles for the blog post that have the same tone? Each title should be approximately 60 characters long.

Here’s a list of two titles that strike the right tone:
When software goes hard: dominating the legal payments game.
Put the ‘prudence’ back in ‘jurisprudence’ by streamlining your payment collections.”

As you go about playing with LLMs and perfecting the use of prompt engineering in building your brand voice, you’ll notice that the models don’t always follow these instructions. Sometimes you’ll ask for a five-sentence paragraph that actually contains eight sentences, or you’ll ask for 10 post ideas and get back 12.

We’re not aware of any general way of getting an LLM to consistently, strictly follow instructions. Still, if you include good instructions, clear output indicators, and examples, you’ll probably get close enough that only a little further tinkering is required.

What Are The Different Types of Prompts You Can Use For Prompt Engineering?

Though prompt engineering for tasks like brand voice and tone building is still in its infancy, there are nevertheless a few broad types of prompts that are worth knowing.

  • Zero-shot prompting: A zero-shot prompt is one in which you simply ask directly for what you want without providing any examples. It’ll simply generate an output on the basis of its internal weights and prior training, and, surprisingly, this is often more than sufficient.
  • One-shot prompting: With a one-shot prompt, you’re asking the LLM for output and giving it a single example to learn from.
  • Few-shot prompting: Few-shot prompts involve a least a few examples of expected output, as in the two titles we provided our prompt when we asked it for blog post titles.
  • Chain-of-thought prompting: Chain-of-thought prompting is similar to few-shot prompting, but with a twist. Rather than merely giving the model examples of what you want to see, you craft your examples such that they demonstrate a process of explicit reasoning. When done correctly, the model will actually walk through the process it uses to reason about a task. Not only does this make its output more interpretable, but it can also boost accuracy in domains at which LLMs are notoriously bad, like addition.

What Are The Challenges With Prompt Engineering For Brand Voice?

We don’t use the word “dazzling” lightly around here, but that’s the best way of describing the power of ChatGPT and the broader ecosystem of large language models.

You would be hard-pressed to find many people who have spent time with one and come away unmoved.

Still, challenges remain, especially when it comes to using prompt engineering for content marketing or building your brand voice.

One well-known problem is the tendency of LLMs to completely make things up, a phenomenon referred to as “hallucination”. The internet is now filled with examples of ChatGPT completely fabricating URLs, books, papers, professions, and individuals. If you use an LLM to create content for your website and don’t thoroughly vet it, you run the risk of damaging your reputation and your brand if it contains false or misleading information.

A related problem is legal or compliance issues that emerge as a result of using an LLM. Though the technology hasn’t been around long enough to get anyone into serious trouble (we suspect it won’t be long), there are now cases in which attorneys have been caught using faulty research generated by ChatGPT or engineering teams have leaked proprietary secrets by feeding meeting notes into it.

Finally, if you’re offering a fine-tuned model to customers to do something like answer questions, you must be very, very careful in delimiting its scope so that it doesn’t generate unwanted behavior. It’s pretty easy to accidentally wander into fraught territory when engaging with an LLM in an open-ended manner, and that’s not even counting users who deliberately try to get it to respond inappropriately.

One potential solution to this problem is by crafting your prompts such that they contain clear instructions about what not to do. You may tell it not to discuss its own rules, not to change its tone, not to speak negatively about anyone, not to argue, etc.

Crafting a prompt that illustrates the correct behavior while explicitly ruling out any incorrect behaviors is a non-trivial task, requiring a great deal of testing and refinement. But it’s one you’ll have to get right if you want to leverage LLMs for your brand voice while avoiding any possible harm down the line.

Prompt Engineering For Marketers

With all this context having been established, you’re no doubt excited about using prompt engineering in your marketing efforts. In this section, we’ll go through a couple of ways you can do that.

Coming up with ideas

One place where LLMs excel is in helping you with ideation. A lot goes into content marketing, but all of that follows from having a solid list of basic concepts to work from.

We tested ChatGPT’s functionality by feeding it this query:

“I’m starting a blog about becoming a data scientist. Could you give me a list of 10 blog post ideas, with a mix of beginner, intermediate, and advanced topics? The titles should be eye-catching but not too aggressive.

Here are two examples of good blog post titles:

‘Data science: what to know when considering a career in this hot new field.’
‘Data science v.s. data engineering: everything you wanted to know’”

And we got back some excellent ideas:

  1. “Demystifying Data Science: A Beginner’s Guide to Jumpstarting Your Career”
  2. “From Novice to Ninja: Essential Skills for Every Data Science Aspirant”
  3. “Unlocking the Power of Python: A Data Scientist’s Best Friend”
  4. “Exploratory Data Analysis: Unveiling Insights Hidden in the Numbers”
  5. “Machine Learning 101: A Gentle Introduction for Data Science Enthusiasts”
  6. “Feature Engineering: Enhancing Your Models Like a Pro”
  7. “The Art of Natural Language Processing: Extracting Meaning from Text”
  8. “Unsupervised Learning: Unveiling Patterns and Structure in Your Data”
  9. “Deep Dive into Neural Networks: Building Advanced Models for Predictive Analytics”
  10. “Data Science in the Real World: Tackling Complex Business Problems with Analytics”

And look, it even generated the correct number of titles! You’ll notice, though that every single one follows the same template, with an initial phrase, a colon, and then a follow-on phrase. That’s because the two examples I gave have this structure, and that’s what it used as a guide.

Performing Background Research

After you’re done coming up with good ideas for building your brand voice, you’ll likely want to do some additional background research on your domain. This, too, is a place where LLMs excel.

There can be a lot of subtlety to this. You might start with something obvious, like “give me a list of the top authors in the keto diet niche”, but you can also get more creative than this. We’ve heard of copywriters who have used GPT-3.5 to generate lengthy customer testimonials for fictional products, or diary entries for i.e. 40-year-old suburban dads who are into DIY home improvement projects.

Regardless, with a little bit of ingenuity, you can generate a tremendous amount of valuable research that can inform your attempts to develop a brand voice.

Be careful, though; this is one place where model hallucinations could be really problematic. Be sure to manually check a model’s outputs before using them for anything critical.

Generating Actual Content

Of course, one place where content marketers are using LLMs more often is in actually writing full-fledged content. We’re of the opinion that GPT-3.5 is still not at the level of a skilled human writer, but it’s excellent for creating outlines, generating email blasts, and writing relatively boilerplate introductions and conclusions.

Getting better at prompt engineering

Despite the word “engineering” in its title, prompt engineering remains as much an art as it is a science. Hopefully, the tips we’ve provided here will help you structure your prompts in a way that gets you good results, but there’s no substitute for practicing the way you interact with LLMs.

One way to approach this task is by paying careful attention to the ways in which small word choices impact the kinds of output generated. You could begin developing an intuitive feel for the relationship between input text and output text by simply starting multiple sessions with ChatGPT and trying out slight variations of prompts. If you really want to be scientific about it, copy everything over into a spreadsheet and look for patterns. Over time, you’ll become more and more precise in your instructions, just as an experienced teacher or manager does.

Prompt Engineering Can Help You Build Your Brand

Advanced AI models like ChatGPT are changing the way SEO, content marketing, and brand strategy are being done. From creating buyer personas to using chatbots for customer interactions, these tools can help you get far more work done with less effort.

But you have to be cautious, as LLMs are known to hallucinate information, change their tone, and otherwise behave inappropriately.

With the right prompt engineering expertise, these downsides can be ameliorated, and you’ll be on your way to building a strong brand. If you’re interested in other ways AI tools can take your business to the next level, schedule a demo of Quiq’s conversational CX platform today!

Contact Us

Frequently Asked Questions (FAQs)

What is prompt engineering?

Prompt engineering is the process of designing clear, strategic inputs that guide AI models to produce accurate, on-brand outputs.

Why does prompt wording matter?

Even small wording or tone changes can dramatically affect AI output quality, consistency, and alignment with brand voice.

How can prompt engineering help define a brand’s tone?

By setting context, examples, and constraints, teams can train AI tools to replicate their unique communication style and maintain voice consistency.

What are common prompting techniques?

Zero-shot, one-shot, few-shot, and chain-of-thought prompting. Each helps models improve reasoning or stay closer to brand-approved examples.

What are the biggest risks with prompt engineering?

AI can hallucinate, misinterpret tone, or generate non-compliant content. In enterprise applications, these risks are amplified – making it essential to establish clear guardrails for data privacy, security, and brand compliance. Regular review and prompt refinement help ensure reliable, accurate, and consistent brand messaging at scale.

LLMs For the Enterprise: How to Protect Brand Safety While Building Your Brand Persona

It’s long been clear that advances in artificial intelligence change how businesses operate. Whether it’s extremely accurate machine translation, chatbots that automate customer service tasks, or spot-on recommendations for music and shows, enterprises have been using advanced AI systems to better serve their customers and boost their bottom line for years.

Today the big news is generative AI, with large language models (LLMs) in particular capturing the imagination. As we’d expect, businesses in many different industries are enthusiastically looking at incorporating these tools into their workflows, just as prior generations did for the internet, computers, and fax machines.

But this alacrity must be balanced with a clear understanding of the tradeoffs involved. It’s one thing to have a language model answer simple questions, and quite another to have one engaging in open-ended interactions with customers involving little direct human oversight.

If you have an LLM-powered application and it goes off the rails, it could be mildly funny, or it could do serious damage to your brand persona. You need to think through both possibilities before proceeding.

This piece is intended as a primer on effectively using LLMs for the enterprise. If you’re considering integrating LLMs for specific applications and aren’t sure how to weigh the pros and cons, it will provide invaluable advice on the different options available while furnishing the context you need to decide which is the best fit for you.

How Are LLMs Being Used in Business?

LLMs like GPT-4 are truly remarkable artifacts. They’re essentially gigantic neural networks with billions of internal parameters, trained on vast amounts of text data from books and the internet.

Once they’re ready to go, they can be used to ask and answer questions, suggest experiments or research ideas, write code, write blog posts, and perform many other tasks.

Their flexibility, in fact, has come as quite a surprise, which is why they’re showing up in so many places. Before we talk about specific strategies for integrating LLMs into your enterprise, let’s walk through a few business use cases for the technology.

Generating (or rewriting) text

The obvious use case is generating text. GPT-4 and related technologies are very good at writing generic blog posts, copy, and emails. But they’ve also proven useful in more subtle tasks, like producing technical documentation or explaining how pieces of code work.

Sometimes it makes sense to pass this entire job on to LLMs, but in other cases, they can act more like research assistants, generating ideas or taking human-generated bullet points and expanding on them. It really depends on the specifics of what you’re trying to accomplish.

Conversational AI

A subcategory of text generation is using an LLM as a conversational AI agent. Clients or other interested parties may have questions about your product, for instance, and many of them can be answered by a properly fine-tuned LLM instead of by a human. This is a use case where you need to think carefully about protecting your brand persona because LLMs are flexible enough to generate inappropriate responses to questions. You should extensively test any models meant to interact with customers and be sure your tests include belligerent or aggressive language to verify that the model continues to be polite.

Summarizing content

Another place that LLMs have excelled is in summarizing already-existing text. This, too, is something that once would’ve been handled by a human, but can now be scaled up to the greater speed and flexibility of LLMs. People are using LLMs to summarize everything from basic articles on the internet to dense scientific and legal documents (though it’s worth being careful here, as they’re known to sometimes include inaccurate information in these summaries.)

Answering questions

Though it might still be a while before ChatGPT is able to replace Google, it has become more common to simply ask it for help rather than search for the answer online. Programmers, for example, can copy and paste the error messages produced by their malfunctioning code into ChatGPT to get its advice on how to proceed. The same considerations around protecting brand safety that we mentioned in the ‘conversational AI’ section above apply here as well.

Classification

One way to get a handle on a huge amount of data is to use a classification algorithm to sort it into categories. Once you know a data point belongs in a particular bucket you already know a fair bit about it, which can cut down on the amount of time you need to spend on analysis. Classifying documents, tweets, etc. is something LLMs can help with, though at this point a fair bit of technical work is required to get models like GPT-3 to reliably and accurately handle classification tasks.

Sentiment analysis

Sentiment analysis refers to a kind of machine learning in which the overall tone of a piece of text is identified (i.e. is it happy, sarcastic, excited, etc.) It’s not exactly the same thing as classification, but it’s related. Sentiment analysis shows up in many customer-facing applications because you need to know how people are responding to your new brand persona or how they like an update to your core offering, and this is something LLMs have proven useful for.

What Are the Advantages of Using LLMs in Business?

More and more businesses are investigating LLMs for their specific applications because they confer many advantages to those that know how to use them.

For one thing, LLMs are extremely well-suited for certain domains. Though they’re still prone to hallucinations and other problems, LLMs can generate high-quality blog posts, emails, and general copy. At present, the output is usually still not as good as what a skilled human can produce.

But LLMs can generate text so quickly that it often makes sense to have the first draft created by a model and tweaked by a human, or to have relatively low-effort tasks (like generating headlines for social media) delegated to a machine so a human writer can focus on more valuable endeavors.

For another, LLMs are highly flexible. It’s relatively straightforward to take a baseline LLM like GPT-4 and feed it examples of behavior you want to see, such as generating math proofs in the form of poetry (if you’re into that sort of thing.) This can be done with prompt engineering or with a more sophisticated pipeline involving the model’s API, but in either case, you have the option of effectively pointing these general-purpose tools at specific tasks.

None of this is to suggest that LLMs are always and everywhere the right tool for the job. Still, in many domains, it makes sense to examine using LLMs for the enterprise.

What Are the Disadvantages of Using LLMs in Business?

For all their power, flexibility, and jaw-dropping speed, there are nevertheless drawbacks to using LLMs.

One disadvantage of using LLMs in business that people are already familiar with is the variable quality of their output. Sometimes, the text generated by an LLM is almost breathtakingly good. But LLMs can also be biased and inaccurate, and their hallucinations – which may not be a big deal for SEO blog posts – will be a huge liability if they end up damaging your brand.

Exacerbating this problem is the fact that no matter how right or wrong GPT-4 is, it’ll format its response in flawless, confident prose. You might expect a human being who doesn’t understand medicine very well to misspell a specialized word like “Umeclidinium bromide”, and that would offer you a clue that there might be other inaccuracies. But that essentially never happens with an LLM, so special diligence must be exercised in fact-checking their claims.

There can also be substantial operational costs associated with training and using LLMs. If you put together a team to build your own internal LLM you should expect to spend (at least) hundreds of thousands of dollars getting it up and running, to say nothing of the ongoing costs of maintenance.

Of course, you could also build your applications around API calls to external parties like OpenAI, who offer their models’ inferences as an endpoint. This is vastly cheaper, but it comes with downsides of its own. Using this approach means being beholden to another entity, which may release updates that dramatically change the performance of their models and materially impact your business.

Perhaps the biggest underlying disadvantage to using LLMs, however, is their sheer inscrutability. True, it’s not that hard to understand at a high level how models like GPT-4 are trained. But the fact remains that no one really understands what’s happening inside of them. It’s usually not clear why tiny changes to a prompt can result in such wildly different outputs, for example, or why a prompt will work well for a while before performance suddenly starts to decline.

Perhaps you just got unlucky – these models are stochastic, after all – or perhaps OpenAI changed the base model. You might not be able to tell, and either way, it’s hard to build robust, long-range applications around technologies that are difficult to understand and predict.

Contact Us

How Can LLMs Be Integrated Into Enterprise Applications?

If you’ve decided you want to integrate these groundbreaking technologies into your own platforms, there are two basic ways you can proceed. Either you can use a 3rd-party service through an API, or you can try to run your own models instead.

In the following two sections, we’ll cover each of these options and their respective tradeoffs.

Using an LLM through an API

An obvious way of leveraging the power of LLMs is by simply including API calls to a platform that specializes in them, such as OpenAI. Generally, this will involve creating infrastructure that is able to pass a prompt to an LLM and return its output.

If you’re building a user-facing chatbot through this method, that would mean that whenever the user types a question, their question is sent to the model and its response is sent back to the user.

The advantages of this approach are that they offer an extremely low barrier to entry, low costs, and fast response times. Hitting an API is pretty trivial as engineering tasks go, and though you’re charged per token, the bill will surely be less than it would be to stand up an entire machine-learning team to build your own model.

But, of course, the danger is that you’re relying on someone else to deliver crucial functionality. If OpenAI changes its terms of service or simply goes bankrupt, you could find yourself in a very bad spot.

Another disadvantage is that the company running the model may have access to the data you’re passing to its models. A team at Samsung recently made headlines when it was discovered they’d been plowing sensitive meeting notes and proprietary source code directly into ChatGPT, where both were viewable by OpenAI. You should always be careful about the data you’re exposing, particularly if it’s customer data whose privacy you’ve been entrusted to protect.

Running Your Own Model

The way to ameliorate the problems of accessing an LLM through an API is to either roll your own or run an open-source model in an environment that you control.

Building the kind of model that can compete with GPT-4 is really, really difficult, and it simply won’t be an option for any but the most elite engineering teams.

Using an open-source LLM, however, is a much more viable option. There are now many such models for text or code generation, and they can be fine-tuned for the specifics of your use case.

By and large, open-source models tend to be smaller and less performant than their closed-source cousins, so you’ll have to decide whether they’re good enough for you. And you should absolutely not underestimate the complexity of maintaining an open-sourced LLM. Though it’s nowhere near as hard as training one from scratch, maintaining an advanced piece of AI software is far from a trivial task.

All that having been said, this is one path you can take if you have the right applications in mind and the technical skills to pull it off.

How to Protect Brand Safety While Building Your Brand Persona

Throughout this piece, we’ve made mention of various ways in which LLMs can help supercharge your business while also warning of the potential damage a bad LLM response can do to your brand.

At present, there is no general-purpose way of making sure an LLM only does good things while never doing bad things. They can be startlingly creative, and with that power comes the possibility that they’ll be creative in ways you’d rather them not be (same as children, we suppose.)

Still, it is possible to put together an extensive testing suite that substantially reduces the possibility of a damaging incident. You need to feed the model many different kinds of interactions, including ones that are angry, annoyed, sarcastic, poorly spelled or formatted, etc., to see how it behaves.

What’s more, this testing needs to be ongoing. It’s not enough to run a test suite one weekend and declare the model fit for use, it needs to be periodically re-tested to ensure no bad behavior has emerged.

With these techniques, you should be able to build a persona as a company on the cutting edge while protecting yourself from incidents that damage your brand.

What Is the Future of LLMs and AI?

The business world moves fast, and if you’re not keeping up with the latest advances you run the risk of being left behind. At present, large language models like GPT-4 are setting the world ablaze with discussions of their potential to completely transform fields like customer experience chatbots.

If you want in on the action and you have the in-house engineering expertise, you could try to create your own offering. But if you would rather leverage the power of LLMs for chat-based applications by working with a world-class team that’s already done the hard engineering work, reach out to Quiq to schedule a demo.

Request A Demo

Semi-Supervised Learning: What It Is and How It Works

Key Takeaways

  • Semi-supervised learning combines a small set of labeled data with a large set of unlabeled data to improve model performance.
  • Common methods include self-training (model teaches itself with pseudo-labels), co-training (two models teach each other), and graph-based learning (labels spread through data connections).
  • It’s useful when labeling data is expensive or time-consuming, like in fraud detection, content classification, or image analysis.
  • Semi-supervised learning is different from self-supervised learning (predicting parts of data) and active learning (asking for labels on uncertain data).

From movie recommendations to AI agents as customer service reps, it seems like machine learning (ML) is everywhere. But one thing you may not realize is just how much data is required to train these advanced systems, and how much time and energy goes into formatting that data appropriately.

Machine learning engineers have developed many ways of trying to cut down on this bottleneck, and one of the techniques that has emerged from these efforts is semi-supervised learning.

Today, we’re going to discuss semi-supervised learning, how it works, and where it’s being applied.

What is Semi-Supervised Learning?

Semi-supervised learning (SSL) is an approach to machine learning (ML) that is appropriate for tasks where you have a large amount of data that you want to learn from, only a fraction of which is labeled.

Semi-supervised learning sits somewhere between supervised and unsupervised learning, and we’ll start by understanding these techniques because that will make it easier to grasp how semi-supervised learning works.

  • Supervised learning: refers to any ML setup in which a model learns from labeled data. It’s called “supervised” because the model is effectively being trained by showing it many examples of the right answer.
  • Unsupervised learning: requires no such labeled data. Instead, an unsupervised machine learning algorithm is able to ingest data, analyze its underlying structure, and categorize data points according to this learned structure.

Like we stated previously, semi-supervised learning combines elements of supervised and unsupervised learning. Instead of relying solely on labeled data (like supervised models) or unlabeled data (like unsupervised ones), it uses a small labeled dataset alongside a larger unlabeled one to improve accuracy and efficiency.

This approach is especially useful when labeling data is costly or time-consuming, such as tagging support chats or images. The labeled examples guide the model’s understanding, while the unlabeled data helps it generalize to new patterns. By blending both data types, semi-supervised learning strikes a balance between performance and scalability, making it ideal for AI applications like intent detection, chat automation, and content moderation.

Semi-supervised learning

How Does Semi-Supervised Learning Work?

The three main variants of semi-supervised learning are self-training, co-training, and graph-based label propagation, and we’ll discuss each of these in turn.

Self-training

Self-training is the simplest kind of semi-supervised learning, and it works like this.

A small subset of your data will have labels while the rest won’t have any, so you’ll begin by using supervised learning to train a model on the labeled data. With this model, you’ll go over the unlabeled data to generate pseudo-labels, so-called because they are machine-generated and not human-generated.

Now, you have a new dataset; a fraction of it has human-generated labels while the rest contains machine-generated pseudo-labels, but all the data points now have some kind of label, and a model can be trained on them.

Co-training

Co-training has the same basic flavor as self-training, but it has more moving parts. With co-training, you’re going to train two models on the labeled data, each on a different set of features (in literature, these are called “views”).

If we’re still working on that plant classifier from before, one model might be trained on the number of leaves or petals, while another might be trained on their color.

At any rate, now you have a pair of models trained on different views of the labeled data. These models will then generate pseudo-labels for all the unlabeled datasets. When one of the models is very confident in its pseudo-label (i.e., when the probability it assigns to its prediction is very high), that pseudo-label will be used to update the prediction of the other model, and vice versa.

Let’s say both models come to an image of a rose. The first model thinks it’s a rose with 95% probability, while the other thinks it’s a tulip with a 68% probability. Since the first model seems really sure of itself, its label is used to change the label on the other model.

Think of it like studying a complex subject with a friend. Sometimes a given topic will make more sense to you, and you’ll have to explain it to your friend. Other times, they’ll have a better handle on it, and you’ll have to learn from them.

In the end, you’ll both have made each other stronger, and you’ll get more done together than you would’ve done alone. Co-training attempts to utilize the same basic dynamic with ML models.

Graph-based semi-supervised learning

Another way to apply labels to unlabeled data is by utilizing a graph data structure. A graph is a set of nodes (in graph theory, we call them “vertices”) which are linked together through “edges.” The cities on a map would be vertices, and the highways linking them would be edges.

If you put your labeled and unlabeled data on a graph, you can propagate the labels throughout by counting the number of pathways from a given unlabeled node to the labeled nodes.

Imagine that we’ve got our fern and rose images in a graph, together with a bunch of other unlabeled plant images. We can choose one of those unlabeled nodes and count up how many ways we can reach all the “rose” nodes and all the “fern” nodes. If there are more paths to a rose node than a fern node, we classify the unlabeled node as a “rose”, and vice versa. This gives us a powerful alternative means by which to algorithmically generate labels for unlabeled data.

Contact Us

Common Semi-Supervised Applications

The amount of data in the world is increasing at a staggering rate, while the number of human-hours available for labeling it all is increasing at a much less impressive clip. This presents a problem because there’s no end to the places where we want to apply machine learning.

Semi-supervised learning presents a possible solution to this dilemma, and in the next few sections, we’ll describe semi-supervised learning examples in real life.

  • Identifying cases of fraud: In finance, semi-supervised learning can be used to train systems for identifying cases of fraud or extortion.
  • Classifying content on the web: The internet is a big place, and new websites are put up all the time. In order to serve useful search results, it’s necessary to classify huge amounts of this web content, which can be done with semi-supervised learning.
  • Analyzing audio and images: When audio files or image files are generated, they’re often not labeled, which makes it difficult to use them for machine learning. Beginning with a small subset of human-labeled data, however, this problem can be overcome with semi-supervised learning.

What Are the Benefits of Semi-Supervised Learning?

Semi-supervised learning delivers the best of both worlds – strong model performance without the steep cost of fully labeled datasets. Here are a few key benefits:

Key benefits include:

  • Cost Efficiency: Reduces the need for extensive manual labeling, allowing teams to use mostly unlabeled data while still achieving high accuracy.
  • Better Model Generalization: Improves a model’s ability to recognize patterns in new or unseen data by leveraging diverse, unlabeled examples.
  • Enhanced Performance: Even with limited labeled data, semi-supervised models often outperform those trained solely with supervised techniques.
  • Improved Data Utilization: Makes full use of available data resources, turning previously “unused” unlabeled data into valuable training material.
  • Scalability: Easily adapts as new unlabeled data becomes available, allowing continuous improvement without repeating costly labeling cycles.
  • Faster Deployment: Requires less upfront labeled data, meaning models can reach production readiness sooner and refine over time with additional feedback.

In essence, semi-supervised learning helps organizations maximize the value of their data – achieving stronger, more adaptable AI systems without the traditional bottlenecks of data labeling and cost.

When Should You Use Semi-Supervised Learning (and When Not To)

Semi-supervised learning is most effective when you have a large pool of unlabeled data but only a small amount of labeled data. It’s designed for situations where labeling is expensive or time-consuming, but unlabeled data is plentiful and easy to collect.

When to Use It

  • Labeled Data is Scarce or Costly:  Ideal when labeling requires specialized expertise or significant manual effort.
  • Unlabeled Data is Abundant: Works well when you have vast quantities of raw data – like chat transcripts, audio clips, or product images.
  • To Prevent Overfitting: Adding unlabeled data gives the model more context, helping it generalize better and avoid overfitting to a limited labeled set.
  • For Unstructured Data: Especially effective for text, image, and audio datasets where manual labeling is challenging or subjective.
  • When Supervised or Unsupervised Learning Falls Short: If supervised learning lacks enough labels for accuracy and unsupervised learning lacks direction, semi-supervised learning strikes the balance between structure and scale.

When to Not Use It

  • When Labeled Data is Already Plentiful: If you have a lot of high-quality labeled data, fully supervised learning will usually yield better and more predictable results than semi-supervised learning.
  • For Highly Regulated or Sensitive Applications: In domains like finance or healthcare compliance, the uncertainty of unlabeled data may pose additional risk unless carefully validated.
  • When Data Quality Is Poor: If your unlabeled dataset contains errors, duplicates, or inconsistencies, the model can amplify those problems rather than learn from them.

In short: use semi-supervised learning when you have lots of data, little labeling capacity, and need to scale efficiently. Avoid it when your labeled data is already sufficient or your unlabeled data isn’t reliable enough to guide the model.

The Bottom Line

Semi-supervised learning empowers businesses to get more value out of the data they already have, without waiting for fully labeled datasets. By combining a small amount of labeled data with a much larger pool of unlabeled data, teams can build smarter, faster, and more adaptive models that continually improve over time.

That same principle powers Quiq’s Agentic AI – a solution designed to help enterprise teams leverage their own data to train intelligent, context-aware AI agents. With built-in automation and learning loops, Quiq’s platform helps businesses scale insights, personalize customer interactions, and accelerate innovation with no endless data labeling required.

If you’re exploring how to make your data work harder for you, it’s the perfect time to see what’s possible with Quiq’s AI Studio.

 Frequently Asked Questions (FAQs)

What is semi-supervised learning in simple terms?

Semi-supervised learning is a machine learning approach that uses a small amount of labeled data and a large amount of unlabeled data to train models more efficiently.

How is semi-supervised learning different from supervised and unsupervised learning?

Supervised learning relies only on labeled data, while unsupervised learning uses none. Semi-supervised learning blends both, improving accuracy when labeling is costly or limited.

What are some real-world examples of semi-supervised learning?

It’s used in areas like fraud detection, medical image analysis, customer sentiment classification, and speech recognition – where gathering labeled data is time-consuming or expensive.

What are the main techniques in semi-supervised learning?

Common methods include self-training (the model generates pseudo-labels), co-training (multiple models teach one another), and graph-based algorithms (labels spread through data relationships).

Why is semi-supervised learning important?

It helps businesses and researchers make better use of large unlabeled datasets, reducing labeling costs while still achieving high model accuracy.

Request A Demo

A Deep Dive on Large Language Models—And What They Mean For You

The release of OpenAI’s ChatGPT in late 2022 has utterly transformed the conversation around artificial intelligence. Whether it’s generating functioning web apps with just a few prompts, writing Spanish-language children’s stories about the blockchain in the style of Dr. Suess, or opining on the virtues and vices of major political figures, its ability to generate long strings of coherent, grammatically-correct text is shocking.

Seen in this light, it’s perhaps no surprise that ChatGPT has achieved such a staggering rate of growth. The application garnered a million users less than a week after its launch.

It’s believed that by January of 2023, this figure had climbed to 100 million monthly users, blowing past the adoption rates of TikTok (which needed nine months to get to this many monthly users) and Instagram (which took over two years.)

Naturally, many have become curious about the “large language model” (LLM) technology that makes ChatGPT and similar kinds of disruptive generative AI possible.

In this piece, we’re going to do a deep dive on LLMs, exploring how they’re trained, how they work internally, and how they might be deployed in your business. Our hope is that this will arm Quiq’s customers with the context they need to keep up with the ongoing AI revolution.

What Are Large Language Models?

LLMs are pieces of software with the ability to interact with and generate a wide variety of text. In this discussion, “text” is used very broadly to include not just existing natural language but also computer code.

A good way to begin exploring this subject is to analyze each of the terms in “large language model”, so let’s do that now. Here’s our large language models overview:

LLMs Are Models.

In machine learning (ML), you can think of a model as being a function that maps inputs to outputs. Early in their education, for example, machine learning engineers usually figure out how to fit a linear regression model that does something like predict the final price of a house based on its square footage.

They’ll feed their model a bunch of data points that look like this:

House 1: 800 square feet, $120,000
House 2: 1000 square feet, $175,000
House 3: 1500 square feet, $225,000

And the model learns the relationship between square footage and price well enough to roughly predict the price of homes that weren’t in its training data.

We’ll have a lot more to say about how LLMs are trained in the next section. For now, just be aware that when you get down to it, LLMs are inconceivably vast functions that take the input you feed them and generate a corresponding output.

LLMs Are Large.

Speaking of vastness, LLMs are truly gigantic. As with terms like “big data”, there isn’t an exact, agreed-upon point at which a basic language model becomes a large language model. Still, they’re plenty big enough to deserve the extra “L” at the beginning of their name.

There are a few ways to measure the size of machine learning models, but one of the most common is by looking at their parameters.

In the linear regression model just discussed, there would be only one parameter, for square footage. We could make our model better by also showing it the home’s zip code and the number of bathrooms it has, and then it would have three parameters.

It’s hard to say how big most real systems are because that information isn’t usually made public, but a linear regression model might have dozens of parameters, and a basic neural network could range from a few hundred thousand to a few tens of millions of parameters.

GPT-3 has 175 billion parameters, and Google’s Minerva model has 540 billion parameters. It isn’t known how many parameters GPT-4 has, but it’s almost certainly more.

(Note: I say “almost” certainly because better models don’t always have more parameters. They usually do, but it’s not an ironclad rule.)

LLMs Focus On Language.

ChatGPT and its cousins take text as input and produce text as output. This makes them distinct from some of the image-generation tools that are on the market today, such as DALL-E and Midjourney.

It’s worth noting, however, that this might be changing in the future. Though most of what people are using GPT-4 to do revolves around text, technically, the underlying model is multimodal. This means it can theoretically interact with image inputs as well. According to OpenAI’s documentation, support for this feature should arrive in the coming months.

How Are Large Language Models Trained?

Like all machine learning models, LLMs must be trained. We don’t actually know exactly how OpenAI trained the latest GPT models, as they’ve kept those details secret, but we can make some broad comments about how systems like these are generally trained.

Before we get into technical details, let’s frame the overall task that LLMs are trying to perform as a guessing game. Imagine that I start a sentence and leave out the last word, asking you to provide a guess as to how it ends.

Some of these would be fairly trivial; everyone knows that “[i]t was the best of times, it was the worst of _____,” ends with the word “times.” Others would be more ambiguous; “I stopped to pick a flower, and then continued walking down the ____,” could plausibly end with words like “road”, “street”, or “trail.”

For still others, there’d be an almost infinite number of possibilities; “He turned to face the ___,” could end with anything from “firehose” to “firing squad.”

But how is it that you’re able to generate these guesses? How do you know what a good ending to a natural-language sentence sounds like?

The answer is that you’ve been “training” for this task your entire life. You’ve been listening to sentences, reading and writing sentences, or thinking in sentences for most of your waking hours, and have therefore developed a sense of how they work.

The process of training an LLM differs in many specifics, but at a high level, it’s learning to do the same thing. A model like GPT-4 is fed gargantuan amounts of textual data from the internet or other sources, and it learns a statistical distribution that allows it to predict which words come next.

At first, it’ll have no idea how to end the sentence “[i]t was the best of times, it was the worst of ____.” But as it sees more and more examples of human-generated textual content, it improves. It discovers that when someone writes “red, orange, yellow, green, blue, indigo, ______”, the next sequence of letters is probably “violet”. It begins to be more sensitive to context, discovering that the words “bat”, “diamond”, and “plate” are probably occurring in a discussion about baseball and not the weirdest Costco you’ve ever been to.

It’s precisely this nuance that makes advanced LLMs suitable for applications such as customer service.

They’re not simply looking up pre-digested answers to questions, they’re learning a function big enough to account for the subtleties of a specific customer’s specific problem. They still don’t do this job perfectly, but they’ve made remarkable progress, which is why so many companies are looking at integrating them.

Getting into the GPT-weeds

The discussion so far is great for building a basic intuition for how LLMs are trained, but this is a deep dive, so let’s talk technical specifics.

Though we don’t know much about GPT-4, earlier models like GPT and GPT-2 have been studied in great detail. By understanding how they work, we can cultivate a better grasp of cutting-edge models.

When an LLM is trained, it’s fed a great deal of text data. It will grab samples from this data, and try to predict the next token in its sample. To make our earlier explanation easier to understand we implied that a token is a word, but that’s not quite right. A token can be a word, an individual letter, or “sub words”, i.e. small chunks of letters and spaces.

This process is known as “self-supervised learning” because the model can assess its own accuracy by checking its predicted next token against the actual next token in the dataset it’s training on.

At first, its accuracy is likely to be very bad. But as it trains its internal parameters (remember those?) are tuned with an optimizer such as stochastic gradient descent, and it gets better.

One of the crucial architectural building blocks of LLMs is the transformer.

A full discussion of transformers is well beyond the scope of this piece, but the most important thing to know is that transformers can use “attention” to model more complex relationships in language data.

For example: in a sentence like “the dog didn’t chase the cat because it was too tired”, every human knows that “it” refers to the dog and not the cat. Earlier approaches to building language models struggled with such connections in sentences that were longer than a few words, but using attention, transformers can handle them with ease.

In addition to this obvious advantage, transformers have found widespread use in deep learning applications such as language models because they’re easy to parallelize, meaning that training times can be reduced.

Building On Top Of Large Language Models

Out-of-the-box LLMs are pretty powerful, but it’s often necessary to tweak them for specific applications such as enterprise bots. There are a few ways of doing this, and we’re going to confine ourselves to two major approaches: fine-tuning and prompt engineering.

First up, it’s possible to fine-tune some of these models. Fine-tuning an LLM involves providing a training set and letting the model update its internal weights to perform better on a specific task. 

Next, the emerging discipline of prompt engineering refers to the practice of systematically crafting the text fed to the model to get it to better approximate the behavior you want.

LLMs can be surprisingly sensitive to small changes in words, phrases, and context; the job of a prompt engineer, therefore, is to develop a feel for these sensitivities and construct prompts in a way that maximizes the performance of the LLM.

Contact Us

How Can Large Language Models Be Used In Business?

There is a new gold rush in applying AI to business use cases.

For starters, given how good they are at generating text, they’re being deployed to write email copy, blog posts, and social media content, to text or survey customers, and to summarize text.

LLMs are also being used in software development. Tools like Replit’s Ghostwriter are already dramatically improving developer productivity in a variety of domains, from web development to machine learning.

What Are The “LLiMitations” Of LLMs?

For all their power, LLMs have turned out to have certain well-known limitations. To begin with, LLMs are capable of being toxic, harmful, aggressive, and biased.

Though heroic efforts have been made to train this behavior out with techniques such as reinforcement learning from human feedback, it’s possible that it can reemerge under the right conditions.

This is something you should take into account before giving customers access to generative AI offerings.

Another oft-discussed limitation is the tendency of LLMs to “invent” facts. Remember, an LLM is just trying to predict sequences of tokens, and there’s no reason it couldn’t output a sequence of text like “Dr. Micha Sartorius, professor of applied computronics at Santa Grega University”, even though this person, field, and university are fictitious.

This, too, is something you should be cognizant of before letting customers interact with generative AI.

At Quiq, we harness the power of LLMs’ language-generating capabilities, while putting strict guardrails in place to prevent these risks that are inherent to public-facing generative AI.

Should You Be Using Large Language Models?

LLMs are a remarkable engineering achievement, having been trained on vast amounts of human text and able to generate whole conversations, working code, and more.

No doubt, some of the fervor around LLMs will end up being hype. Nevertheless, the technology has been shown to be incredibly powerful, and it is unlikely to go anywhere. If you’re interested in learning about how to integrate generative AI applications like Quiq’s into your business, schedule a demo with us today!

Request A Demo

Prompt Engineering: What Is It—And How Can You Use It To Get The Most Out Of AI?

Think back to your school days. You come into class only to discover a timed writing assignment on the agenda. You have to respond to the provided prompt, quickly and accurately and will be graded against criteria like grammar, vocabulary, factual accuracy, and more.

Well, that’s what natural language processing (NLP) software like ChatGPT does daily. Except, when a computer steps into the classroom, it can’t raise its hand to ask questions.

That’s why it’s so important to provide AI with a prompt that’s clear and thorough enough to produce the best possible response.

What is AI prompt engineering?

A prompt can be a question, a phrase, or several paragraphs. The more specific the prompt is, the better the response.

Writing the perfect prompt — prompt engineering — is critical to ensure the NLP response is not only factually correct but crafted exactly as you intended to best deliver information to a specific target audience.

You can’t use low-quality ingredients in the kitchen to produce gourmet cuisine — and you can’t expect AI to, either.

Let’s revisit your old classroom again: did you ever have a teacher provide a prompt where you just weren’t really sure what the question was asking? So, you guessed a response based on the information provided, only to receive a low score.

In the post-exam review, the teacher explained what she was actually looking for and how the question was graded. You sat there thinking, “If I’d only had that information when I was given the prompt!”

Well, AI feels your pain.

The responses that NLP software provides are only as good as the input data. Learning how to communicate with AI to get it to generate desired responses is a science, and you can learn what works best through trial and error to continuously optimize your prompts.

Prompts that fail to deliver, and why.

What’s the root of the issue of prompt engineering gone wrong? It all comes down to incomplete, inconsistent, or incorrect data.

Even the most advanced AI using neural networks and deep learning techniques still needs to be fed the right information in the right way. When there is too little context provided, not enough examples, conflicting information from different sources, or major typos in the prompt, the AI can generate responses that are undesirable or just plain wrong.

How to craft the perfect prompt.

Here are some important factors to take into consideration for successful prompt engineering.

Clear instructions

Provide specific instructions and multiple examples to illustrate precisely what you want the AI to do. Words like “something,” “things,” “kind of,” and “it” (especially when there are multiple subjects within one sentence) can be indicators that your prompt is too vague.

Try to use descriptive nouns that refer to the subject of your sentence and avoid ambiguity.

  • Example (ambiguity): “She put the book on the desk; it was blue.”
  • What does “it” refer to in this sentence? Is the book blue, or is the desk blue?

Simple language

Use plain language, but avoid shorthand and slang. When in doubt, err on the side of overcommunicating and you can use trial and error to determine what shorthand approaches work for future, similar prompts. Avoid internal company or industry-specific jargon when possible, and be sure to clearly define any terms you may want to integrate.

Quality data

Give examples. Providing a single source of truth — for example, an article you want the AI to respond to questions about — will have a higher probability of returning factually correct responses based on the provided article.

On that note, teach the API how you want it to return responses when it doesn’t know the answer, such as “I don’t know,” “not enough information,” or simply “?”.

Otherwise, the AI may get creative and try to come up with an answer that sounds good but has no basis in reality.

Persona

Develop a persona for your responses. Should the response sound as though it’s being delivered by a subject matter expert or would it be better (legally or otherwise) if the response was written by someone who was only referring to subject matter experts (SMEs)?

  • Example (direct from SMEs): “Our team of specialists…”
  • Example (referring to SMEs): “Based on recent research by experts in the field…”

Voice, style, and tone

Decide how you want to represent your brand’s voice, which will largely be determined by your target audience. Would your customer be more likely to trust information that sounds like it was provided by an academic, or would a colloquial voice be more relatable?

Do you want a matter-of-fact, encyclopedia-type response, a friendly or supportive empathetic approach, or is your brand’s style more quick-witted and edgy?

With the right prompt, AI can capture all that and more.

Quiq takes prompt engineering out of the equation.

Prompt engineering is no easy task. There are many nuances to language that can trick even the most advanced NLP software.

Not only are incorrect AI responses a pain to identify and troubleshoot, but they can also hurt your business’s reputation if they aren’t caught before your content goes public.

On the other hand, manual tasks that could be automated with NLP waste time and money that could be allocated to higher-priority initiatives.

Quiq uses large language models (LLMs) to continuously optimize AI responses to your company’s unique data. With Quiq’s world-class Conversational AI platform, you can reduce the burden on your support team, lower costs, and boost customer satisfaction.

Contact Quiq today to see how our innovative LLM-built features improve business outcomes.

Contact Us

Agent Efficiency: How to Collect Better Metrics

Your contact center experience has a direct impact on your bottom line. A positive customer experience can nudge them toward a purchase, encourage repeat business, or turn them into loyal brand advocates.

But a bad run-in with your contact center? That can turn them off of your business for life.

No matter your industry, customer service plays a vital role in financial success. While it’s easy to look at your contact center as an operational cost, it’s truly an investment in the future of your business.

To maximize your return on investment, your contact center must continually improve. That means tracking contact center effectiveness and agent efficiency is critical.

But before you make any changes, you need to understand how your customer service center currently operates. What’s working? What needs improvement? And what needs to be cut?

Let’s examine how contact centers can measure customer service performance and boost efficiency.

What metrics should you monitor?

The world of contact center metrics is overwhelming—to say the least. There are hundreds of data points to track to assess customer satisfaction, agent effectiveness, and call center success.

But to make meaningful improvements, you need to begin with a few basic metrics. Here are three to start with.

1. Response time.

Response time refers to how long, on average, it takes for a customer to reach an agent. Reducing the amount of time it takes to respond to customers can increase customer satisfaction and prevent customer abandonment.

Response time is a top factor for customer satisfaction, with 83% expecting to interact with someone immediately when they contact a company, according to Salesforce’s State of the Connected Customer report.

When using response time to measure agent efficiency, have different target goals set for different channels. For example, a customer calling in or using web chat will expect an immediate response, while an email may have a slightly longer turnaround. Typically, messaging channels like SMS text fall somewhere in between.

If you want to measure how often your customer service team meets your target response times, you can also track your service level. This metric is the percentage of messages and calls answered by customer service agents within your target time frame.

2. Agent occupancy.

Agent occupancy is the amount of time an agent spends actively occupied on a customer interaction. It’s a great way to quickly measure how busy your customer service team is.

An excessively low occupancy suggests you’ve hired more agents than contact volume demands. At the same time, an excessively high occupancy may lead to agent burnout and turnover, which have their own negative effects on efficiency.

3. Customer satisfaction.

The most important contact center performance metric, customer satisfaction, should be your team’s main focus. Customer satisfaction, or CSAT, typically asks customers one question: How satisfied are you with your experience?

Customers respond using a numerical scale to rate their experience from very dissatisfied (0 or 1) to very satisfied (5). However, the range can vary based on your business’s preferences.

You can calculate CSAT scores using this formula:

Number of satisfied customers ÷ total number of respondents x 100 = CSAT

CSAT’s a great first metric to measure since it’s extremely important in measuring your agents’ effectiveness, and it’s easy for customers to complete.

There are lots of options for measuring different aspects of customer satisfaction, like customer effort score and Net Promoter Score®. Whichever you choose, ensure you use it consistently for continuous customer input.

Bonus tip: Capturing customer feedback and agent performance data is easier with contact center software. Not only can the software help with customer relationship management, but it can facilitate customer surveys, track agent data, and more.

Contact Us

How to assess contact center metrics.

Once you’ve measured your current customer center operations, you can start assessing and taking action to improve performance and boost customer satisfaction. But looking at the data isn’t as easy as it seems. Here are some things to keep in mind as you start to base decisions on your numbers.

Figure out your reporting methods.

How will you gather this information? What timeframes will you measure? Who’s included in your measurements? These are just a few questions you need to answer before you can start analyzing your data.

Contact center software, or even more advanced conversational AI platforms like Quiq, can help you track metrics and even put together reports that are ready for your management team to analyze and take action on.

Analyze data over time.

When you’re just starting out, it can be hard to contextualize your data. You need benchmarks to know whether your CSAT rating or occupancy rates are good or bad. While you can start with industry benchmarks, the most effective way to analyze data is to measure it against yourself over periods of time.

It takes months or even years for trends to reveal themselves. Start with comparative measurements and then work your way up. Month-over-month data or even quarter-over-quarter can give you small windows into what’s working and what’s not working. Just leave the big department-wide changes until you’ve collected enough data for it to be meaningful.

Don’t forget about context.

You can’t measure contact center metrics in a silo. Make sure you look at what’s going on throughout your organization and in the industry as a whole before making any changes. For example, a drop in customer response time might have to do with the influx of messages caused by a faulty product.

While collecting the data is easy, analyzing it and drawing conclusions is much more difficult. Keep the whole picture in mind when making any important decisions.
How to improve call center agent efficiency.
Now that you have the numbers, you can start making changes to improve your agent efficiency. Start with these tips.

Make incremental changes.

Don’t be tempted to make wide-reaching changes across your entire contact center team when you’re not happy with the data. Select specific metrics to target and make incremental changes that move the needle in the right direction.

For example, if your agent occupancy rates are high, don’t rush to add new members to your team. Instead, see what improvements you can make to agent efficiency. Maybe there’s some call center software you can invest in that’ll improve call turnover. Or perhaps all your team needs is some additional training on how to speed up their customer interactions. No matter what you do, track your changes.

Streamline backend processes.

Agents can’t perform if they’re constantly searching for answers on slow intranets or working with outdated information. Time spent fighting with old technology is time not spent serving your contact center customers.

Now’s the perfect time to consider a conversational platform that allows your customers to reach out using the preferred channel but still keeps the backend organized and efficient for your team.

Agents can bounce back and forth between messaging channels without losing track of conversations. Customers get to chat with your brand how they want, where they want, and your team gets to preserve the experience and deliver snag-free customer service.

Improve agent efficiency with Quiq’s Conversational AI Platform

If you want to improve your contact center’s efficiency and customer satisfaction ratings, Quiq’s conversational customer engagement software is your new best friend.

Quiq’s software enables agents to manage multiple conversations simultaneously and message customers across channels, including text and web chat. By giving customers more options for engaging with customer service, Quiq reduces call volume and allows contact center agents to focus on the conversations with the highest priority.

The Rise of Conversational AI: Why Businesses Are Embracing It

Movies may have twisted our expectations of artificial intelligence—either giving us extremely high expectations or making us think it’s ready to wipe out humanity.

But the reality isn’t on those levels. In fact, you’re already using AI in your daily life—but it’s so ingrained in your technology you probably don’t even notice. Netflix and Spotify both use AI to personalize your content recommendations. Siri, Alexa, and Google Assistant use it as well.

Conversational AI, like what Quiq uses to power our chatbots, takes artificial intelligence to the next level. See what it is and how you can use it in your business.

What is conversational AI?

Conversational artificial intelligence (AI) is a collection of technologies that create a human-like experience. It combines natural language processing (NLP), machine learning, and other technologies to enhance streamlined conversations. This can be used in many applications, like chatbots and voice (like Siri and Alexa). The most common use case for conversational AI in the business-to-customer world is through an AI chatbot messaging experience.

Unlike rule-based chatbots, those powered by conversational AI generate responses and adapt to user behavior over time. Rule-based chatbots were also limited to what you put in them—meaning if someone phrased a question differently than you wrote it (or used slang/colloquialisms/etc.), it wouldn’t understand the question. Conversational AI can also help chatbots understand more complex questions.

Putting technical terms in context.

Companies throw around a lot of technical terms when it comes to artificial intelligence, so here are what they mean and how they’re used to improve your business.

Rules-based chatbots: Earlier chatbot iterations (and some current low-cost versions) work mainly through pre-defined rules. Your business (or service provider) writes specific guidelines for the chatbot to follow. For example, when a customer says “Hi,” the chatbot responds, “Hello, how may I help you?”

Another example is when a customer asks about a return. The chatbot is programmed to give a specific response, like, “Here’s a link to the return policy.”

However, the problem with rule-based chatbots is that they can be limiting. It only knows how to handle situations based on the information programmed into it. So if someone says, “I don’t like this product, what can I do?” and you haven’t planned for that question, the chatbot won’t have a response.

Machine learning: Machine learning is a way to combat the problem posed above. Instead of giving the chatbot specific parameters complete with pre-written questions and answers, machine learning helps chatbots make decisions based on the information provided.

Machine learning helps chatbots adapt over time based on customer conversations. Instead of giving the bot specific ways to answer specific questions, you show it the basic rules, and it crafts its own response. Plus, since it means your chatbot is always learning, it gets better the longer you use it.

Natural language processing: As humans and speakers of the English language, we know that there are different ways to ask every question. For example, a customer who wants to know when an item is back in stock may ask, “When is X back in stock?” or they might say, “When will you get X back in?” or even, “When are you restocking X?” Those three questions all mean the same thing, and as humans, we naturally understand that. But a rules-based bot must be told that those mean the same things, or they might not understand it.

Natural language processing (NLP) uses AI technology to help chatbots understand that those questions are all asking the same thing. It also can determine what information it needs to answer your question, like color, size, etc.

NLP also helps chatbots answer questions in a more human-like way. If you want your chatbot to sound more human (and you should), then find one that uses NLP.

Web-based SDK: A web-based SDK (that’s a software development kit for non-developers) is a set of tools and resources developers use to integrate programs (in this case, chatbots) into websites and web-based applications.

What does this mean for your chatbot? Context. When a user says, “I need help with my order,” the chatbot can use NLP to identify “help” and “order.” Then it can look back at previous conversations, pull the customers’ order history, and more—if the data is there.

Contextual conversations are everything in customer service—so this is a big factor in building a successful chatbot using conversational AI. In fact, 70% of customers expect anyone they’re speaking with to have the full context. With a web-based SDK, your chatbot can do that too.

The benefits of conversational AI.

Using chatbots with conversational AI provides benefits across your business, but the clearest wins are in your contact center. Here are three ways chatbots improve your customer service.

24/7 customer support.

Your customer service agents need to sleep, but your conversational AI chatbot doesn’t. A chatbot can answer questions and contain customer issues while your contact center is closed. Any issues they can’t solve, they can pass along to your agents the next day. Not only does that give your customers 24/7 service, but your agents will have less of a backlog when they return to work.

Faster response times.

When your agents are inundated with customers, an AI chatbot can pick up the slack. Send your chatbot in to greet customers immediately, let them know the wait time, or even start collecting information so your agents can get to the root of the problem faster. Chatbots powered with AI can also answer questions and solve easy customer issues, skipping human agents altogether.

For more ways AI chatbots can improve your customer service, read this >

More present customer service agents.

Chatbots can handle low-level customer queries and give agents the time and space to handle more complex issues. Not only will this result in better customer service, but agents will be happier and less stressed overall.

Plus, chatbots can scale during your busy seasons. You’ll save on costs since you won’t have to hire more agents, and the agents you have won’t be overworked.

How to make the most of AI technology.

Unfortunately, you can’t just plug and play with conversational AI and expect to become an AI company. Just like any other technology, it takes prep work and thoughtful implementation to get it right—plus lots of iterations.

Use these tips to make the most of AI technology:

Decide on your AI goals.

How are you planning on using conversational AI? Will it be for marketing? Customer service? All of the above? Think about what your main goals are and use that information to select the right AI partner.

Choose the right conversational AI platform.

Once you’ve decided on how you want to use conversational AI, select the right partner to help you get there. Think about aspects like ease of use, customization, scalability, and budget.

Design your chatbot interactions.

Even with artificial intelligence, you still have to put the work in upfront. What you do and how you do it will vary greatly depending on which platform you go with. Design your chatbot conversations with these things in mind:

  • Your brand voice
  • Personalization
  • Customer service best practices
  • Logical conversation flows
  • Concise messages

Build a partnership between agents and chatbots.

Don’t launch the chatbot independently of your customer service agents. Include them in the training and launch, and start to build a working relationship between the two. Agents and chatbots can work together on customer issues, both popping in and out of the conversation seamlessly. For example, a chatbot can collect information from the customer upfront and pass it to the agent to solve the issue. Then, when the agent is done, they can bring the chatbot back in to deliver a customer survey.

Test and refine.

Sometimes, you don’t know what you don’t know until it happens. Test your chatbot before it launches, but don’t stop there. Keep refining your conversations even after you’ve launched.

What does the future hold for conversational AI?

There are many exciting things happening in AI right now, and we’re only on the cusp of delving into what it can really do.

The big prediction? For now, conversational AI will keep getting better at what it’s already doing. More human-like interactions, better problem-solving, and more in-depth analysis.

In fact, 75% of customers believe AI will become more natural and human-like over time. Gartner is also predicting big things for conversational AI, saying by 2026, conversational AI deployments within contact centers will reduce agent labor costs by $80 billion.

Why should you jump in now when bigger things are coming? It’s simple. You’ll learn to master conversational AI tools ahead of your competitors and earn an early competitive advantage.

How Quiq does conversational AI.

To ensure you give your customers the best experience, Quiq powers our entire platform with conversational AI. Here are a few stand-out ways Quiq uniquely improves your customer service with conversational AI.

Design customized chatbot conversations.

Create chatbot conversations so smooth and intuitive that it feels like you’re talking to a real person. Using the best conversational AI techniques, Quiq’s chatbot gives customers quick and intelligent responses for an up-leveled customer experience.

Help your agents respond to customers faster.

Make your agents more efficient with Quiq Compose. Quiq Compose uses conversational AI to suggest responses to customer questions. How? It uses information from similar conversations in the past to craft the best response.

Empower agent performance.

Tools like our Adaptive Response Timer (ADT) prioritizes conversations based on how fast or slow customers respond. The conversational AI platform also uses AI to analyze customer sentiment to give extra attention to customers who need it.

This is just the beginning.

This is just a taste of what conversational AI can do. See how Quiq can apply the latest technology to your contact center to help you deliver exceptional customer service.

Contact Us

How to Create an Effective Business Text Messaging Strategy – The Ultimate Guide

U up? Text messaging replaced other communication methods for consumers all over the world. So why wouldn’t that extend to businesses?

Business text messaging is a great way to communicate with customers on their terms in their own messaging app. But it can be a challenge when you don’t have a plan.

Customer service is complex on its own, so taking it to a new medium only makes it harder. Knowing how to create an effective customer service text message strategy is the key to succeeding in today’s competitive market.

Why bother with business text messaging?

If you still think text messaging is a new-fangled fad, we’re here to open your eyes to the possibilities. (If you’re already rocking a text messaging strategy and just want to know how to improve it, feel free to skip to the next section—we won’t be offended.)

Your competitors are using it.

While you’re sleeping on text messaging (maybe you still think texting is for sending memes to friends, not business conversations), your competitors have jumped on business messaging and are seeing great returns.

In 2020, business messaging traffic hit 3.5 trillion. That’s up from 3.2 trillion in 2019, a 9.4% year-over-year increase, reports Juniper Research.

You can use business text messaging for all kinds of applications. Here are a few ideas to get your thought train started:

  • Customer support conversations
  • Outbound marketing messages
  • Appointment scheduling
  • Call-to-text in your interactive voice response (IVR) system
  • Complete one-off transactions
  • Use it as an engagement tool

Many businesses have found ways to use text messaging to interact with their customers, and now customers want and expect it.

People respond faster to text messages.

Text messaging has the benefit of being both a quick form of communication and a forgiving one.

Here’s what we mean. According to Forbes, it takes the average person 90 minutes to respond to an email but only 90 seconds to respond to a text message. So customers generally expect quick responses during a text conversation.

However, since the other person’s availability isn’t expected (like it might be with live chat), there’s typically some wiggle room.

So conversations are more likely to follow the customers’ preferred pace. It works when they’re ready for a quick chat, but they can step away whenever they need to.

Your customers want to message you.

Forbes also reported that 74% of customers say their impressions improve when businesses use text messaging. And it makes sense. Customers know how to use text messaging. They don’t have to download a new app or find your website.

When you use text messaging, you fit into your customers’ lives. You’re not asking them to do anything out of the ordinary—and they appreciate that.

If you’re still not convinced, here are nine more reasons why you should consider business text messaging.

Start by dissecting your current text messaging strategy.

Since text messaging is a unique medium with so many aspects to consider, you need a thorough strategy for success. Start by identifying the essentials.

What’s your purpose?

How are you using text messaging? Is it a revenue-driving channel? Are you using it for IVR system overflow? Customer service?

Pick a starting path. Trying to do all the things at once leads to a muddled strategy. Identify why you’re adding text messaging to your business. By starting small and focused, you’ll have the bandwidth to see what’s not working and fix it.

Who’s your audience?

Identify who you’re texting. While nearly every generation uses text messaging on a regular basis, they all use it in different ways. To start, identify who you’re targeting with text messaging. Consider:

  • Demographics like age, location, and income
  • Psychographics like lifestyle, preferences, and needs

Figure out how different audiences want to interact with you using messaging. For example, twenty-something single men will have different preferences than 40-something mothers.

Contact Us

Use what you know to create a voice guide.

This is where phone-based customer service and text messaging customer service start to diverge. Since words have more weight when written (said the writer), it’s important to give your customer service team some direction.

Put everything you learned in the last section and put it together to decide on the tone of voice for your audience. If you’ve gone through this exercise with your marketing team, you can certainly use what they have and adapt it to fit your customer service and text messaging applications.

Pick your tone.

Text messaging is inherently a more casual medium than email or even voice. But that doesn’t mean you should send text slang. Tailor to your audience and your industry.

For example, if you’re selling luxury air travel to middle-aged business travelers, a professional tone is warranted. Avoid text acronyms, and skip the emojis and memes.

For an audience full of elder millennials with an affinity for plants, include emojis and memes. Stay friendly, upbeat, and as positive as possible.

However, if your audience is filled with college students, keep your tone friendly and to the point, but skip the emojis. Apparently, they’re cheugy ‍♀️.

Create parameters.

Deciding on your tone of voice is only as helpful as the guidelines that go with it. Think about telling your customer service team that emojis are okay, only to see this: ❣️❣️❣️

That might be overkill, but you get the point. Put guidelines in place, like maybe they can use three to five emojis per conversation but never more than one per text message.

Do the same for the tone of voice. Provide examples of what “professional” means and how it compares to “friendly.” If you’re already using text messaging in your customer support center, pull some examples directly from past conversations.

How to solve problems in a bite-sized format.

SMS texting has 160 characters—that’s not a lot of space to solve customer problems. There’s a lot to consider to keep the conversation flowing toward a quick resolution. Start with these steps.

Step 1: Introduce yourself.

There’s a lot of spam in the texting world. Whether the customer reached out to you or you’re sending a message (after they’ve opted-in, of course), make sure to introduce yourself just as you would on any other channel.

Step 2: Ask the customer to describe the problem.

Before you can solve the problem, you have to know what it is. Ask probing questions to determine the issue. If it’s an issue that can be seen visually, you can even ask for pictures or videos so you can identify the problem easier and exceed user expectations.

Step 3: Keep answers as simple as possible.

With so little space, you want to ensure messages are easy to understand. While SMS is limited to 160 characters, don’t be afraid to send two messages if that’ll help your customer understand the solution better. Just don’t forget to include an indicator that you’re sending multiple messages (e.g., 1 of 2).

Step 4: Include relevant links, videos, or diagrams.

If you’re using rich messaging, send whatever medium will help your customer solve their problems.

The dos and don’ts of business text messaging.

As you plan and launch your messaging strategy, keep these dos and don’ts in mind.

Do develop a prioritization system.

Prioritization plays a major role in organizing the process and improving customer service efficiency. As questions arise, it can help prioritize them based on urgency and order of importance. This helps ensure that troubleshooting questions and general issues are addressed as quickly as possible. Less urgent questions may be able to wait a little longer if necessary.

Here are a couple of examples of ways you can segment customer service questions in order to prioritize them:

  • The order the questions come in: Do you have a first-in-first-out method?
  • Customer sentiment: Are they frustrated or neutral?
  • Urgent question vs. non-urgent question: What can wait?
  • The service or product they’re asking about: Are some more important? Are there certain team members who can handle certain questions?
  • Members vs. nonmembers: Do members get special priority if you have a special program?
  • Self-prioritization: Ask customers directly how urgent their request is.

The best method is to combine these factors to create a foolproof prioritization system. For example, how would you prioritize a frustrated member with a nonurgent question over a neutral, nonmember’s urgent question? Make sure your AI conversational platform and/or customer service agents prioritize according to your guidance.

Don’t be afraid to ask clarifying questions.

Text messaging is a short medium—but it also lends itself to quick back-and-forth communication. When one small miscommunication can derail a conversation and drive away your customer, it’s imperative that you ask clarifying questions.

Without understanding the problem, you can’t find a solution. If someone has a complex or confusing question, break the question down into parts or ask for clarification. You can send messages like these:

  • “What do you mean when you say [X]?”
  • “Do you mean [Y] when you said [X]?
  • “Can you give me some background on the issue?”
  • “Can you give me an example of when [Z] happened?

Since text messaging can be a limited medium, it’s important to follow up so you understand the problem as best as you can. If you’re still having trouble, don’t be afraid to move to a voice call.

Do make answers clear and understandable.

Communicating with consumers is all about being clear and concise. People come from all types of situations and educational backgrounds, so every customer support agent needs to know how to type a message that’s easy to understand and digest.

A customer who is engaged in the conversation will be more likely to seek help again. Instead of texting long, detailed messages, it’s best to simplify replies into one or two sentences that contain the necessary information. This helps drive more productive conversations and leaves more consumers satisfied at the end of the day.

Don’t forget to follow up with customers.

When a customer has an issue or question, they want to know they’re not just a number. One effective way to show this is by following up after addressing the issue.

Is the consumer satisfied? Do they have any more questions? Do they have any constructive feedback to offer? By asking what they can do to make the customer experience better, customer support agents show that they’re willing to listen and adapt as needed. This can go a long way toward building strong professional relationships.

Do use artificial intelligence to enhance customer service.

There are many ways to use artificial intelligence (AI) to make your business text messaging better. AI can make your agents faster, help serve customers when no one’s around, and even reduce your customer service ticket volume.

  • Predict customer sentiment: A conversational AI platform, like Quiq, can pick up on queues from customers to predict how their feeling so you can prioritize customers whose anger is escalating.
  • Help agents compose messages: Some platforms use natural language processing (NPL) to observe your agents’ responses and suggest sentences as they type. This will help agents stay on tone and write messages more quickly.
  • Respond to customers: Unless your message center is staffed 24/7, messages won’t get answered when no one’s available. That’s where chatbots come in. They can contain conversations by answering simple questions, automating surveys, and even collecting information to route questions to the right agent.

Build business messaging into your business.

Business messaging, whether for customer service, marketing, or even sales, is a great asset to your business—and a great way to engage your customers. But remember: don’t go in blind. Create a thoughtful strategy and see just how quickly your customers respond.

Customer Service in the Travel Industry: How to Do More with Less

Doing more with less is nothing new for the travel industry. It’s been tough out there for the last few years—and while the future is bright, travel and tourism businesses are still facing a labor shortage that’s causing customer satisfaction to plummet.

While HR leaders are facing the labor shortage head-on with recruiting tactics and budget increases, customer service teams need to search for ways to provide the service the industry is known for without the extra body count.

In other words… You need to do more with less.

The best way to do that is with a conversational AI platform. Whether a hotel, airline, car rental company or experience provider, you can provide superior service to your customers without overworking your support team.

Keep reading to take a look at the state of the travel industry’s labor shortage and how you can still provide exceptional customer service.

Travel is back, but labor is not.

In 2019, the travel and tourism industry accounted for 1 in 10 jobs around the world. Then the pandemic happened, and the industry lost 62 million jobs overnight, according to the World Travel & Tourism Council (WTTC).

Now that most travel restrictions, capacity limits, and safety restrictions are lifted, much of the world is ready to travel again. The pent-up demand has caused the tourism and travel industry to outpace overall economic growth. In 2021, the GDP grew by 21.7%, while the overall economy only grew by 5.8%, according to the WTTC.

In 2021, travel added 18.2 million jobs globally, making it difficult to keep up with labor demands. In the U.S., 1 in 9 jobs went unfilled in 2021.

What’s causing the shortage? A combination of factors:

  • Flexibility: Over the last few years, there has been a mindset shift when it comes to work-life balance. Many people aren’t willing to give up weekends and holidays with their families to work in hospitality.
  • Safety: Many jobs in hospitality work on the frontline, interacting with the public on a regular basis. Even though the pandemic has cooled in most parts of the world, some workers are still hesitant to work face-to-face. This goes double for older workers and those with health concerns, who may have either switched industries or dropped out of the workforce altogether.
  • Remote work: The pandemic made remote work more feasible for many industries, and travel requires a lot of in-person work and interactions.

How is the labor shortage impacting customer service?

As much as we try to separate those shortages from affecting service, customers feel it. According to the American Customer Satisfaction Index, hotel guests were 2.7% less satisfied overall between 2021 and 2022. Airlines and car rental companies also dropped 1.3% each.

While there are likely multiple reasons factoring into lower customer satisfaction rates, there’s no denying that the labor shortage has an impact.

As travel ramps back up, there’s an opportunity to reshape the industry at a fundamental level. The world is ready to travel again, but demand is outpacing your ability to grow. While HR is hard at work recruiting new team members, it’s time to look at your operations and see what you can do to deliver great customer service without adding to your staff.

What a conversational AI platform can do in the travel industry.

First, what is conversational AI? Conversational AI combines multiple technologies (like machine learning and natural language processing) to enable human-like interactions between people and computers. For your customer service team, this means there’s a coworker that never sleeps, never argues, and seems to have all the answers.

A conversational AI platform like Quiq can help support your travel business’s customer service team with tools designed to speed conversations and improve your brand experience.

In short, a conversational AI platform can help businesses in the travel industry provide excellent customer service despite the current labor shortage. Here’s how.

Contact Us

Resolve issues faster with conversational AI support.

When you’re short-staffed, you can’t afford inefficient customer conversations. Switching from voice-based customer service to messaging comes with its own set of benefits.

Using natural language processing (NLP), a conversational AI platform can identify customer intent based on their actions or conversational cues. For example, if a customer is stuck on the booking page, maybe they have a question about the cancellation policy. By starting with some basic customer knowledge, chatbots or human agents can go into the conversation with context and get to the root of the problem faster.

Conversational AI platforms can also route conversations to the right agent, so agents spend less time gathering information and more time solving the problem. Plus, messaging’s asynchronous nature means customer service representatives can handle 6–8 conversations at once instead of working one-on-one. But conversational AI for customer service provides even more opportunities for speed.

Anytime access to your customer service team.

Many times, workers leaving the travel industry cite a lack of schedule flexibility as one of their reasons for leaving. Customer service doesn’t stop at 5 o’clock, and support agents end up working odd hours like weekends and holidays. Plus, when you’re short-staffed, it’s harder to cover shifts outside of normal business hours.

Chatbots can help provide customer service 24/7. If you don’t already provide anytime customer service support, you can use chatbots to answer simple questions and route the more complex questions to a live agent to handle the next day. Or, if you already have staff working evening shifts, you can use chatbots to support them. You’ll require fewer human agents during off times while your chatbot can pick up the slack.

Connect with customers in any language.

Five-star experiences start with understanding. You’re in the travel business, so it’s not unlikely that you’ll encounter people who speak different languages. When you’re short-staffed, it’s hard to ensure you have enough multilingual support agents to accommodate your customers.

Conversational AI platforms like Quiq offer translation capabilities. Customers can get the help they need in their native language—even if you don’t have a translator on staff.

Work-from-anywhere capabilities.

One of the labor shortage’s root causes is the move to remote work. Many customer-facing jobs require working in person. That limits your labor pool to people within the immediate area. The high cost of living in cities with increased tourism can push locals out.

Moving to a remote-capable conversational tool will expand your applicant pool outside your immediate area. You can attract a wider range of talented customer service agents to help you fill open positions.

Build automation to anticipate customer needs.

A great way to reduce the strain on a short-staffed customer service team? Prevent problems before they happen.

A lot of customer service inquiries are simple, routine questions that agents have to answer every day. Questions about cancellation policies, cleaning and safety measures, or special requests happen often—and can all be handled using automation.

Use conversational AI to set up personalized messages based on behavioral or timed triggers. Here are a few examples:

  • When customers book a vacation: Automatically send a confirmation text message with their booking information, cancellation policy, and check-in procedures.
  • The day before check-in: Send a reminder with check-in procedures, along with an option for any special requests.
  • During their vacation: Offer up excursion ideas, local restaurant reservations, and more. You can even book the reservation or complete the transaction right within the messaging platform.
  • After an excursion: Send a survey to collect feedback and give customers an outlet for their positive or negative feedback.

By anticipating these customer needs, your agents won’t have to spend as much time fielding simple questions. And the easy ones that do come in can be handled by your chatbot, leaving only more complex issues for your smaller team.

Don’t let a short staff take away from your customer service.

There are few opportunities to make something both cheaper and better. Quiq is one of them. Quiq’s conversational AI Platform isn’t just a stop-gap solution while the labor market catches up with the travel industry’s needs. It will actually improve your customer service experience while helping you do more with less.

7 Ways AI Chatbots Improve Customer Service

If you’ve been using business messaging for a while, you know easy and convenient it is for your customers—and its impact on your customer service team’s output.

With Quiq’s robust messaging platform, it’s easy for contact centers to manage customer conversations while boosting conversion rates, increasing engagement, and reducing costs. But our little slice of digital nirvana only gets better when you add chatbots into the mix.

Enter the business messaging bot. Bots can help increase your agent productivity while delivering an even better customer experience.

We’re diving into seven times business messaging bots made a customer conversation faster and better.

1. Collect customer information upfront.

Let’s say, for example, you own an airline with a great reward program. With Quiq, you can create a bot that greets all your customers right away and asks them to enter their rewards number if they have one.

This “reward bot” will use the information gathered to help recognize platinum-status members—your most elite program. The reward bot reroutes platinum members to a special VIP queue where wait times are shorter and they receive higher support. This is done consistently and without hesitation. Your platinum members don’t have to wade through the customer service queue. It makes them feel more valued and more likely to continue flying with you in the future.

The reward bot can also collect other information, such as confirmation numbers for reservations, updated email addresses, or contact numbers. All of this data gathering can be done before a human agent steps into the conversation. The support chatbot has done the work to arm the agent with the information they need to deliver better service.

2. Decrease customer abandonment.

Acknowledging customers with a fast, friendly greeting lets them know they’ve started on a path to resolution. Agents may be busy with other conversations (we’ve seen agents handle upwards of eight at a time), but that doesn’t mean the customer can’t start engaging with your business. A support chatbot can greet customers immediately while agents are busy.

Instead of waiting in a stagnant queue over the phone or trying to talk to a live chat agent (also known as web chat) who has disappeared, a bot can send a welcome message and let the customer know when they’ll receive a response from a human agent.

3. Get faster, more accurate customer responses.

Remember the last time you had to spell your name out over the phone or repeat your birthday again and again because the call bot couldn’t pick it up? Conversational chatbots eliminate that frustration and ensure it collects fast and accurate information from the customer every time.

Over messaging, the customer can see the data they’re providing and confirm right away if there’s an error. The customer can at least reference the information and catch any typos in their email address or that they’ve provided their old phone number. It happens.

4. Prioritize customer conversations.

In our above example, the reward bot was able to recognize platinum rewards members so they could get the perks that came with their membership. Chatbots can help you prioritize conversations in other ways too.

For example, you can set rules within Quiq to recognize keywords such as “buy” or “purchase” to prioritize customers who may need help with a transaction. Depending on the situation, the platform can prioritize that conversation (likely with high purchase intent) over a password reset or return.

A chatbot platform like Quiq can also use natural language processing (NLP) to predict customer sentiment and prioritize based on that. That way, you can identify a frustrated customer and bump them up in the queue to handle the problem before it escalates.

Contact Us

5. Get customers to the right place.

Chatbots can help route customers to the appropriate department, agent, or even another support bot for help. Much like a call routing system (but more sophisticated), a chatbot can identify a customer’s problem and save them from bouncing around between support agents.

The simplest example is when a bot greets customers and asks, “What can I help you with today?” The bot can either present the user with several options or let them state their problem. A customer can then be routed directly to the support agent best fit for solving their problem.

This also eliminates the need for customers to repeat themselves at each step of the way. Instead of having to explain their situation to the call router and then again to the service agent, the chatbot hands off the messages to the human agent. The agent already knows the problem and can start searching for a solution right away.

6. Reschedule appointments.

Appointment scheduling and rescheduling is a time-consuming and frustrating process. Chatbots can help you reduce delays, ensuring customers avoid back-and-forth emails and long hold times just to move an appointment.

With Quiq business messaging, you can present customers with available dates and times. Customers can choose and confirm a date from available calendar options.

A support chatbot with the right integrations can help present customers with available dates to choose from and schedule the selected appointment.

7. Collect feedback for even more improvement.

Businesses shouldn’t underestimate the power of feedback. Believing you know what customers want and actually asking them can lead to completely different results. Yet, the biggest roadblock to collecting feedback is distributing the survey at the moment when it counts.

A support chatbot can ensure every customer service interaction is followed up with a survey. You can program the bot to send unique surveys based on the conversation and get specific feedback on the spot. Collecting that survey information and putting it into place will help your team improve.

Take the Leap with Quiq.

Implementing customer service chatbots within your organization may seem intimidating now, but Quiq can help you navigate it. We can help you orchestrate bots throughout your organization, whether you need one or many.

With Quiq, you can design conversational experiences your customers will love. Once you create a bot, you can run it across all of our supported channels to deliver a consistent experience no matter where your customers are.

How to Deal with Angry Customers

The worst part of customer service?

Dealing with angry customers.

It’s hands-down the most stressful, uncomfortable part of the job. But it can also make the biggest difference to your business—when you do it right.

Continue reading to see how to handle angry customers.

Why respond to angry customers’ messages at all?

Many of us were taught to turn the other cheek as children. Ignore the kid throwing a fit (it’s about them, not you). And while that is sage advice for many situations, it’s not the best way to handle your angry customers.

Even a casual, “This product sucks!” or “Worst service ever!” deserves a response. It’s easy to delete the comment or ignore the message, but addressing it has some benefits.

Customers are actually pretty forgiving of companies they already frequent. Zendesk’s CX Trends Report says that 74% of customers will forgive a company for its mistake if they receive excellent customer service.

Plus, 81% of customers are more likely to make another purchase after a positive customer service experience, while 76% of customers will switch to a competitor after several bad experiences.

In one study published in the Harvard Business Review, customers who received responses on Twitter from airline and wireless customer service teams saw Net Promoter Score® increases of 37 and 59 points, respectively—a big jump considering NPS® only has a range from -100 to 100.

For airlines, you can see the difference in dollars. When customers complain, how quickly an agent responds correlates with how much more the customer is willing to pay in the future.

  • Under 5 minutes: $19.83 more
  • 6–20 minutes: $8.53 more
  • 21–59 minutes: $3.19 more
  • 60 minutes or more: $2.33

Note that any response at all turns the situation around and helps improve customer perception.

Contact Us

How to deal with angry customers.

When angry customers reach your customer service agents, there’s usually a legitimate issue. Take a look at some of the ways you can turn angry customers into happy ones.

Be proactive about known problems.

It happens. A service falls between the cracks, products get damaged, and shipments get delayed. It’s all about how you handle it.

Customers don’t want to feel like you’re trying to get one over on them. They’ll feel cheated, and you’ll lose their trust. Instead, get ahead of problems by communicating with customers as soon as your team notices an issue. Use outbound text messaging to ensure your message is received (and that it doesn’t end up in the junk mail folder).

When things go drastically wrong, send a message that hits these 5 points:

  1. State the issue.
  2. Apologize.
  3. Offer a solution/discount.
  4. Assure them it won’t happen again (if it’s in your control).
  5. Thank them for their continued support.

Lean into conversational support tools.

A positive (but sometimes negative) benefit of messaging is that you’re always easily accessible to the customer. So it’s not unusual to see more complaints through messaging than you might see through traditional phone and email communication methods.

Yet messaging is a great channel to work with angry customers, especially when you have the right tools in place to help you.

While customers might be more likely to offer negative or inappropriate comments through messaging, it also gives your service agents a chance to respond calmly and succinctly. (Something that isn’t so easy to do when someone is yelling at you on the other end of a phone call.)

As we mentioned earlier in the HBR study, responding quickly to angry messages goes a long way. How do you know which customers to prioritize? Look for conversational AI platforms that use sentiment analysis to help you prioritize tickets. Bump angry or unhappy customers to the top of the list to ensure a fast response time.

Diffuse the situation with empathy.

When customers send upset or angry messages, it’s still easy to get flustered or respond with short, superficial answers. Follow these 5 steps to diffuse the situation.

  1. Remain calm: We know—it’s easier said than done. But if you respond aggressively or defensively, it’ll only make things worse. Remember that the customer is angry at the company, not at you. If you’re having trouble keeping your cool, bring in a manager sooner than later (that’s what they’re there for).
  2. Show empathy and validate their concerns: Use phrases like “I understand” and repeat back their problems to show you’re paying attention. Feeling like they aren’t being heard is often a customer’s top complaint, so show that you’re listening.
  3. Don’t argue: We know it’s tempting, but don’t fall into a debate on the state of the world. Instead of trying to disprove every point, they’re making, stick to the facts and what you can do to solve the problem. If the customer keeps pushing, simply repeat what you can do for them and how you can make it right.
  4. Apologize: As long as your company policy allows it (and it should), apologize for the problem and accept responsibility. This will ensure the customer feels heard and knows you’re not just trying to push the issue under the rug.
  5. Offer a solution: Once you’ve figured out the problem, try to find a solution that works within the bounds of your capabilities and satisfies the customer. Sometimes that’s a full refund. Sometimes it’s just a discount on their next purchase. Identify the severity of the problem and respond accordingly.

Be kind to the customer and yourself.

Remember, a human being is behind that angry message, with their own lives, worries, and stressors. While the mistake or issue they’re coming to you with may have been the spark, their anger is fueled by other things going on in their lives. The best thing you can do for them is to remain positive (or, at the very least, neutral) and find a solution to their immediate problem.

But it’s also important to give yourself (and your team) some grace. Customer service agents face a lot of pressure on all sides, and sometimes that mean-spirited message can be the breaking point. If agents are feeling overwhelmed, do these 3 things:

  1. Bring in a manager: If you’re overwhelmed, or the customer is using threatening and inappropriate language, notify a manager immediately. They can help you de-escalate the situation or decide to terminate the relationship if the customer has crossed a line.
  2. Take a break: If possible, step away from your computer after a challenging situation. Stand up, stretch, take a quick walk, or have a bite to eat. Do what you can to shake off the conversation. (Sometimes, literally shaking your hands and limbs helps!)
  3. Disconnect: Make sure to use your vacation time to disconnect from your computer and truly relax. You’ll be better off for it.

Offense is the best defense.

You’ve heard that phrase, right? It means that if you get ahead and stay ahead, you won’t have to defend as often.

If you’re constantly getting bad customer feedback, there’s probably a disconnect between what you think the customers want and what they actually want.

Instead of going off assumptions, collect information from your customers.

  1. Figure out what type of feedback you want to collect: There are different kinds of information you can gather from your customers. Think product reviews, customer satisfaction surveys, and NPS surveys, to name a few. They all have their place but start with the ones that will have the biggest impact on your current customer concerns.
  2. Roll feedback requests into your existing processes: Send product survey requests shortly after customers receive the product, or ask customers to fill out satisfaction surveys at the end of customer service interactions. Use the processes you already have so you don’t add too much work to your already overwhelmed team.
  3. Measure, evaluate, adjust, and repeat: Metrics work best when you look at your numbers over time. Continue to collect customer feedback and improve your products, services, and processes.

Rebuild customer trust.

When customer trust is so hard to earn, you want to do whatever it takes to keep from losing it. Turning an angry customer into a loyal one isn’t as much of a lost cause as you might think.

Regain customer trust by:

  1. Admitting fault: Yes, we mentioned this earlier, but it’s a big sticking point for many customers. Admitting that someone somewhere actually made a mistake is the first step toward repairing the relationship.
  2. Use sincere, positive language: If you’re unsure how to solve something, use phrases like, “Let me find out for you” or “Let’s figure this out together.” The customer is more likely to exhibit patience, and you won’t add any fuel to the fire with negative language.
  3. Follow up when you say you’re going to: If you can’t solve the issue immediately, schedule a time to follow up with the customer—and don’t forget. Even if you haven’t been able to solve the problem yet, reach out to keep them informed. Doing what you say you’re going to do will help build trust, but it’ll backfire if you forget.

Move forward.

We all have to deal with angry customers, but it’s about how you repair the relationship and move forward. These steps are just a starting point. They’ll help rebuild the bridge between your company and your customer.

15 Ways to Build Customer Rapport

Key Takeaways

  • Always start with a friendly introduction and ask for the customer’s name – it helps humanize the conversation.
  • Match your tone and style to the customer’s messaging style (formal vs. casual) and use their name throughout.
  • Add personalization, use past purchase info, preferences, etc., to make messages feel more meaningful and relevant.
  • Let frustrated customers speak first before pivoting to solutions, and always respond with specificity.
  • Be consistent and trustworthy: follow through on what you promise, stay positive in phrasing, and occasionally break from scripts to show authenticity.

Messaging is quick. It’s casual. It’s easy to breeze through the pleasantries and get straight to the point. But service agents still need to build customer rapport.

It’s harder to do over messaging, but it’s more important than ever in customer serviceespecially if your company does most of its business online. It’s easy for customers to change brands when things go wrong. In fact, 61% of customers say they’ll switch brands after just one bad customer service experience.

To bridge the digital divide, customer service agents need to build customer rapport with every interaction. By utilizing these quick ways to build rapport, you’ll also foster customer trust and loyalty.

What Is Customer Rapport?

Customer rapport is the sense of trust and connection that forms between your team and your customers. It’s what turns a simple exchange into a genuine conversation. When rapport is strong, customers feel understood, valued, and confident that your brand has their best interests in mind.

So, you might be wondering how to build customer rapport. Continue reading to check out these 15 easy ways to enhance customer interactions.

1. Start With Introductions

Start messaging conversations with a simple “Hello, my name is _______.” Just because messaging is the more casual channel doesn’t mean niceties go out the window.

Once you’ve introduced yourself, ask for the customers’ names as well. These simple touches are a fast way to put the customer at ease, and it’s one of the quickest ways to build rapport. This also goes a long way in helping personalize the experience.

Quick tip: This goes for AI Agents, too! Whether you name your bot or not, tell the customer they’re talking to AI. Being upfront leads to more trust, and you guessed it, a better rapport.

2. Add Call-to-Text to Your IVR

Customers don’t want to wait on hold, but it happens. When you’re down a few agents or dealing with heavy call volume, give your customers another way to connect with call-to-text.

Adding call-to-text into your IVR menu makes it easy to transition to messaging and lets your customers go about their days while still getting assistance. They’re not stuck on hold, growing angrier by the minute.

3. Use Channels Your Customers Are On

It’s hard to build rapport with customers who are in unfamiliar territory. For example, if your agents are only available via web chat (also known as live chat), but your customers are used to texting, this will immediately put up a wall between you. They’re adapting their communication methods to fit your business when it should be the other way around.

Instead, pick communication channels that your customers frequent. In fact, 53% of customers want to use communications channels that are familiar to them, according to Zendesk. When you pick channels they use to chat with friends and family, they’re more likely to connect with your brand.

Quick tip: Conversational AI platforms can help you manage multiple channels all from one central dashboard.

4. Offer a Digital Smile

Most customer service advice starts with a smile – but how do you do that over messaging? Think of it as smiling with your voice. It’s all about using a friendly tone in your writing. Be positive and enthusiastic by showing enthusiasm with exclamation points, emojis (if your brand voice allows), and quick responses to make the conversation feel warm and natural.

5. Establish Trust through Mirroring

One critical way to build customer rapport is to match the customer’s conversation style. For in-person conversations, they call it “mirroring.” It’s when you match the other person’s body language. (You’ve probably seen it taken to the extreme on TV for laughs.) Many people do it unconsciously, but it’s a handy way to instantly connect with people.

But how do you do this over messaging? Match their conversation style. If they’re writing out full formal paragraphs, give them thorough responses and avoid any slang. If they’re using text abbreviations, keep it short and casual. You could even throw in some emojis, but maybe avoid using your own abbreviations. (Too much room for miscommunications.)

6. Use the Customer’s Name

You asked the customer’s name, so you should use it. People perk up at the mention of their own name, so using it to punctuate your messages will keep them interested in your responses.

This is especially helpful over messaging since it’s often asynchronous (both parties don’t need to be present at the same time). They’re probably going about their day or dealing with distractions, but the mention of their name will grab their attention so you can finish the conversation. As a pro tip, though, make sure you always remember their name!

7. Ask Questions

Customer service is supposed to be helpful. However, with the pressure to serve more customers in less time and the metrics that reinforce it, agents can often speed through conversations by doing the bare minimum, so it’s important that you be helpful, beyond answering questions.

Yes, speed is important – but so is being helpful! If you’re in the travel industry, ask questions and then provide some recommendations on what to do when your customers get to their destination. If you’re in retail, take a look at what the customer has bought in the past and offer some recommendations. Is there a better account-level tier they could take advantage of for your software? Suggest it!

And since 52% of customers are open to product recommendations for agents, according to Zendesk, it’s also a great opportunity for cross-selling and upselling (as long as you do it in the customer’s best interest). Customers will appreciate the advice and feel like you care about them.

8. Practice Active Listening

Active listening over messaging means proving you’re paying attention – even without body language or verbal cues. Instead of simply replying, show customers you’ve truly read and understood their message. Paraphrase their concern (“It sounds like you’re having trouble logging in”) or restate key details before offering help. This builds confidence that you’re not just sending canned responses.

You can also use short acknowledgments like “Got it!” or “That makes sense!” to keep the conversation feeling natural and responsive. If something’s unclear, don’t be afraid to ask questions instead of just guessing. These small signals of attention replace the nods and eye contact of an in-person exchange – and remind customers that there’s someone who cares behind the screen.

9. Be Empathetic

Empathy is the heart of customer rapport, and it’s just as important in a digital conversation as it is face-to-face. Over messaging, empathy means taking the extra moment to acknowledge a customer’s frustration, confusion, or urgency before offering solutions. When you’re handling angry customers, it’s especially powerful to pause and validate their feelings before diving into fixes. A simple “I can see how that would be frustrating” or “Thanks for your patience while we sort this out” shows understanding and human connection, even in a short exchange.

Tone plays a big role in conveying empathy through text. Choose words that sound warm and conversational, avoid overly formal phrasing, and mirror the customer’s energy. If they’re stressed, keep your tone calm and reassuring; if they’re upbeat, match their enthusiasm. These small adjustments help customers feel cared for and heard – turning a quick chat into a meaningful interaction.

10. Personalize the Conversation

Always start with the customer’s name, but that’s not the only information you should use in your messaging conversations. According to McKinsey, 71% of consumers expect personalization, and 76% get frustrated when they don’t find it.

The less information you have to pull from the customer, the better. According to Zendesk, 72% of customers expect agents to have access to all relevant information. Go beyond simple account information, and look at data like:

  • Past purchases
  • Purchase frequency
  • Messaging preferences
  • Product preferences

Then you can design a conversation that feels personal and meaningful to your customers, making it an easy way to build customer rapport.

11. Handling Angry Customers

We know customer service isn’t all sunshine and rainbows. When angry customers reach out via messaging, agents should tread lightly. While it’s easy to jump in with the next steps (typically a short apology and some kind of solution), that’s not the only thing the customer wants.

Frustrated customers typically want their issues validated first. That means letting them type out their frustrations before moving on. Once they’ve had the appropriate space to share their concerns, read their messages at least twice before responding.

12. Speak Clearly and Transparently

The difficulty with messaging is that certain words and phrases can come off as rote and insincere. Saying “I’m sorry for the inconvenience” or even “We appreciate your business” sounds impersonal.

Instead, get specific to your customer’s problem. Say that you’re sorry that the cleats they ordered came damaged – especially since they’ve been eyeing them for months. Apologize that their package was delayed and that their daughter didn’t get her cleats in time for her first day of softball practice. Being specific will make customers feel more comfortable and understood.

13. Veer Off Script

Whether you have an actual script or conversation guidelines to follow, it’s okay to throw them out the window sometimes. Ask customers about their interests, mention that you love (and own!) the trousers they picked out, or compliment them on their destination choice.

Although you may have to work a little harder on messaging, these types of comments and compliments show customers that you’re a real person and you’re interested in them as a real person, too.

14. Keep Your Responses Positive

This is an old customer service trick that works very well over messaging, plus it’s a quick way to build rapport. Try to turn your phrases so that they remain positive, even if you’re saying something negative.

For example, instead of saying, “I don’t know the answer,” you can say something like, “Let me find that answer for you.” Or, instead of saying you can’t access the customer’s account without their credentials, ask for permission to access their account. It’s kind of like a Jedi mind trick. You’re saying the same thing, but customers see your responses more positively. They’re less likely to get upset or feel put out.

15. Be Honest and Trustworthy

The best way to build rapport and gain customer trust? Do what you say you’re going to do.

Not all customer service inquiries can be solved in one conversation. If agents have to elevate the conversation to a higher service tier, if you need to check with a manager, or if there are any other issues at play, be honest with the customer about when you’ll get back to them, and then do it. Even if you’re just checking in to let them know you’re still working on resolving the issue, make sure you stay in contact.

Quick tip: Use outbound messaging over email for faster communication – and to ensure the message doesn’t get lost in junk mail.

Agentic AI for Building Customer Rapport

Messaging may remove some of the human signals we rely on – tone, face, gestures – but rapport doesn’t have to disappear. With thoughtful conversational design and strategic use of agentic AI, you can amplify the rapport-building techniques you’ve just read about. AI agents can maintain a warm tone, mirror conversation styles, and manage simple follow-ups so human agents can focus on higher-value, more empathetic interactions.

When you deploy AI that understands context and can act autonomously – while surfacing handoff opportunities when needed, you ensure every message feels responsive and personal. The result? You build more trust, reduce friction, and deepen connections at scale.

Frequently Asked Questions (FAQs)

What does building customer rapport mean?

Building customer rapport means creating a sense of trust, understanding, and connection between your business and your customers. It’s about making interactions feel human — even when they happen over digital channels like chat or SMS.

How can I build customer rapport through messaging?

You can build rapport through personalization, empathy, and responsiveness. Use the customer’s name, match their tone, acknowledge their concerns, and reply quickly. Even small touches, like a warm greeting or friendly punctuation, can make digital conversations feel more personal.

What’s the best way to handle angry customers over messaging?

When handling angry customers, start with empathy – acknowledge their frustration before offering solutions. Use calm, clear language and avoid defensive replies. Phrases like “I understand how that would be frustrating” or “Let’s fix this together” show that you care and are taking responsibility.

How does agentic AI help build customer rapport?

Agentic AI enhances rapport-building by recognizing context, mirroring tone, and automating thoughtful responses that still feel human. It can handle repetitive inquiries efficiently while surfacing complex or emotional issues for human agents – helping teams maintain consistency, empathy, and speed at scale.

Why is customer rapport important in digital communication?

Strong rapport leads to higher customer satisfaction, loyalty, and trust. In messaging, where tone and intent can easily get lost, rapport ensures your brand feels approachable and genuine – not robotic or distant.

Are You Tracking These 10 Help Desk Metrics?

Metrics are the lifeblood of help desks and contact centers. Most help desk leaders are using a variety of metrics to measure their team’s performance, but which data should you track?

Data can help drive success, but collecting the wrong metrics (and too many) can cause overwhelm and unnecessary stress on your team.

Traditionally, a help desk refers to IT or internal support. Over time, people have expanded the use of the phrase to include a service desk, general customer support, and customer service teams.

We’ve put together the 10 most vital help desk metrics you should track. Keep reading to learn what they are and how you can use them to improve your customer service.

1. Ticket volume

Your basic metric: How many tickets does your helpdesk receive over a given period of time? Use this information to track busy periods and make important decisions like how many agents you should hire.

2. Ticket channel distribution

This metric helps you track where your tickets are coming from. Do most of your customers use live chat (or web chat)? How many tickets come from Apple Messages for Business? Knowing how many tickets come through each channel will help you allocate resources. You’ll also know which channels to spend more time training your agents on.

3. Response time

Response time measures how fast your agents first respond to customers. This is a big deal for your customer experience. In fact, 83% of customers expect to interact with someone immediately when they contact a company, according to Salesforce’s State of the Connected Customer report.

Response time expectations often vary between channels. For example, customers reaching out on web chat expect an answer within minutes (if not seconds). Yet with channels like SMS/text messaging or email, customers are more forgiving of slower response times.

4. Open tickets vs. resolved tickets

How many tickets are coming in each day and how many are being resolved? This is a good indicator of agent performance and workload. A healthy help desk team will see roughly the same number of new tickets and resolved tickets each day.

You can quickly identify a problem with your team by looking at this metric. Too many unresolved tickets could mean you need to hire more agents, spend more time on training, or redistribute work so that tickets get resolved faster.

5. Average resolution time

Your average resolution time is a vital metric for measuring your help desk’s performance. How long it takes to resolve a customer inquiry directly impacts the customer experience. Resolution times will vary depending on the complexity of the tickets and your industry, but faster is almost always better.

Be sure to include the total time from when a customer first submits a ticket to when the agent closes it out. Yes, this includes response times too!

6. Conversations per agent

Track how many conversations your agents can manage over a given time period. Identify which agents are taking the most calls to see how you can redistribute the workload.

In a similar vein, you can also track your agents’ utilization rate (time spent solving customer issues divided by total time working). This will tell you which agents are overworked and which have time for extra tasks. Here’s a quick tip: Never aim for a 100% utilization rate. You’ll burn out employees and leave no time for administrative tasks.

7. First-contact resolution rate

Your first-contact resolution rate (FCR) measures how many tickets are solved on the first try. Since 80% of customers expect to solve complex problems by speaking to one agent, according to Salesforce, tracking this metric helps you identify if you’re meeting customer expectations.

Getting customers quick and painless answers often comes down to agent training and easy access to information. Use a conversational platform that easily integrates with your CRM or information databases so agents can pull product or customer info for a frictionless customer experience.

8. Containment rate

Containment rate measures how many people interact with a chatbot or IVR help options without speaking with a live agent. This metric helps you track how effective your chatbot conversations are. If too many people still need to switch to a live agent after talking to your chatbot, it can impact customer satisfaction.

Containment standards vary across industries, but with Quiq’s Conversational AI, contact centers see a 70% containment (or contact deflection) rate.

A word of caution: Use this metric with context. Containment shouldn’t be your top priority—helping customers should. While reducing agents’ workload (and operating costs while you’re at it) is beneficial, you don’t want to risk the customer experience to make it happen. Don’t make it more difficult for customers to reach live agents just to improve this metric. Instead, work to make your chatbots as helpful as possible while still giving customers the option to chat with a human.

9. Customer satisfaction

A fast and efficient help desk with the best metrics in the industry will still be the worst performing if customers aren’t happy. While numbers are important to keeping costs down, providing excellent customer service is the best way to keep sales up. According to Salesforce, 94% of customers say a positive customer service experience makes them more likely to purchase again.

Survey customers immediately after helpdesk interactions to ensure customers are leaving those conversations with answers and good feelings about your brand.

10. Agent satisfaction

While most of these metrics rely on agent performance, this one is surveying you. It’s easy to think you need agents to work harder and lower your operational costs. But don’t forget that pushing them too far will lead to stressed employees, burnout, and high turnover. Finding and training agents will cost you much more in the long run.

Survey agents on a regular basis to gauge their workload levels, see if they have the right tools and equipment, and ensure all levels of management are providing the support your agents need.

3 help desk best practices to keep in mind.

Metrics are important to keep your customer contact center running smoothly, but they can’t measure everything. Here are a few additional help desk metrics best practices to keep your contact center running smoothly.

1. Design chatbot conversations to solve problems—not put up roadblocks.

Chatbots are an integral part of a winning customer service strategy. They give customers 24/7 access to help, they help streamline agent conversations, and they reduce ticket volume. But don’t design the conversations as barriers to overcome to reach your live agents. Your customers shouldn’t have to perform the 12 labors of Hercules to reach Mt. Olympus.

Instead, design chatbots to answer common FAQs, collect information, troubleshoot problems, and other helpful tasks. Make sure you include an easy way for customers to connect with live agents and review the conversation so no one will have to repeat information.

2. Don’t keep help desk metrics in a silo.

These metrics are incredibly valuable for your customer service team, but they can also benefit the rest of your organization. If you suddenly have an influx of new ticket requests, maybe there’s a problem with a product. Maybe your web team needs to redesign a customer flow. If you’re seeing a shift from tickets via web chat to Facebook, that’s a good indicator that your customers spend more time there—information that will be helpful for your social media team.

It’s also important to look at your own help desk metrics in context with what’s going on in your organization. Don’t penalize agents for a large backlog when a new product release isn’t going well.

3. Build up your self-service options.

Whether it’s a knowledge-base, FAQ page, or AI chatbot (or hopefully all three), spend time and effort building out these resources. Giving customers the option to help themselves will reduce call volume and reduce the number of menial questions agents have to answer (which they’ll likely thank you for).

And it’s not just for the sake of your help desk team. Customers actually want more self-service options. According to Zendesk’s CX Trends report, 89% of customers will spend more with companies that allow them to find answers online without having to contact anyone.

Pick the right metrics to see your help desk performance soar.

There are so many potential help desk metrics that it’s easy to get overwhelmed. Zero in on those measuring the customer experience and your agency performance to gather the most relevant data and make the biggest impact on your business.

How Messaging Helps Hospitality Get Personal

The hospitality industry is, by nature, a very human business. Whether you’re in hotel and lodging, travel and transportation, food and beverage, or recreation, hospitality is all about creating a unique and personal experience for your guests. Have you considered how technology like SMS hospitality messaging can actually make guest experiences more personalized?

While technology has changed the game, it can sometimes feel antithetical to the warmth the hospitality industry is beloved for. However, when messaging tech is used correctly, it helps you do what the hospitality industry has always done best: Make human connections.

SMS hospitality messaging connects you to guests on their terms.

It’s exactly for this reason that messaging can help transform the customer experience by giving service providers a way to connect and engage with guests in an easy, convenient, and preferred way.

There are major opportunities to leverage SMS hospitality messaging in a way that doesn’t detract from the human connection—but adds to it. Messaging liberates guests from standing in line, waiting for an email, or sitting on hold.

In this article, we’ll explore how you can use SMS hospitality messaging to connect with customers and personalize the industry.

High tech and high touch.

Providing a memorable guest experience is part physical and observable. What thread count are the sheets? What’s the ambiance of the restaurant (do they have table cloths and sommeliers or barstools and air hockey)?

The intangibles are just as important to the overall experience—the care and attention of the staff, the ease of changing bookings, how payments are handled, etc. These smaller details are often your differentiators and play a big factor in how you make your customers feel.

SMS messaging can make all the difference. Instead of forcing customers to stand in lines, wait on hold, or hunt down information on in-room pamphlets, you can bring the service to them.

In fact, guests now expect it as a standard part of their hospitality experience.

The COVID-19 pandemic caused major disruption in the hospitality industry.

While travel is starting to return to 2019 levels (along with occupancy rates and room revenue, according to Accenture), it has permanently influenced customer expectations.

There are fewer business travelers, more local vacationers—and more digital nomads. This is reshaping the hospitality industry in everything from loyalty programs and digital amenities to a demand for SMS booking.

And customer service has only increased in importance.

Accenture’s Life Reimagined report says 53% of consumers think customer service has become more important than price—and 54% of consumers believe it will continue to be so over the next 12 months.

Transparency, clarity, and simplicity have become top decision drivers. More than half of customers who have reimagined life due to the pandemic say they would switch brands if the brand doesn’t create clear and easy options for contacting customer service, according to Accenture’s report.

For hospitality, text messaging is a natural step toward delivering high-touch experiences. Customers are already using their mobile devices to find fun things to do (70%), research destinations (66%), and book transportation or airfare (46%).

With so much emphasis already placed on mobile, a move to messaging is a natural and organic option that customers are likely already looking to do. Continuing to use customers’ mobile devices throughout their stay just makes sense.

In fact, messaging can enhance the customer’s experience across the entire guest journey.

Tap into the power of SMS hospitality messaging.

Messaging allows you to connect and engage with guests in a way that is already an important part of how they communicate daily.

SMS text messaging upgrades your customer communications with more than simple text conversations. Rich messaging brings the hospitality experience to your guests long before they reach your doorstep. With rich messaging, you can:

  • Process secure transactions, from SMS booking for hotel rooms and excursions to in-room upgrades and payments.
  • Send reservation reminders, confirmations, and up-to-the-minute notices.
  • Increase guest excitement with content like images, GIFs, videos, and more.

Easy ways to start using SMS/texting in hospitality.

Getting started with hospitality text messaging may seem overwhelming at first, but there are many ways to introduce it into your existing customer journeys.

Here are some examples of ways you can use SMS hospitality messaging to elevate experiences in hotels, restaurants, and recreational activities.

SMS for hotels.

  1. Answer pre-booking questions. While your website is a great booking tool, nothing beats one-on-one conversations. Your guests may have simple questions about things like early check-in or room preferences. Messaging helps you address these questions quickly and secure the booking right from your customers’ text messages.
  2. Use SMS booking. Schedule stays and process payments right from SMS/text messaging using rich messaging features. Your guests can book their trip right from their phones without ever having to make a phone call or wait on hold.
  3. Get guests excited about their stay. Your guest experience begins before they even arrive. Build the anticipation with a welcome message, semi-personalized itineraries, and local sites and events. If you know guests are there with children, send them itineraries that include amusement parks, a trip to the local zoo, or some family-friendly live shows. For couples on a romantic getaway, suggest date night ideas, local spas, or more secluded beaches. Sending a text message with these personalized touches will go a long way to build excitement and make guests feel welcomed.
  4. Streamline the check-in process. While we love vacations, traveling to get to them is another story. And it’s only gotten worse in the last few years with travel restrictions, fewer flights, and more crowds. When travelers finally reach their destination, they’re tired, frustrated, and likely want as little interaction as possible before reaching their beds. (In other words, they’re 3 of Snow White’s 7 dwarves: Grumpy, Sleepy, and Bashful.) Have guests complete the check-in process through SMS messaging so that all they have to do when they get to their destination is pick up their key. Digital keys are also becoming more popular and complete the contactless check-in experience.
  5. Handle in-room requests. Instead of forcing guests to decide between the front desk, guest services, maid services, and other departments on the hotel phone (not to mention waiting on hold), centralize in-room requests via SMS/text messaging. Quiq’s clients, including those within the hospitality segment, have found that servicing customers via messaging has reduced service costs and work time and increased customer satisfaction scores by 5–10 points.
  6. Close out stays with a bang. Offer a contactless checkout, removing the last bit of friction guests face as they leave your hotel. Plus, give them one final reminder of the excellent service and attention they received with a thank you message.
  7. Ask for reviews. If you’ve given the guest a memorable experience, they may be enticed to share it with others and become your brand ambassador.

Today, reviews are a critical part of the buyers’ process, and word of mouth can build or block that path to purchase.

Not only is this a great opportunity to instantly address any negative feedback, but you can also send exclusive offers and discounts to encourage guests to come back.

You can also encourage guests to share their positive comments about your business with their social networks.

SMS for restaurants.

  1. Accept reservations. Use rich messaging features to schedule reservations right from your guests’ mobile phones.
  2. Send reservation reminders. Help customers remember the reservation they made (especially if you’re booked out for weeks) with a friendly reservation reminder. A text message won’t get lost in junk mail, and you’ll decrease no-shows. SMS hospitality messaging to the rescue!
  3. Enable easy cancellations and rescheduling. Instead of holding a table for no-shows and missing out on potential revenue, give guests an easy way to cancel or reschedule their reservation ahead of time. They’ll be happy with the streamlined customer experience, and you’ll be able to fill those seats with last-minute reservations and walk-ins.
  4. Provide directions and parking information. Sure, everyone has Apple Maps or Waze, but parking can be a beast if you live in a high-tourism city. Add a link for directions and parking information to your appointment reminder to ensure your guests make it to your restaurant.
  5. Streamline take-out orders. Take-out has grown in popularity over the last few years. Since the COVID-19 pandemic, even fine-dining restaurants have jumped in on the action. 54% of adults say purchasing takeout or delivery food is essential to the way they live, including 72% of millennials and 66% of Gen Z adults, according to the National Restaurant Association’s 2022 State of the Restaurant Industry report. Use SMS/text messaging to confirm you’ve received an order, that it’s ready for pickup, or that it’s out for delivery.
  6. Ask for reviews. Restaurants live and die by their online reviews. Encourage guests to leave feedback on a popular review site and offer them an incentive. If you’d rather collect feedback directly, send them a link to a survey and be sure to answer any questions and address concerns quickly.
  7. Get the opt-in. SMS marketing is a great way to connect with customers, and the open rate for text messages often far exceeds that of email.

Ask for permission to send marketing messages, then craft a strategy that personalizes offers and earns repeat business.

SMS for recreational activities.

  1. Book through text messaging. Rich text messaging is a simple way to answer questions, book a reservation, and securely collect payments all in one place.
  2. Take special requests. SMS/text messaging is a convenient and private way for guests to ask about special accommodations like wheelchair accessibility, assistance for people who are hard of hearing, or private tours.
  3. Send links to helpful information. Don’t send guests hunting for information on your website. Send them links to details, like what type of attire participants should wear, dos and don’ts, parking information, and more.
  4. Send reminders. Email reminders get lost in all the itinerary bookings (and junk email) your customers are likely dealing with. Send reservation reminders and any up-to-the-minute notifications via text messaging.
  5. Suggest their next adventure. SMS messaging is a great marketing tool for small business operators, like tour guides, but it’s also easy to scale for larger operations. Once your guests have finished their activity, use text messaging to suggest their next adventure.

If they took a ghost tour of downtown, offer suggestions to other haunted hotspots. If they went on a guided hike, suggest kayaking or another outdoor activity. Personalize messages and include timely discounts to increase the next booking.

Disrupt with SMS hospitality messaging or be disrupted.

The time for hospitality text messaging is here. There’s endless opportunity for hotels, resorts, restaurants, and others within the hospitality segment to simplify and personalize the customer experience.

With new expectations born from the pandemic and an ever-increasing number of millennial and Gen Z travelers, it’s even more critical for the hospitality industry to embrace text messaging.

At Quiq, we help companies in the hospitality industry (and others) engage with guests in personal and meaningful ways. Our Conversational AI Platform makes it easy for customers to connect with your business, so you can provide the information they want in the way they want to receive it.

Connect with customers—and let them connect with you—using Quiq.