A Guide to Fine-Tuning Pretrained Language Models for Specific Use Cases

Over the past half-year, large language models (LLMs) like ChatGPT have proven remarkably useful for a wide range of tasks, including machine translation, code analysis, and customer interactions in places like contact centers.

For all this power and flexibility, however, it is often still necessary to use fine-tuning to get an LLM to generate high-quality output for specific use cases.

Today, we’re going to do a deep dive into this process, understanding how these models work, what fine-tuning is, and how you can leverage it for your business.

What is a Pretrained Language Model?

First, let’s establish some background context by tackling the question of what pretrained models are and how they work.

The “GPT” in ChatGPT stands for “generative pretrained transformer”, and this gives us a clue as to what’s going on under the hood. ChatGPT is a generative model, meaning its purpose is to create new output; it’s pretrained, meaning that it has already seen a vast amount of text data by the time end users like us get our hands on it; and it’s a transformer, which refers to the fact that it’s built out of billions of transformer modules stacked into layers.

If you’re not conversant in the history of machine learning it can be difficult to see what the big deal is, but pretrained models are a relatively new development. Once upon a time in the ancient past (i.e. 15 or 20 years ago), it was an open question as to whether engineers would be able to pretrain a single model on a dataset and then fine-tune its performance, or whether they would need to approach each new problem by training a model from scratch.

This question was largely resolved around 2013, when image models trained on the ImageNet dataset began sweeping competitions left and right. Since then it has become more common to use pretrained models as a starting point, but we want to emphasize that this approach does not always work. There remain a vast number of important projects for which building a bespoke model is the only way to go.

What is Transfer Learning?

Transfer learning refers to when an agent or system figures out how to solve one kind of problem and then uses this knowledge to solve a different kind of problem. It’s a term that shows up all over artificial intelligence, cognitive psychology, and education theory.

Author, chess master, and martial artist Josh Waitzkin captures the idea nicely in the following passage from his blockbuster book, The Art of Learning:

“Since childhood I had treasured the sublime study of chess, the swim through ever-deepening layers of complexity. I could spend hours at a chessboard and stand up from the experience on fire with insight about chess, basketball, the ocean, psychology, love, art.”

Transfer learning is a broader concept than pretraining, but the two ideas are closely related. In machine learning, competence can be transferred from one domain (generating text) to another (translating between natural languages or creating Python code) by pretraining a sufficiently large model.

What is Fine-Tuning A Pretrained Language Model?

Fine-tuning a pretrained language model occurs when the model is repurposed for a particular task by being shown illustrations of the correct behavior.

If you’re in a whimsical mood, for example, you might give ChatGPT a few dozen limericks so that its future output always has that form.

It’s easy to confuse fine-tuning with a few other techniques for getting optimum performance out of LLMs, so it’s worth getting clear on terminology before we attempt to give a precise definition of fine-tuning.

Fine-Tuning a Language Model v.s. Zero-Shot Learning

Zero-shot learning is whatever you get out of a language model when you feed it a prompt without making any special effort to show it what you want. It’s not technically a form of fine-tuning at all, but it comes up in a lot of these conversations so it needs to be mentioned.

(NOTE: It is sometimes claimed that prompt engineering counts as zero-shot learning, and we’ll have more to say about that shortly.)

Fine-Tuning a Language Model v.s. One-Shot Learning

One-shot learning is showing a language model a single example of what you want it to do. Continuing our limerick example, one-shot learning would be giving the model one limerick and instructing it to format its replies with the same structure.

Fine-Tuning a Language Model v.s. Few-Shot Learning

Few-shot learning is more or less the same thing as one-shot learning, but you give the model several examples of how you want it to act.

How many counts as “several”? There’s no agreed-upon number that we know about, but probably 3 to 5, or perhaps as many as 10. More than this and you’re arguably not doing “few”-shot learning anymore.

Fine-Tuning a Language Model v.s. Prompt Engineering

Large language models like ChatGPT are stochastic and incredibly sensitive to the phrasing of the prompts they’re given. For this reason, it can take a while to develop a sense of how to feed the model instructions such that you get what you’re looking for.

The emerging discipline of prompt engineering is focused on cultivating this intuitive feel. Minor tweaks in word choice, sentence structure, etc. can have an enormous impact on the final output, and prompt engineers are those who have spent the time to learn how to make the most effective prompts (or are willing to just keep tinkering until the output is correct).

Does prompt engineering count as fine-tuning? We would argue that it doesn’t, primarily because we want to reserve the term “fine-tuning” for the more extensive process we describe in the next few sections.

Still, none of this is set in stone, and others might take the opposite view.

Distinguishing Fine-Tuning From Other Approaches

Having discussed prompt engineering and zero-, one-, and few-shot learning, we can give a fuller definition of fine-tuning.

Fine-tuning is taking a pretrained language model and optimizing it for a particular use case by giving it many examples to learn from. How many you ultimately need will depend a lot on your task – particularly how different the task is from the model’s training data and how strict your requirements for its output are – but you should expect it to take on the order of a few dozen or a few hundred examples.

Though it bears an obvious similarity to one-shot and few-shot learning, fine-tuning will generally require more work to come up with enough examples, and you might have to build a rudimentary pipeline that feeds the examples in through the API. It’s almost certainly not something you’ll be doing directly in the ChatGPT web interface.

Contact Us

How Can I Fine-Tune a Pretrained Language Model?

Having gotten this far, we can now turn our attention to what the fine-tuning procedure actually consists in. The basic steps are: deciding what you’re wanting to accomplish, gather the requisite data (and formatting it correctly), feeding it to your model, and evaluating the results.

Let’s discuss each, in turn.

Deciding on Your Use Case

The obvious place to begin is figuring out exactly what it is you want to fine-tune a pretrained model to do.

It may seem as though this is too obvious to be included as its own standalone step, but we’ve singled it out is because you need to think through the specifics of what you’re trying to accomplish. It’s not enough to say “We want to fine-tune this model to write tweets for us”, you have to consider questions like “Should the tone by formal or informal” and “Are we okay with it adding emojis”.

This matters because it informs the data that you gather for fine-tuning. If you feed the model irreverent tweets with lots of 🙂s and 🚀s, you can expect that to show up in the tweets it generates for you.

Picking a Base Model

Next, you have to figure out which model you want to fine-tune. This is not the place to get into a discussion on all the “foundation” models that are available today, but suffice it to say that HuggingFace, OpenAI, Anthropic, and the other companies in the pretrained model space offer a variety of different options. BigScience has BLOOM and T0pp models, OpenAI has Ada, Babbage, Da Vinci, and Curie, and so on.

Different models have different fine-tuning requirements, so this choice will impact how you structure your pipeline later.

Gathering and Formatting the Fine-Tuning Data

The heart of fine-tuning a pretrained model is gathering the samples from which it will learn. This is more or less exactly what it sounds like: you compile a list of prompts and desired responses. This is the data that will adjust the model’s internal parameters, optimizing it for your use case.

AI educator Dave Shapiro has a tutorial on fine-tuning GPT-3 to be a tutor for younger students, and here is one of the training samples he used (“TIM” is the name he’s using for the tutor):

User: Hey TIM, I’m interested in learning more about Charles Darwin. Can you tell me a bit about him?

TIM: Charles Darwin was an English naturalist and geologist who developed the theory of evolution by natural selection. He was born into a wealthy family and was able to attend the prestigious Shrewsbury School. After graduation, he enrolled at the University of Edinburgh to study medicine, but he soon dropped out and enrolled at Christ’s College, Cambridge.

User: Interesting! I didn’t know he was born into a wealthy family.

TIM: Yes, his family was very wealthy and his father was a doctor. Darwin’s grandfather, Erasmus Darwin, was a well-known scientist and wrote several books on botany and evolution.

User: Wow, I had no idea. What made Darwin decide to drop out of medical school and study at Cambridge?

TIM: Darwin was interested in natural history and had developed a keen interest in taxidermy. After dropping out of medical school, he traveled to Wales with some friends to study geology. It was during this time that he began to develop his theory of evolution.

One thing you’ll note is that this sample is pretty long. It could take a while to come up with hundreds of these to fine-tune your model, but that’s actually something a generative pretrained language model like ChatGPT can help with. Just be sure you have a human go over the samples and check them for accuracy, or you risk compromising the quality of your outputs.

Another thing to think about is how you’ll handle adversarial behavior and edge cases. If you’re training a conversational AI chatbot for a contact center, for example, you’ll want to include plenty of instances of the model calmly and politely responding to an irate customer. That way, your output will be similarly calm and polite.

Lastly, you’ll have to format the fine-tuning data according to whatever specifications are required by the base model you’re using. It’ll probably be something similar to JSON, but check the documentation to be sure.

Feeding it to Your Model

Now that you’ve got your samples ready, you’ll have to give them to the model for fine-tuning. This will involve you feeding the examples to the model via its API and waiting until the process has finished.

What is the Difference Between Fine-Tuning and a Pretrained Model?

A pretrained model is one that has been previously trained on a particular dataset or task, and fine-tuning is getting that model to do well on a new task by showing it examples of the output you want to see.

Pretrained models like ChatGPT are often pretty good out of the box, but if you’re wanting it to create legal contracts or work with highly-specialized scientific vocabulary, you’ll likely need to fine-tune it.

Should You Fine-Tune a Pretrained Model For Your Business?

Generative pretrained language models like ChatGPT and Bard have already begun to change the way businesses like contact centers function, and we think this is a trend that is likely to accelerate in the years ahead.

If you’ve been intrigued by the possibility of fine-tuning a pretrained model to supercharge your enterprise, then hopefully the information contained in this article gives you some ideas on how to begin.

Another option is to leverage the power of the Quiq platform. We’ve built a best-in-class conversational AI system that can automate substantial parts of your customer interactions (without you needing to run your own models or set up a fine-tuning pipeline.)

To see how we can help, schedule a demo with us today!

Request A Demo

Brand Voice And Tone Building With Prompt Engineering

Artificial intelligence tools like ChatGPT are changing the way strategists are building their brands.

But with the staggering rate of change in the field, it can be hard to know how to utilize its full potential. Should you hire an engineering team? Pay for a subscription and do it yourself?

The truth is, it depends. But one thing you can try is prompt engineering, a term that refers to carefully crafting the instructions you give to the AI to get the best possible results.

In this piece, we’ll cover the basics of prompt engineering and discuss the many ways in which you can build your brand voice with generative AI.

What is Prompt Engineering?

As the name implies, generative AI refers to any machine learning (ML) model whose primary purpose is to generate some output. There are generative AI applications for creating new images, text, code, and music.

There are also ongoing efforts to expand the range of outputs generative models can handle, such as a fascinating project to build a high-level programming language for creating new protein structures.

The way you get output from a generative AI model is by prompting it. Just as you could prompt a friend by asking “How was your vacation in Japan,” you can prompt a generative model by asking it questions and giving it instructions. Here’s an example:

“I’m working on learning Java, and I want you to act as though you’re an experienced Java teacher. I keep seeing terms like `public class` and `public static void`. Can you explain to me the different types of Java classes, giving an example and explanation of each?”

When we tried this prompt with GPT-4, it responded with a lucid breakdown of different Java classes (i.e., static, inner, abstract, final, etc.), complete with code snippets for each one.

When Small Changes Aren’t So Small

Mapping the relationship between human-generated inputs and machine-generated outputs is what the emerging field of “prompt engineering” is all about.

Prompt engineering only entered popular awareness in the past few years, as a direct consequence of the meteoric rise of large language models (LLMs). It rapidly became obvious that GPT-3.5 was vastly better than pretty much anything that had come before, and there arose a concomitant interest in the best ways of crafting prompts to maximize the effectiveness of these (and similar) tools.

At first glance, it may not be obvious why prompt engineering is a standalone profession. After all, how difficult could it be to simply ask the computer to teach you Chinese or explain a coding concept? Why have a “prompt engineer” instead of a regular engineer who sometimes uses GPT-4 for a particular task?

A lot could be said in reply, but the big complication is the fact that a generative AI’s output is extremely dependent upon the input it receives.

An example pulled from common experience will make this clearer. You’ve no doubt noticed that when you ask people different kinds of questions you elicit different kinds of responses. “What’s up?” won’t get the same reply as “I notice you’ve been distant recently, does that have anything to do with losing your job last month?”

The same basic dynamic applies to LLMs. Just as subtleties in word choice and tone will impact the kind of interaction you have with a person, they’ll impact the kind of interaction you have with a generative model.

All this nuance means that conversing with your fellow human beings is a skill that takes a while to develop, and that also holds in trying to productively using LLMs. You must learn to phrase your queries in a way that gives the model good context, includes specific criteria as to what you’re looking for in a reply, etc.

Honestly, it can feel a little like teaching a bright, eager intern who has almost no initial understanding of the problem you’re trying to get them to solve. If you give them clear instructions with a few examples they’ll probably do alright, but you can’t just point them at a task and set them loose.

We’ll have much more to say about crafting the kinds of prompts that help you build your brand voice in upcoming sections, but first, let’s spend some time breaking down the anatomy of a prompt.

This context will come in handy later.

What’s In A Prompt?

In truth, there are very few real restrictions on how you use an LLM. If you ask it to do something immoral or illegal it’ll probably respond along the lines of “I’m sorry Dave, but as a large language model I can’t let you do that,” otherwise you can just start feeding it text and seeing how it responds.

That having been said, prompt engineers have identified some basic constituent parts that go into useful prompts. They’re worth understanding as you go about using prompt engineering to build your brand voice.

Context

First, it helps to offer the LLM some context for the task you want done. Under most circumstances, it’s enough to give it a sentence or two, though there can be instances in which it makes sense to give it a whole paragraph.

Here’s an example prompt without good context:

“Can you write me a title for a blog post?”

Most human beings wouldn’t be able to do a whole lot with this, and neither can an LLM. Here’s an example prompt with better context:

“I’ve just finished a blog post for a client that makes legal software. It’s about how they have the best payments integrations, and the tone is punchy, irreverent, and fun. Could you write me a title for the post that has the same tone?”

To get exactly what you’re looking for you may need to tinker a bit with this prompt, but you’ll have much better chances with the additional context.

Instructions

Of course, the heart of the matter is the actual instructions you give the LLM. Here’s the context-added prompt from the previous section, whose instructions are just okay:

“I’ve just finished a blog post for a client that makes legal software. It’s about how they have the best payments integrations, and the tone is punchy, irreverent, and fun. Could you write me a title for the post that has the same tone?”

A better way to format the instructions is to ask for several alternatives to choose from:

“I’ve just finished a blog post for a client that makes legal software. It’s about how they have the best payments integrations, and the tone is punchy, irreverent, and fun. Could you give me 2-3 titles for the blog post that have the same tone?”

Here again, it’ll often pay to go through a couple of iterations. You might find – as we did when we tested this prompt – that GPT-4 is just a little too irreverent (it used profanity in one of its titles.) If you feel like this doesn’t strike the right tone for your brand identity you can fix it by asking the LLM to be a bit more serious, or rework the titles to remove the profanity, etc.

You may have noticed that “keep iterating and testing” is a common theme here.

Example Data

Though you won’t always need to get the LLM input data, it is sometimes required (as when you need it to summarize or critique an argument) and is often helpful (as when you give it a few examples of titles you like.)

Here’s the reworked prompt from above, with input data:

“I’ve just finished a blog post for a client that makes legal software. It’s about how they have the best payments integrations, and the tone is punchy, irreverent, and fun. Could you give me 2-3 titles for the blog post that have the same tone?

Here’s a list of two titles that strike the right tone:
When software goes hard: dominating the legal payments game.
Put the ‘prudence’ back in ‘jurisprudence’ by streamlining your payment collections.”

Remember, LLMs are highly sensitive to what you give them as input, and they’ll key off your tone and style. Showing them what you want dramatically boosts the chances that you’ll be able to quickly get what you need.

Output Indicators

An output indicator is essentially any concrete metric you use to specify how you want the output to be structured. Our existing prompt already has one, and we’ve added another (both are bolded):

“I’ve just finished a blog post for a client that makes legal software. It’s about how they have the best payments integrations, and the tone is punchy, irreverent, and fun. Could you give me 2-3 titles for the blog post that have the same tone? Each title should be approximately 60 characters long.

Here’s a list of two titles that strike the right tone:
When software goes hard: dominating the legal payments game.
Put the ‘prudence’ back in ‘jurisprudence’ by streamlining your payment collections.”

As you go about playing with LLMs and perfecting the use of prompt engineering in building your brand voice, you’ll notice that the models don’t always follow these instructions. Sometimes you’ll ask for a five-sentence paragraph that actually contains eight sentences, or you’ll ask for 10 post ideas and get back 12.

We’re not aware of any general way of getting an LLM to consistently, strictly follow instructions. Still, if you include good instructions, clear output indicators, and examples, you’ll probably get close enough that only a little further tinkering is required.

What Are The Different Types of Prompts You Can Use For Prompt Engineering?

Though prompt engineering for tasks like brand voice and tone building is still in its infancy, there are nevertheless a few broad types of prompts that are worth knowing.

  • Zero-shot prompting: A zero-shot prompt is one in which you simply ask directly for what you want without providing any examples. It’ll simply generate an output on the basis of its internal weights and prior training, and, surprisingly, this is often more than sufficient.
  • One-shot prompting: With a one-shot prompt, you’re asking the LLM for output and giving it a single example to learn from.
  • Few-shot prompting: Few-shot prompts involve a least a few examples of expected output, as in the two titles we provided our prompt when we asked it for blog post titles.
  • Chain-of-thought prompting: Chain-of-thought prompting is similar to few-shot prompting, but with a twist. Rather than merely giving the model examples of what you want to see, you craft your examples such that they demonstrate a process of explicit reasoning. When done correctly, the model will actually walk through the process it uses to reason about a task. Not only does this make its output more interpretable, but it can also boost accuracy in domains at which LLMs are notoriously bad, like addition.

What Are The Challenges With Prompt Engineering For Brand Voice?

We don’t use the word “dazzling” lightly around here, but that’s the best way of describing the power of ChatGPT and the broader ecosystem of large language models.

You would be hard-pressed to find many people who have spent time with one and come away unmoved.

Still, challenges remain, especially when it comes to using prompt engineering for content marketing or building your brand voice.

One well-known problem is the tendency of LLMs to completely make things up, a phenomenon referred to as “hallucination”. The internet is now filled with examples of ChatGPT completely fabricating URLs, books, papers, professions, and individuals. If you use an LLM to create content for your website and don’t thoroughly vet it, you run the risk of damaging your reputation and your brand if it contains false or misleading information.

A related problem is legal or compliance issues that emerge as a result of using an LLM. Though the technology hasn’t been around long enough to get anyone into serious trouble (we suspect it won’t be long), there are now cases in which attorneys have been caught using faulty research generated by ChatGPT or engineering teams have leaked proprietary secrets by feeding meeting notes into it.

Finally, if you’re offering a fine-tuned model to customers to do something like answer questions, you must be very, very careful in delimiting its scope so that it doesn’t generate unwanted behavior. It’s pretty easy to accidentally wander into fraught territory when engaging with an LLM in an open-ended manner, and that’s not even counting users who deliberately try to get it to respond inappropriately.

One potential solution to this problem is by crafting your prompts such that they contain clear instructions about what not to do. You may tell it not to discuss its own rules, not to change its tone, not to speak negatively about anyone, not to argue, etc.

Crafting a prompt that illustrates the correct behavior while explicitly ruling out any incorrect behaviors is a non-trivial task, requiring a great deal of testing and refinement. But it’s one you’ll have to get right if you want to leverage LLMs for your brand voice while avoiding any possible harm down the line.

Prompt Engineering For Marketers

With all this context having been established, you’re no doubt excited about using prompt engineering in your marketing efforts. In this section, we’ll go through a couple of ways you can do that.

Coming up with ideas

One place where LLMs excel is in helping you with ideation. A lot goes into content marketing, but all of that follows from having a solid list of basic concepts to work from.

We tested ChatGPT’s functionality by feeding it this query:

“I’m starting a blog about becoming a data scientist. Could you give me a list of 10 blog post ideas, with a mix of beginner, intermediate, and advanced topics? The titles should be eye-catching but not too aggressive.

Here are two examples of good blog post titles:

‘Data science: what to know when considering a career in this hot new field.’
‘Data science v.s. data engineering: everything you wanted to know’”

And we got back some excellent ideas:

  1. “Demystifying Data Science: A Beginner’s Guide to Jumpstarting Your Career”
  2. “From Novice to Ninja: Essential Skills for Every Data Science Aspirant”
  3. “Unlocking the Power of Python: A Data Scientist’s Best Friend”
  4. “Exploratory Data Analysis: Unveiling Insights Hidden in the Numbers”
  5. “Machine Learning 101: A Gentle Introduction for Data Science Enthusiasts”
  6. “Feature Engineering: Enhancing Your Models Like a Pro”
  7. “The Art of Natural Language Processing: Extracting Meaning from Text”
  8. “Unsupervised Learning: Unveiling Patterns and Structure in Your Data”
  9. “Deep Dive into Neural Networks: Building Advanced Models for Predictive Analytics”
  10. “Data Science in the Real World: Tackling Complex Business Problems with Analytics”

And look, it even generated the correct number of titles! You’ll notice, though that every single one follows the same template, with an initial phrase, a colon, and then a follow-on phrase. That’s because the two examples I gave have this structure, and that’s what it used as a guide.

Performing Background Research

After you’re done coming up with good ideas for building your brand voice, you’ll likely want to do some additional background research on your domain. This, too, is a place where LLMs excel.

There can be a lot of subtlety to this. You might start with something obvious, like “give me a list of the top authors in the keto diet niche”, but you can also get more creative than this. We’ve heard of copywriters who have used GPT-3.5 to generate lengthy customer testimonials for fictional products, or diary entries for i.e. 40-year-old suburban dads who are into DIY home improvement projects.

Regardless, with a little bit of ingenuity, you can generate a tremendous amount of valuable research that can inform your attempts to develop a brand voice.

Be careful, though; this is one place where model hallucinations could be really problematic. Be sure to manually check a model’s outputs before using them for anything critical.

Generating Actual Content

Of course, one place where content marketers are using LLMs more often is in actually writing full-fledged content. We’re of the opinion that GPT-3.5 is still not at the level of a skilled human writer, but it’s excellent for creating outlines, generating email blasts, and writing relatively boilerplate introductions and conclusions.

Getting better at prompt engineering

Despite the word “engineering” in its title, prompt engineering remains as much an art as it is a science. Hopefully, the tips we’ve provided here will help you structure your prompts in a way that gets you good results, but there’s no substitute for practicing the way you interact with LLMs.

One way to approach this task is by paying careful attention to the ways in which small word choices impact the kinds of output generated. You could begin developing an intuitive feel for the relationship between input text and output text by simply starting multiple sessions with ChatGPT and trying out slight variations of prompts. If you really want to be scientific about it, copy everything over into a spreadsheet and look for patterns. Over time, you’ll become more and more precise in your instructions, just as an experienced teacher or manager does.

Prompt Engineering Can Help You Build Your Brand

Advanced AI models like ChatGPT are changing the way SEO, content marketing, and brand strategy are being done. From creating buyer personas to using chatbots for customer interactions, these tools can help you get far more work done with less effort.

But you have to be cautious, as LLMs are known to hallucinate information, change their tone, and otherwise behave inappropriately.

With the right prompt engineering expertise, these downsides can be ameliorated, and you’ll be on your way to building a strong brand. If you’re interested in other ways AI tools can take your business to the next level, schedule a demo of Quiq’s conversational CX platform today!

Contact Us

LLMs For the Enterprise: How to Protect Brand Safety While Building Your Brand Persona

It’s long been clear that advances in artificial intelligence change how businesses operate. Whether it’s extremely accurate machine translation, chatbots that automate customer service tasks, or spot-on recommendations for music and shows, enterprises have been using advanced AI systems to better serve their customers and boost their bottom line for years.

Today the big news is generative AI, with large language models (LLMs) in particular capturing the imagination. As we’d expect, businesses in many different industries are enthusiastically looking at incorporating these tools into their workflows, just as prior generations did for the internet, computers, and fax machines.

But this alacrity must be balanced with a clear understanding of the tradeoffs involved. It’s one thing to have a language model answer simple questions, and quite another to have one engaging in open-ended interactions with customers involving little direct human oversight.

If you have an LLM-powered application and it goes off the rails, it could be mildly funny, or it could do serious damage to your brand persona. You need to think through both possibilities before proceeding.

This piece is intended as a primer on effectively using LLMs for the enterprise. If you’re considering integrating LLMs for specific applications and aren’t sure how to weigh the pros and cons, it will provide invaluable advice on the different options available while furnishing the context you need to decide which is the best fit for you.

How Are LLMs Being Used in Business?

LLMs like GPT-4 are truly remarkable artifacts. They’re essentially gigantic neural networks with billions of internal parameters, trained on vast amounts of text data from books and the internet.

Once they’re ready to go, they can be used to ask and answer questions, suggest experiments or research ideas, write code, write blog posts, and perform many other tasks.

Their flexibility, in fact, has come as quite a surprise, which is why they’re showing up in so many places. Before we talk about specific strategies for integrating LLMs into your enterprise, let’s walk through a few business use cases for the technology.

Generating (or rewriting) text

The obvious use case is generating text. GPT-4 and related technologies are very good at writing generic blog posts, copy, and emails. But they’ve also proven useful in more subtle tasks, like producing technical documentation or explaining how pieces of code work.

Sometimes it makes sense to pass this entire job on to LLMs, but in other cases, they can act more like research assistants, generating ideas or taking human-generated bullet points and expanding on them. It really depends on the specifics of what you’re trying to accomplish.

Conversational AI

A subcategory of text generation is using an LLM as a conversational AI agent. Clients or other interested parties may have questions about your product, for instance, and many of them can be answered by a properly fine-tuned LLM instead of by a human. This is a use case where you need to think carefully about protecting your brand persona because LLMs are flexible enough to generate inappropriate responses to questions. You should extensively test any models meant to interact with customers and be sure your tests include belligerent or aggressive language to verify that the model continues to be polite.

Summarizing content

Another place that LLMs have excelled is in summarizing already-existing text. This, too, is something that once would’ve been handled by a human, but can now be scaled up to the greater speed and flexibility of LLMs. People are using LLMs to summarize everything from basic articles on the internet to dense scientific and legal documents (though it’s worth being careful here, as they’re known to sometimes include inaccurate information in these summaries.)

Answering questions

Though it might still be a while before ChatGPT is able to replace Google, it has become more common to simply ask it for help rather than search for the answer online. Programmers, for example, can copy and paste the error messages produced by their malfunctioning code into ChatGPT to get its advice on how to proceed. The same considerations around protecting brand safety that we mentioned in the ‘conversational AI’ section above apply here as well.

Classification

One way to get a handle on a huge amount of data is to use a classification algorithm to sort it into categories. Once you know a data point belongs in a particular bucket you already know a fair bit about it, which can cut down on the amount of time you need to spend on analysis. Classifying documents, tweets, etc. is something LLMs can help with, though at this point a fair bit of technical work is required to get models like GPT-3 to reliably and accurately handle classification tasks.

Sentiment analysis

Sentiment analysis refers to a kind of machine learning in which the overall tone of a piece of text is identified (i.e. is it happy, sarcastic, excited, etc.) It’s not exactly the same thing as classification, but it’s related. Sentiment analysis shows up in many customer-facing applications because you need to know how people are responding to your new brand persona or how they like an update to your core offering, and this is something LLMs have proven useful for.

What Are the Advantages of Using LLMs in Business?

More and more businesses are investigating LLMs for their specific applications because they confer many advantages to those that know how to use them.

For one thing, LLMs are extremely well-suited for certain domains. Though they’re still prone to hallucinations and other problems, LLMs can generate high-quality blog posts, emails, and general copy. At present, the output is usually still not as good as what a skilled human can produce.

But LLMs can generate text so quickly that it often makes sense to have the first draft created by a model and tweaked by a human, or to have relatively low-effort tasks (like generating headlines for social media) delegated to a machine so a human writer can focus on more valuable endeavors.

For another, LLMs are highly flexible. It’s relatively straightforward to take a baseline LLM like GPT-4 and feed it examples of behavior you want to see, such as generating math proofs in the form of poetry (if you’re into that sort of thing.) This can be done with prompt engineering or with a more sophisticated pipeline involving the model’s API, but in either case, you have the option of effectively pointing these general-purpose tools at specific tasks.

None of this is to suggest that LLMs are always and everywhere the right tool for the job. Still, in many domains, it makes sense to examine using LLMs for the enterprise.

What Are the Disadvantages of Using LLMs in Business?

For all their power, flexibility, and jaw-dropping speed, there are nevertheless drawbacks to using LLMs.

One disadvantage of using LLMs in business that people are already familiar with is the variable quality of their output. Sometimes, the text generated by an LLM is almost breathtakingly good. But LLMs can also be biased and inaccurate, and their hallucinations – which may not be a big deal for SEO blog posts – will be a huge liability if they end up damaging your brand.

Exacerbating this problem is the fact that no matter how right or wrong GPT-4 is, it’ll format its response in flawless, confident prose. You might expect a human being who doesn’t understand medicine very well to misspell a specialized word like “Umeclidinium bromide”, and that would offer you a clue that there might be other inaccuracies. But that essentially never happens with an LLM, so special diligence must be exercised in fact-checking their claims.

There can also be substantial operational costs associated with training and using LLMs. If you put together a team to build your own internal LLM you should expect to spend (at least) hundreds of thousands of dollars getting it up and running, to say nothing of the ongoing costs of maintenance.

Of course, you could also build your applications around API calls to external parties like OpenAI, who offer their models’ inferences as an endpoint. This is vastly cheaper, but it comes with downsides of its own. Using this approach means being beholden to another entity, which may release updates that dramatically change the performance of their models and materially impact your business.

Perhaps the biggest underlying disadvantage to using LLMs, however, is their sheer inscrutability. True, it’s not that hard to understand at a high level how models like GPT-4 are trained. But the fact remains that no one really understands what’s happening inside of them. It’s usually not clear why tiny changes to a prompt can result in such wildly different outputs, for example, or why a prompt will work well for a while before performance suddenly starts to decline.

Perhaps you just got unlucky – these models are stochastic, after all – or perhaps OpenAI changed the base model. You might not be able to tell, and either way, it’s hard to build robust, long-range applications around technologies that are difficult to understand and predict.

Contact Us

How Can LLMs Be Integrated Into Enterprise Applications?

If you’ve decided you want to integrate these groundbreaking technologies into your own platforms, there are two basic ways you can proceed. Either you can use a 3rd-party service through an API, or you can try to run your own models instead.

In the following two sections, we’ll cover each of these options and their respective tradeoffs.

Using an LLM through an API

An obvious way of leveraging the power of LLMs is by simply including API calls to a platform that specializes in them, such as OpenAI. Generally, this will involve creating infrastructure that is able to pass a prompt to an LLM and return its output.

If you’re building a user-facing chatbot through this method, that would mean that whenever the user types a question, their question is sent to the model and its response is sent back to the user.

The advantages of this approach are that they offer an extremely low barrier to entry, low costs, and fast response times. Hitting an API is pretty trivial as engineering tasks go, and though you’re charged per token, the bill will surely be less than it would be to stand up an entire machine-learning team to build your own model.

But, of course, the danger is that you’re relying on someone else to deliver crucial functionality. If OpenAI changes its terms of service or simply goes bankrupt, you could find yourself in a very bad spot.

Another disadvantage is that the company running the model may have access to the data you’re passing to its models. A team at Samsung recently made headlines when it was discovered they’d been plowing sensitive meeting notes and proprietary source code directly into ChatGPT, where both were viewable by OpenAI. You should always be careful about the data you’re exposing, particularly if it’s customer data whose privacy you’ve been entrusted to protect.

Running Your Own Model

The way to ameliorate the problems of accessing an LLM through an API is to either roll your own or run an open-source model in an environment that you control.

Building the kind of model that can compete with GPT-4 is really, really difficult, and it simply won’t be an option for any but the most elite engineering teams.

Using an open-source LLM, however, is a much more viable option. There are now many such models for text or code generation, and they can be fine-tuned for the specifics of your use case.

By and large, open-source models tend to be smaller and less performant than their closed-source cousins, so you’ll have to decide whether they’re good enough for you. And you should absolutely not underestimate the complexity of maintaining an open-sourced LLM. Though it’s nowhere near as hard as training one from scratch, maintaining an advanced piece of AI software is far from a trivial task.

All that having been said, this is one path you can take if you have the right applications in mind and the technical skills to pull it off.

How to Protect Brand Safety While Building Your Brand Persona

Throughout this piece, we’ve made mention of various ways in which LLMs can help supercharge your business while also warning of the potential damage a bad LLM response can do to your brand.

At present, there is no general-purpose way of making sure an LLM only does good things while never doing bad things. They can be startlingly creative, and with that power comes the possibility that they’ll be creative in ways you’d rather them not be (same as children, we suppose.)

Still, it is possible to put together an extensive testing suite that substantially reduces the possibility of a damaging incident. You need to feed the model many different kinds of interactions, including ones that are angry, annoyed, sarcastic, poorly spelled or formatted, etc., to see how it behaves.

What’s more, this testing needs to be ongoing. It’s not enough to run a test suite one weekend and declare the model fit for use, it needs to be periodically re-tested to ensure no bad behavior has emerged.

With these techniques, you should be able to build a persona as a company on the cutting edge while protecting yourself from incidents that damage your brand.

What Is the Future of LLMs and AI?

The business world moves fast, and if you’re not keeping up with the latest advances you run the risk of being left behind. At present, large language models like GPT-4 are setting the world ablaze with discussions of their potential to completely transform fields like customer experience chatbots.

If you want in on the action and you have the in-house engineering expertise, you could try to create your own offering. But if you would rather leverage the power of LLMs for chat-based applications by working with a world-class team that’s already done the hard engineering work, reach out to Quiq to schedule a demo.

Request A Demo

Semi-Supervised Learning Explained (With Examples)

From movie recommendations to chatbots as customer service reps, it seems like machine learning (ML) is absolutely everywhere. But one thing you may not realize is just how much data is required to train these advanced systems, and how much time and energy goes into formatting that data appropriately.

Machine learning engineers have developed many ways of trying to cut down on this bottleneck, and one of the techniques that have emerged from these efforts is semi-supervised learning.

Today, we’re going to discuss semi-supervised learning, how it works, and where it’s being applied.

What is Semi-Supervised Learning?

Semi-supervised learning (SSL) is an approach to machine learning (ML) that is appropriate for tasks where you have a large amount of data that you want to learn from, only a fraction of which is labeled.

Semi-supervised learning sits somewhere between supervised and unsupervised learning, and we’ll start by understanding these techniques because that will make it easier to grasp how semi-supervised learning works.

Supervised learning refers to any ML setup in which a model learns from labeled data. It’s called “supervised” because the model is effectively being trained by showing it many examples of the right answer.

Suppose you’re trying to build a neural network that can take a picture of different plant species and classify them. If you give it a picture of a rose it’ll output the “rose” label, if you give it a fern it’ll output the “fern” label, and so on.

The way to start training such a network is to assemble many labeled images of each kind of plant you’re interested in. You’ll need dozens or hundreds of such images, and they’ll each need to be labeled by a human.

Then, you’ll assemble these into a dataset and train your model on it. What the neural network will do is learn some kind of function that maps features in the image (the concentrations of different colors, say, or the shape of the stems and leaves) to a label (“rose”, “fern”.)

One drawback to this approach is that it can be slow and extremely expensive, both in funds and in time. You could probably put together a labeled dataset of a few hundred plant images in a weekend, but what if you’re training something more complex, where the stakes are higher? A model trained to spot breast cancer from a scan will need thousands of images, perhaps tens of thousands. And not just anyone can identify a cancerous lump, you’ll need a skilled human to look at the scan to label it “cancerous” and “non-cancerous.”

Unsupervised learning, by contrast, requires no such labeled data. Instead, an unsupervised machine learning algorithm is able to ingest data, analyze its underlying structure, and categorize data points according to this learned structure.

Semi-supervised learning

Okay, so what does this mean? A fairly common unsupervised learning task is clustering a corpus of documents thematically, and let’s say you want to do this with a bunch of different national anthems (hey, we’re not going to judge you for how you like to spend your afternoons!).

A good, basic algorithm for a task like this is the k-means algorithm, so-called because it will sort documents into k categories. K-means begins by randomly initializing k “centroids” (which you can think of as essentially being the center value for a given category), then moving these centroids around in an attempt to reduce the distance between the centroids and the values in the clusters.

This process will often involve a lot of fiddling. Since you don’t actually know the optimal number of clusters (remember that this is an unsupervised task), you might have to try several different values of k before you get results that are sensible.

To sort our national anthems into clusters you’ll have to first pre-process the text in various ways, then you’ll run it through the k-means clustering algorithm. Once that is done, you can start examining the clusters for themes. You might find that one cluster features words like “beauty”, “heart” and “mother”, another features words like “free” and “fight”, another features words like “guard” and “honor”, etc.

As with supervised learning, unsupervised learning has drawbacks. With a clustering task like the one just described, it might take a lot of work and multiple false starts to find a value of k that gives good results. And it’s not always obvious what the clusters actually mean. Sometimes there will be clear features that distinguish one cluster from another, but other times they won’t correspond to anything that’s easily interpretable from a human perspective.

Semi-supervised learning, by contrast, combines elements of both of these approaches. You start by training a model on the subset of your data that is labeled, then apply it to the larger unlabeled part of your data. In theory, this should simultaneously give you a powerful predictive model that is able to generalize to data it hasn’t seen before while saving you from the toil of creating thousands of your own labels.

How Does Semi-Supervised Learning Work?

We’ve covered a lot of ground, so let’s review. Two of the most common forms of machine learning are supervised learning and unsupervised learning. The former tends to require a lot of labeled data to produce a useful model, while the latter can soak up a lot of hours in tinkering and yield clusters that are hard to understand. By training a model on a labeled subset of data and then applying it to the unlabeled data, you can save yourself tremendous amounts of effort.

But what’s actually happening under the hood?

Three main variants of semi-supervised learning are self-training, co-training, and graph-based label propagation, and we’ll discuss each of these in turn.

Self-training

Self-training is the simplest kind of semi-supervised learning, and it works like this.

A small subset of your data will have labels while the rest won’t have any, so you’ll begin by using supervised learning to train a model on the labeled data. With this model, you’ll go over the unlabeled data to generate pseudo-labels, so-called because they are machine-generated and not human-generated.

Now, you have a new dataset; a fraction of it has human-generated labels while the rest contains machine-generated pseudo-labels, but all the data points now have some kind of label and a model can be trained on them.

Co-training

Co-training has the same basic flavor as self-training, but it has more moving parts. With co-training you’re going to train two models on the labeled data, each on a different set of features (in the literature these are called “views”.)

If we’re still working on that plant classifier from before, one model might be trained on the number of leaves or petals, while another might be trained on their color.

At any rate, now you have a pair of models trained on different views of the labeled data. These models will then generate pseudo-labels for all the unlabeled datasets. When one of the models is very confident in its pseudo-label (i.e., when the probability it assigns to its prediction is very high), that pseudo-label will be used to update the prediction of the other model, and vice versa.

Let’s say both models come to an image of a rose. The first model thinks it’s a rose with 95% probability, while the other thinks it’s a tulip with a 68% probability. Since the first model seems really sure of itself, its label is used to change the label on the other model.

Think of it like studying a complex subject with a friend. Sometimes a given topic will make more sense to you, and you’ll have to explain it to your friend. Other times they’ll have a better handle on it, and you’ll have to learn from them.

In the end, you’ll both have made each other stronger, and you’ll get more done together than you would’ve done alone. Co-training attempts to utilize the same basic dynamic with ML models.

Graph-based semi-supervised learning

Another way to apply labels to unlabeled data is by utilizing a graph data structure. A graph is a set of nodes (in graph theory we call them “vertices”) which are linked together through “edges.” The cities on a map would be vertices, and the highways linking them would be edges.

If you put your labeled and unlabeled data on a graph, you can propagate the labels throughout by counting the number of pathways from a given unlabeled node to the labeled nodes.

Imagine that we’ve got our fern and rose images in a graph, together with a bunch of other unlabeled plant images. We can choose one of those unlabeled nodes and count up how many ways we can reach all the “rose” nodes and all the “fern” nodes. If there are more paths to a rose node than a fern node, we classify the unlabeled node as a “rose”, and vice versa. This gives us a powerful alternative means by which to algorithmically generate labels for unlabeled data.

Contact Us

Semi-Supervised Learning Examples

The amount of data in the world is increasing at a staggering rate, while the number of human-hours available for labeling it all is increasing at a much less impressive clip. This presents a problem because there’s no end to the places where we want to apply machine learning.

Semi-supervised learning presents a possible solution to this dilemma, and in the next few sections, we’ll describe semi-supervised learning examples in real life.

  • Identifying cases of fraud: In finance, semi-supervised learning can be used to train systems for identifying cases of fraud or extortion. Rather than hand-labeling thousands of individual instances, engineers can start with a few labeled examples and proceed with one of the semi-supervised learning approaches described above.
  • Classifying content on the web: The internet is a big place, and new websites are put up all the time. In order to serve useful search results it’s necessary to classify huge amounts of this web content, which can be done with semi-supervised learning.
  • Analyzing audio and images: This is perhaps the most popular use of semi-supervised learning. When audio files or image files are generated they’re often not labeled, which makes it difficult to use them for machine learning. Beginning with a small subset of human-labeled data, however, this problem can be overcome.

How Is Semi-Supervised Learning Different From…?

With all the different approaches to machine learning, it can be easy to confuse them. To make sure you fully understand semi-supervised learning, let’s take a moment to distinguish it from similar techniques.

Semi-Supervised Learning vs Self-Supervised Learning

With semi-supervised learning you’re training a model on a subset of labeled data and then using this model to process the unlabeled data. Self-supervised learning is different in that it’s showing an algorithm some fraction of the data (say the first 80 words in a paragraph) and then having it predict the remainder (the other 20 words in a paragraph.)

Self-supervised learning is how LLMs like GPT-4 are trained.

Semi-Supervised Learning vs Reinforcement Learning

One interesting subcategory of ML we haven’t discussed yet is reinforcement learning (RL). RL involves leveraging the mathematics of sequential decision theory (usually a Markov Decision Process) to train an agent to interact with its environment in a dynamic, open-ended way.

It bears little resemblance to semi-supervised learning, and the two should not be confused.

Semi-Supervised Learning vs Active Learning

Active learning is a type of semi-supervised learning. The big difference is that, with active learning, the algorithm will send its lowest-confidence pseudo-labels to a human for correction.

When Should You Use Semi-Supervised Learning?

Semi-supervised learning is a way of training ML models when you only have a small amount of labeled data. By training the model on just the labeled subset of data and using it in a clever way to label the rest, you can avoid the difficulty of having a human being label everything.

There are many situations in which semi-supervised learning can help you make use of more of your data. That’s why it has found widespread use in domains as diverse as document classification, fraud, and image identification.

So long as you’re considering ways of using advanced AI systems to take your business to the next level, check out our generative AI resource hub to go even deeper. This technology is changing everything, and if you don’t want to be left behind, set up a time to talk with us.

Request A Demo

How Large Language Models Have Evolved

It seems as though large language models (LLMs) exploded into public awareness almost overnight. Relatively few people had heard of GPT-2, but I would venture to guess relatively few people haven’t heard of ChatGPT.

But like most things, language models have a history. And, in addition to being outrageously interesting, that history can help us reason about the progress in LLMs, as well as their likely future impacts.

Let’s get started!

A Brief History of Artificial Intelligence Development

The human fascination with building artificial beings capable of thought and action goes back a long way. Writing in roughly the 8th century BCE, Homer recounts tales of the god Hephaestus outsourcing repetitive manual tasks to automated bellows and working alongside robot-like “attendants” that were “…golden, and in appearance like living young women.”

No mere adornments, these handmaidens were described as having “intelligence in their hearts” and stirring “nimbly in support of their master” because “from the immortal gods they have learned how to do things.”

Some 500 years later, mathematicians in Alexandria would produce treatises on creating mechanical servants and various kinds of automata. Heron wrote a technical manual for producing a mechanical shrine and an automated theatre whose figurines could be activated to stage a full tragic play through an intricate system of cords and axles.

Nor is it only ancient Greece that tells similar tales. Jewish legends speak of the Golem, a being made of clay and imbued with life and agency through the use of language. The word “abracadabra”, in fact, comes from the Aramaic phrase avra k’davra, which translates to “I create as I speak.”

Through the ages, these old ideas have found new expression in stories such as “The Sorcerer’s Apprentice”, Mary Shelley’s “Frankenstein”, and Karel Čapek’s “R.U.R.”, a science fiction play that features the first recorded use of the word “robot”.

From Science Fiction to Science Fact

But they remained purely fiction until the early 20th Century, when advances in the theory of computation, as well as the development of primitive computers, began to offer a path toward actually building intelligent systems.

Arguably, the field of artificial intelligence really began in earnest with the 1950 publication of Alan Turing’s “Computing Machinery and Intelligence” – in which he proposed the famous “Turing test” – and with the 1956 Dartmouth conference on AI, organized by luminaries John McCarthy and Marvin Minsky.

People began taking AI seriously. Over the next ~50 years, there were numerous periods of hype and exuberance in which major advances were made, as well as long stretches, known as “AI winters”, in which funding dried up and little was accomplished.

Neural networks and the deep learning revolution are two advances that are particularly important for understanding how large language models have evolved over time, so it’s to these that we now turn.

Neural Networks And The Deep Learning Revolution

The groundwork for future LLM systems was laid by Walter Pitts and Warren McCulloch in the early 1940s. Inspired by the burgeoning study of the human brain, they wondered if it would be possible to build an artificial neuron that had the same basic properties as a biological one, i.e. it would activate and fire once a certain critical threshold had been crossed.

They were successful, though several other breakthroughs would be required before artificial neurons could be arranged into systems that were capable of doing useful work. One such breakthrough was backpropagation, the basic algorithm that is still used to train deep learning systems. Backpropagation was developed in 1960, and it uses the errors in a model’s outputs to iteratively adjust its internal parameters.

It wasn’t until 1985, however, that David Rumelhart, Ronald Williams, and Geoff Hinton used backpropagation in neural networks, and in 1989, this allowed Yann LeCun to train a convolutional neural network to recognize handwritten digits.

This was not the only architectural improvement that came out of this period. Especially noteworthy were the long short-term memory (LSTM) networks that were introduced in 1997 by Sepp Hochreiter and Jürgen Schmidhuber, which made it possible to learn more complex functions.

With these advances, it was clear that neural networks could be trained to do useful work, and that they were poised to do so. All that was left was to gather the missing piece: data.

The Big Data Era

Neural networks and deep-learning applications tend to be extremely data-hungry, and access to quality training data has always been a major bottleneck. In 2009 Stanford’s Fei-Fei Li sought to change this by releasing Imagenet, a database of over 14 million labeled images that could be used for free by researchers. The increase in available data, together with substantial improvements in computer hardware like graphical processing units (GPUs), meant that at long last the promise of deep learning could begin to be fulfilled.

And it was. In 2011 a convolutional neural network called “AlexNet” won multiple international competitions for image recognition, IBM’s Watson system beat several Jeopardy! all-stars in a real game, and Apple launched Siri. Amazon’s Alexa followed in 2014, and from 2015 to 2017 DeepMind’s AlphaGo shocked the world by utterly dominating the best human Go players.

Substantial strides were made in language models. In 2018 Google introduced its Bidirectional Encoder Representations from Transformers (BERT), a pre-trained model capable of a wide array of tasks, like text summarization, translation, and sentiment analysis.

One Model To Rule Them All

It would be easy to miss the significance of AlexNet’s performance on the ImageNet competition or BERT’s usefulness across multiple tasks. For a long time, it was anyone’s guess as to whether it would be possible to train a single large model on a dataset and use it for a range of purposes, or whether it would be necessary to train a multitude of models for each application.

From 2011 onwards, it has become clear that large, general-purpose models are often the best way to go. This point has only become more reinforced, with the success of GPT-4 in everything from brainstorming scientific hypotheses to handling customer service tasks.

Contact Us

How Has Large Language Model Performance Improved?

Now that we’ve discussed this history, we’re well-placed to understand why LLMs and generative AI have ignited so much controversy. People have been mulling over the promise (and peril) of thinking machines for literally thousands of years. After all that time it looks like they might be here, at long last.

But what, exactly, has people so excited? What is it that advanced AI tools are doing that has captured the popular imagination? In the following sections, we’ll talk about the astonishing (and astonishingly rapid) improvements that have been seen in language models in just a few short years.

Getting To Human-Level

One of the more surprising things about LLMs such as ChatGPT is just how good they are at so many different things. LLMs are trained with a technique known as “self-supervised learning”. They take random samples of the text data they’re given, and they try to predict what words come next given the words that came before.

Suppose the model sees the famous opening lines of Leo Tolstoy’s Ann Karenina: “Happy families are all alike; unhappy families are all unhappy in their own way.” What the model is trying to do is learn a function that will allow it to predict “in their own way” from “Happy families are all alike; unhappy families are all unhappy ___”.

The modern crop of LLMs can do this incredibly well, but what is remarkable is just how far this gets you. People are using generative AI to help them write poems, business plans, and code, create recipes based on the ingredients in their fridges, and answer customer questions.

Emergence in Language Models

Perhaps even more interesting, however, is the phenomenon of “emergence” in language models. When researchers tested LLMs on a wide variety of tasks meant to be especially challenging to these models – things like identifying a movie given a string of emojis or finding legal chess moves – they found that in about 5% of tasks, there is a sudden, sharp increase in ability on a given task once a model reaches a certain size.

At present, it’s not really clear how we should think about emergence. One hypothesis for emergence is that a big enough model is able to learn some general piece of knowledge not attainable by a smaller cousin, while another, more prosaic one is that it’s a relatively straightforward consequence of the model’s internal statistical machinery.

What’s more, it’s difficult to pin down the conditions required for emergence in language models. Though it generally appears to be a function of model size, there are cases in which the same abilities can be achieved with smaller models, or with models trained on very high-quality data, and emergence shows up at different scales for different models and tasks.

Whatever ends up being the case, it’s clear that this is a promising direction for future research. Much more work needs to be done to understand how precisely LLMs accomplish what they accomplish. This will not only redound upon the question of emergence, it will also inform the ongoing efforts to make language models safer and less biased.

The GPT Series

The big recent news in AI has, of course, been ChatGPT. ChatGPT has proven useful in an astonishingly-wide variety of use cases and is among the first powerful systems to have been made widely available to the public.

ChatGPT is part of a broader series of GPT models built by OpenAI. “GPT” stands for “generative pre-trained transformer”, and the first of its kind was developed back in 2018. New models and major updates have been released at a rapid clip ever since, culminating with GPT-4 coming out in March of 2023.

At present, OpenAI’s CEO Sam Altman has claimed that there are no current plans to train a successor GPT-5 model, but there are other companies, like DeepMind, who could plausibly build a competitor.

What’s Next For Large Language Models?

Given their flexibility and power, LLMs are finding use across a wide variety of industries, from software engineering to medicine to customer service.

If your interest has been piqued and you’d like to talk to an expert at Quiq about incorporating it into your business, reach out to us to schedule a demo!

Request A Demo

Before You Develop a Mobile App For Your Business—Read This

Remember when every business was coming out with an app? Your favorite clothing brand, that big retail chain, your neighborhood grocery store, and even your babysitter jumped on the bandwagon and claimed real estate on their customers’ mobile devices.

It probably made you think: Do we need an app for our business?

Despite the many benefits of an app, diving headfirst into development can drain your team’s time and resources without the guarantee of a return. Done poorly, it can even hinder your customer experience. Before you do any mobile app development, you need a plan.

This article will take you through some of the lessons learned from working with brands that deliver world-class experiences within apps and beyond.

Why do companies build apps?

Apps are powerful marketing tools for all kinds of businesses—and none more than e-commerce. Here are some of the top reasons why businesses build an app.

A place for loyal customers.

Almost by default, a mobile app is an exclusive space for your loyal customers. Think about the last time you downloaded an app. It probably wasn’t for a business you buy from once a year. It’s almost always a brand you follow closely or a service you use frequently.

Providing an app is basically like creating a direct line of communication with your best customers. You can create exclusive content, provide a better shopping experience, and unlock early access to products and services. Apps are great ways to turn good customers into great ones.

Mobile device real estate.

On average, Americans check their phones 344 times per day—or once every 4 minutes. And 88% of the time we spend on our phones is spent in apps, according to Business Insider. Having your brand logo as an icon on your customers’ home screens is invaluable real estate.

Push notifications.

When customers have push notifications turned on, it’s another way to speak directly to your customers. Push notifications are great engagement tools, and you can connect with customers using timely and personalized communications and ultimately drive in-app sales.

Beating out or keeping up with competitors.

Standing out from the competition is another reason many businesses build apps. If your competitors are using apps to stand out from the crowd, then it often compels businesses to do the same.

Contact Us

What are the drawbacks of using building an app?

While mobile apps are still extremely popular, they have some major drawbacks for brands not ready to invest in them.

Phones are overcrowded.

Whereas building an app five years ago meant you stood out from the crowd, now you’re just one of many. People have an average of 80 apps on their phones, but they’re only using around nine a day.

Basically, that means mobile users are downloading apps and not using them on a regular basis. In fact, 25% of apps are used once and then never opened again, according to Statista.

Having an app doesn’t guarantee your customers’ attention or engagement—that’s still up to your marketing team.

There’s a big upfront investment.

Whether you enlist the help of your development team or outsource app creation, it’s a big lift. Getting a mobile app up and running takes significant resources, and while there may be a return on investment, it isn’t guaranteed.

When you’re already overwhelmed with your current development efforts, adding another microsite to manage could just make it worse.

You’ll double your marketing efforts.

More push notifications, more campaigns, more content. An app just means you have to do more to see an increase in revenue. While it could be a valuable asset, there are other, smaller steps you can take that will help you see the same revenue boost without the exponential effort.

Can you deliver rich customer experiences without an app?

Yes! But don’t think we’re anti-app. In fact, a lot of our clients create great apps that are sticky because they provide ongoing value to their customers. These clients are able to reach a whole set of people in their moment of need and build trust as they continue to look to the app for help.

However, many of the marketing and customer service goals that drive businesses to create an app can be achieved through rich business messaging. Here are a few examples.

Want to speak directly to your customers? Try outbound SMS.

Push notifications are extremely effective at connecting with customers, but it only takes a few taps to turn them off.

A similar communication method is outbound SMS messaging. You can personalize messages and deliver real-time communications via text messaging. Plus, with rich messaging capabilities, you can send interactive media like images, cards, emojis, and videos to enhance every conversation.

Want to engage with your customers? Use Google Business Messages.

Get customers from Google directly in communication with your customer service agents using Google Business Messages.

Customers can tap a message button right from Google search to connect with your team. (And since 92% of searches start with Google, there’s a good chance your customers will take advantage of this feature.)

Want to enhance your customer experience? Use Apple Messages for Business.

If you’re after a branded experience and want to meet user expectations, Apple Messages for Business delivers. Apple device users can simply tap the message icon from Maps, Siri, Safari, Spotlight, or your company’s website and instantly connect with your team.

You’ll deliver a rich messaging experience, plus your branding upfront and center. Your company name, logo, and colors will be featured in the messaging app, delivering a fully branded experience for your customers.

Want to be more social? Connect Quiq with social platforms.

Clients using Quiq are uniquely equipped with a conversational engagement platform that provides rich experiences to users across chat and business messaging channels.

This means that companies can provide content-rich, personalized experiences across SMS/text business messaging, web chat, Facebook, Twitter, Instagram, and WhatsApp.

Your brand can be on social platforms without working across them. Quiq gives your team access to all these messaging channels within one easy-to-use message center. So, unlike an app, adding more channels doesn’t necessarily increase the workload. It just gives your customers more ways to connect with you.

Should you consider business messaging over an app?

There’s no either/or choice here. Both can be part of a thriving marketing and customer service strategy. But if you’re looking for a way to engage your customers and haven’t tried business messaging—start there.

If you’re on the fence, consider this:

  1. You don’t have to build an app—you only have to implement business messaging.
  2. Customers don’t have to download and learn anything to connect with you. Business messaging is right there in communication channels they already know and love, like texting and social media.

Engage customers with or without an app.

The main goal of most apps is to help build long-term relationships with customers. Whether you choose to build an app or not, business messaging supports this goal by providing information, support, and help at the customer’s exact moment of need.

Quiq powers conversations between customers and companies across the most convenient and preferred engagement channels. With Quiq, you’ll have meaningful, timely, and personalized conversations with your customers that can be easily managed in a simplified UI.

Ready to see how business messaging can help you engage your customers with or without an app? Request a demo or try it for yourself today.

Are Generative AI And Large Language Models The Same Thing?

The release of ChatGPT was one of the first times an extremely powerful AI system was broadly available, and it has ignited a firestorm of controversy and conversation.

Proponents believe current and future AI tools will revolutionize productivity in almost every domain.

Skeptics wonder whether advanced systems like GPT-4 will even end up being all that useful.

And a third group believes they’re the first sparks of artificial general intelligence and could be as transformative for life on Earth as the emergence of homo sapiens.

Frankly, it’s enough to make a person’s head spin. One of the difficulties in making sense of this rapidly-evolving space is the fact that many terms, like “generative AI” and “large language models” (LLMs), are thrown around very casually.

In this piece, our goal is to disambiguate these two terms by discussing ​​the differences between generative AI vs. large language models. Whether you’re pondering deep questions about the nature of machine intelligence, or just trying to decide whether the time is right to use conversational AI in customer-facing applications, this context will help.

Let’s get going!

What Is Generative AI?

Of the two terms, “generative AI” is broader, referring to any machine learning model capable of dynamically creating output after it has been trained.

This ability to generate complex forms of output, like sonnets or code, is what distinguishes generative AI from linear regression, k-means clustering, or other types of machine learning.

Besides being much simpler, these models can only “generate” output in the sense that they can make a prediction on a new data point.

Once a linear regression model has been trained to predict test scores based on number of hours studied, for example, it can generate a new prediction when you feed it the hours a new student spent studying.

But you couldn’t use prompt engineering to have it help you brainstorm the way these two values are connected, which you can do with ChatGPT.

There are many types of generative AI, so let’s spend a few minutes discussing the major categories: image generation, music generation, code generation, and a few others.

How Is Generative AI Used To Make Images?

One of the first “wow” moments in generative AI came fairly recently when it was discovered that tools like Midjourney, DALL-E, and Stable Diffusion could create absolutely stunning images based on simple prompts like:

“Old man in a book store, ambient dappled sunlight, sedate, calm, close-up portrait.”

Depending on the wording you use, these images might be whimsical and futuristic, they might look like paintings from world-class artists, or they might look so photo-realistic you’d be convinced they’re about to start talking.

Created using DALL-E

Each of these tools is suited to specific applications. Midjourney seems to be best at capturing different artistic approaches and generating images that accurately capture an aesthetic. DALL-E tends to do better at depicting human figures, including faces and eyes. Stable Diffusion seems to do well at generating highly-detailed outputs, capturing subtleties like the way light reflects on a rain-soaked street.

(Note: these are all general impressions, it’s difficult to know how the tools will compare on any specific prompt.)

Broadly, this is known as “image synthesis”. And since we’re talking specifically about making images from text, this sub-domain is known as “text-to-image.”

A variant of this technique is text-to-video (alternatively: “text-to-4d”), which produces short clips or scenes based on text prompts. While text-to-video is still much more primitive than text-to-image, it will get better very quickly if recent progress in AI is any guide.

One interesting wrinkle in this story is that generative algorithms have generated something else along with images and animations: legal battles.

Earlier this year, Getty Images filed a lawsuit against the creators of Stable Diffusion, alleging that they trained their algorithm on millions of images from the Getty collection without getting permission first or compensating Getty in any way.

This has raised many profound questions about data rights, privacy, and how (or whether) people should be paid when their work is used to train a model that might eventually automate them out of a job.

We’re still in the early days of grappling with these issues, but they’re sure to make for fascinating case law in the years ahead.

How Is Generative AI Used To Make Music?

Given how successful advanced models have been in generating text (more on that shortly), it’s only natural to wonder whether similar models could also prove useful in generating music.

This is especially true because, on the surface, text and music share many obvious similarities (both are sequential, for example.) It would make sense, therefore, that the technical advances that have allowed coherent text production might also allow for coherent music production.

And they have! There are now a number of different tools, such as MusicLM, which are able to generate fairly high-quality audio tracks from prompts like:

“The main soundtrack of an arcade game. It is fast-paced and upbeat, with a catchy electric guitar riff. The music is repetitive and easy to remember, but with unexpected sounds, like cymbal crashes or drum rolls.”

As with using generative AI in images, creating artificial musical tracks in the style of popular artists has already sparked legal controversies. A particularly memorable example occurred just recently when a TikTok user supposedly created an AI-generated collaboration between Drake and The Weeknd, which then promptly went viral.

The track was removed from all major streaming services in response to backlash from artists and record labels, but it’s clear that ai music generators are going to change the way art is created in a major way.

How Is Generative AI Used For Coding?

It’s long been the dream of both programmers and non-programmers to simply be able to provide a computer with natural-language instructions (“build me a cool website”) and have the machine handle the rest. It would be hard to overstate the explosion in creativity and productivity this would initiate.

With the advent of code-generation models such as Replit’s Ghostwriter and GitHub Copilot, we’ve taken one more step towards that halcyon world.

As is the case with other generative models, code-generation tools are usually trained on massive amounts of data, after which point they’re able to take simple prompts and produce code from them.

You might ask it to write a function that converts between several different coordinate systems, create a web app that measures BMI, or translate from Python to Javascript.

As things stand now, the code is often incomplete in small ways. It might produce a function that takes an argument as input that is never used, for example, or which lacks a return function. Still, it is remarkable what has already been accomplished.

There are now software developers who are using models like ChatGPT all day long to automate substantial portions of their work, to understand new codebases with which they’re unfamiliar, or to write comments and unit tests.

Contact Us

What Are Large Language Models?

Now that we’ve covered generative AI, let’s turn our attention to large language models (LLMs).

LLMs are a particular type of generative AI.

Unlike with MusicLM or DALL-E, LLMs are trained on textual data and then used to output new text, whether that be a sales email or an ongoing dialogue with a customer.

(A technical note: though people are mostly using GPT-4 for text generation, it is an example of a “multimodal” LLM because it has also been trained on images. According to OpenAI’s documentation, image input functionality is currently being tested, and is expected to roll out to the broader public soon.)

What Are Examples of Large Language Models?

By far the most well-known example of an LLM is OpenAI’s “GPT” series, the latest of which is GPT-4. The acronym “GPT” stands for “Generative Pre-Trained Transformer”, and it hints at many underlying details about the model.

GPT models are based on the transformer architecture, for example, and they are pre-trained on a huge corpus of textual data taken predominately from the internet.

GPT, however, is not the only example of an LLM.

The BigScience Large Open-science Open-access Multilingual Language Model – known more commonly by its mercifully-short nickname, “BLOOM” – was built by more than 1,000 AI researchers as an open-source alternative to GPT.

BLOOM is capable of generating text in almost 50 natural languages, and more than a dozen programming languages. Being open-sourced means that its code is freely available, and no doubt there will be many who experiment with it in the future.

In March, Google announced Bard, a generative language model built atop its Language Model for Dialogue Applications (LaMDA) transformer technology.

As with ChatGPT, Bard is able to work across a wide variety of different domains, offering help with planning baby showers, explaining scientific concepts to children, or helping you make lunch based on what you already have in your fridge.

How Are Large Language Models Trained?

A full discussion of how large language models are trained is beyond the scope of this piece, but it’s easy enough to get a high-level view of the process. In essence, an LLM like GPT-4 is fed a huge amount of textual data from the internet. It then samples this dataset and learns to predict what words will follow given what words it has already seen.

At first, its performance will be terrible, but over time it will learn that a sentence like “I sat down on the _____” probably ends with a word like “floor” or “chair”, and probably not a word like “cactus” (at least, we hope you’re not sitting down on a cactus!)

When a model has been trained for long enough on a large enough dataset, you get the remarkable performance seen with tools like ChatGPT.

Is ChatGPT A Large Language Model?

Speaking of ChatGPT, you might be wondering whether it’s a large language model. ChatGPT is a special-purpose application built on top of GPT-3, which is a large language model. GPT-3 was fine-tuned to be especially good at conversational dialogue, and the result is ChatGPT.

Are All Large Language Models Generative AI?

Yes. To the best of our knowledge, all existing large language models are generative AI. “Generative AI” is an umbrella term for algorithms that generate novel output, and the current set of models is built for that purpose.

Utilizing Generative AI In Your Business

Though truly powerful generative AI language models are less than a year old, they’re already being integrated into numerous business applications. Quiq Compose, for example, is able to study past interactions with customers to better tailor its future conversations to their particular needs.

From generating fake viral rap songs to generating photos that are hard to distinguish from real life, these powerful tools have already proven that they can dramatically speed up marketing, software development, and many other crucial business functions.

If you’re an enterprise wondering how you can use advanced AI technologies such as generative AI language models for applications like customer service, schedule a demo to see what the Quiq platform can offer you!

A Deep Dive on Large Language Models—And What They Mean For You

The release of OpenAI’s ChatGPT in late 2022 has utterly transformed the conversation around artificial intelligence. Whether it’s generating functioning web apps with just a few prompts, writing Spanish-language children’s stories about the blockchain in the style of Dr. Suess, or opining on the virtues and vices of major political figures, its ability to generate long strings of coherent, grammatically-correct text is shocking.

Seen in this light, it’s perhaps no surprise that ChatGPT has achieved such a staggering rate of growth. The application garnered a million users less than a week after its launch.

It’s believed that by January of 2023, this figure had climbed to 100 million monthly users, blowing past the adoption rates of TikTok (which needed nine months to get to this many monthly users) and Instagram (which took over two years.)

Naturally, many have become curious about the “large language model” (LLM) technology that makes ChatGPT and similar kinds of disruptive generative AI possible.

In this piece, we’re going to do a deep dive on LLMs, exploring how they’re trained, how they work internally, and how they might be deployed in your business. Our hope is that this will arm Quiq’s customers with the context they need to keep up with the ongoing AI revolution.

What Are Large Language Models?

LLMs are pieces of software with the ability to interact with and generate a wide variety of text. In this discussion, “text” is used very broadly to include not just existing natural language but also computer code.

A good way to begin exploring this subject is to analyze each of the terms in “large language model”, so let’s do that now. Here’s our large language models overview:

LLMs Are Models.

In machine learning (ML), you can think of a model as being a function that maps inputs to outputs. Early in their education, for example, machine learning engineers usually figure out how to fit a linear regression model that does something like predict the final price of a house based on its square footage.

They’ll feed their model a bunch of data points that look like this:

House 1: 800 square feet, $120,000
House 2: 1000 square feet, $175,000
House 3: 1500 square feet, $225,000

And the model learns the relationship between square footage and price well enough to roughly predict the price of homes that weren’t in its training data.

We’ll have a lot more to say about how LLMs are trained in the next section. For now, just be aware that when you get down to it, LLMs are inconceivably vast functions that take the input you feed them and generate a corresponding output.

LLMs Are Large.

Speaking of vastness, LLMs are truly gigantic. As with terms like “big data”, there isn’t an exact, agreed-upon point at which a basic language model becomes a large language model. Still, they’re plenty big enough to deserve the extra “L” at the beginning of their name.

There are a few ways to measure the size of machine learning models, but one of the most common is by looking at their parameters.

In the linear regression model just discussed, there would be only one parameter, for square footage. We could make our model better by also showing it the home’s zip code and the number of bathrooms it has, and then it would have three parameters.

It’s hard to say how big most real systems are because that information isn’t usually made public, but a linear regression model might have dozens of parameters, and a basic neural network could range from a few hundred thousand to a few tens of millions of parameters.

GPT-3 has 175 billion parameters, and Google’s Minerva model has 540 billion parameters. It isn’t known how many parameters GPT-4 has, but it’s almost certainly more.

(Note: I say “almost” certainly because better models don’t always have more parameters. They usually do, but it’s not an ironclad rule.)

LLMs Focus On Language.

ChatGPT and its cousins take text as input and produce text as output. This makes them distinct from some of the image-generation tools that are on the market today, such as DALL-E and Midjourney.

It’s worth noting, however, that this might be changing in the future. Though most of what people are using GPT-4 to do revolves around text, technically, the underlying model is multimodal. This means it can theoretically interact with image inputs as well. According to OpenAI’s documentation, support for this feature should arrive in the coming months.

How Are Large Language Models Trained?

Like all machine learning models, LLMs must be trained. We don’t actually know exactly how OpenAI trained the latest GPT models, as they’ve kept those details secret, but we can make some broad comments about how systems like these are generally trained.

Before we get into technical details, let’s frame the overall task that LLMs are trying to perform as a guessing game. Imagine that I start a sentence and leave out the last word, asking you to provide a guess as to how it ends.

Some of these would be fairly trivial; everyone knows that “[i]t was the best of times, it was the worst of _____,” ends with the word “times.” Others would be more ambiguous; “I stopped to pick a flower, and then continued walking down the ____,” could plausibly end with words like “road”, “street”, or “trail.”

For still others, there’d be an almost infinite number of possibilities; “He turned to face the ___,” could end with anything from “firehose” to “firing squad.”

But how is it that you’re able to generate these guesses? How do you know what a good ending to a natural-language sentence sounds like?

The answer is that you’ve been “training” for this task your entire life. You’ve been listening to sentences, reading and writing sentences, or thinking in sentences for most of your waking hours, and have therefore developed a sense of how they work.

The process of training an LLM differs in many specifics, but at a high level, it’s learning to do the same thing. A model like GPT-4 is fed gargantuan amounts of textual data from the internet or other sources, and it learns a statistical distribution that allows it to predict which words come next.

At first, it’ll have no idea how to end the sentence “[i]t was the best of times, it was the worst of ____.” But as it sees more and more examples of human-generated textual content, it improves. It discovers that when someone writes “red, orange, yellow, green, blue, indigo, ______”, the next sequence of letters is probably “violet”. It begins to be more sensitive to context, discovering that the words “bat”, “diamond”, and “plate” are probably occurring in a discussion about baseball and not the weirdest Costco you’ve ever been to.

It’s precisely this nuance that makes advanced LLMs suitable for applications such as customer service.

They’re not simply looking up pre-digested answers to questions, they’re learning a function big enough to account for the subtleties of a specific customer’s specific problem. They still don’t do this job perfectly, but they’ve made remarkable progress, which is why so many companies are looking at integrating them.

Getting into the GPT-weeds

The discussion so far is great for building a basic intuition for how LLMs are trained, but this is a deep dive, so let’s talk technical specifics.

Though we don’t know much about GPT-4, earlier models like GPT and GPT-2 have been studied in great detail. By understanding how they work, we can cultivate a better grasp of cutting-edge models.

When an LLM is trained, it’s fed a great deal of text data. It will grab samples from this data, and try to predict the next token in its sample. To make our earlier explanation easier to understand we implied that a token is a word, but that’s not quite right. A token can be a word, an individual letter, or “sub words”, i.e. small chunks of letters and spaces.

This process is known as “self-supervised learning” because the model can assess its own accuracy by checking its predicted next token against the actual next token in the dataset it’s training on.

At first, its accuracy is likely to be very bad. But as it trains its internal parameters (remember those?) are tuned with an optimizer such as stochastic gradient descent, and it gets better.

One of the crucial architectural building blocks of LLMs is the transformer.

A full discussion of transformers is well beyond the scope of this piece, but the most important thing to know is that transformers can use “attention” to model more complex relationships in language data.

For example: in a sentence like “the dog didn’t chase the cat because it was too tired”, every human knows that “it” refers to the dog and not the cat. Earlier approaches to building language models struggled with such connections in sentences that were longer than a few words, but using attention, transformers can handle them with ease.

In addition to this obvious advantage, transformers have found widespread use in deep learning applications such as language models because they’re easy to parallelize, meaning that training times can be reduced.

Building On Top Of Large Language Models

Out-of-the-box LLMs are pretty powerful, but it’s often necessary to tweak them for specific applications such as enterprise bots. There are a few ways of doing this, and we’re going to confine ourselves to two major approaches: fine-tuning and prompt engineering.

First up, it’s possible to fine-tune some of these models. Fine-tuning an LLM involves providing a training set and letting the model update its internal weights to perform better on a specific task. 

Next, the emerging discipline of prompt engineering refers to the practice of systematically crafting the text fed to the model to get it to better approximate the behavior you want.

LLMs can be surprisingly sensitive to small changes in words, phrases, and context; the job of a prompt engineer, therefore, is to develop a feel for these sensitivities and construct prompts in a way that maximizes the performance of the LLM.

Contact Us

How Can Large Language Models Be Used In Business?

There is a new gold rush in applying AI to business use cases.

For starters, given how good they are at generating text, they’re being deployed to write email copy, blog posts, and social media content, to text or survey customers, and to summarize text.

LLMs are also being used in software development. Tools like Replit’s Ghostwriter are already dramatically improving developer productivity in a variety of domains, from web development to machine learning.

What Are The “LLiMitations” Of LLMs?

For all their power, LLMs have turned out to have certain well-known limitations. To begin with, LLMs are capable of being toxic, harmful, aggressive, and biased.

Though heroic efforts have been made to train this behavior out with techniques such as reinforcement learning from human feedback, it’s possible that it can reemerge under the right conditions.

This is something you should take into account before giving customers access to generative AI offerings.

Another oft-discussed limitation is the tendency of LLMs to “invent” facts. Remember, an LLM is just trying to predict sequences of tokens, and there’s no reason it couldn’t output a sequence of text like “Dr. Micha Sartorius, professor of applied computronics at Santa Grega University”, even though this person, field, and university are fictitious.

This, too, is something you should be cognizant of before letting customers interact with generative AI.

At Quiq, we harness the power of LLMs’ language-generating capabilities, while putting strict guardrails in place to prevent these risks that are inherent to public-facing generative AI.

Should You Be Using Large Language Models?

LLMs are a remarkable engineering achievement, having been trained on vast amounts of human text and able to generate whole conversations, working code, and more.

No doubt, some of the fervor around LLMs will end up being hype. Nevertheless, the technology has been shown to be incredibly powerful, and it is unlikely to go anywhere. If you’re interested in learning about how to integrate generative AI applications like Quiq’s into your business, schedule a demo with us today!

Request A Demo

Prompt Engineering: What Is It—And How Can You Use It To Get The Most Out Of AI?

Think back to your school days. You come into class only to discover a timed writing assignment on the agenda. You have to respond to the provided prompt, quickly and accurately and will be graded against criteria like grammar, vocabulary, factual accuracy, and more.

Well, that’s what natural language processing (NLP) software like ChatGPT does daily. Except, when a computer steps into the classroom, it can’t raise its hand to ask questions.

That’s why it’s so important to provide AI with a prompt that’s clear and thorough enough to produce the best possible response.

What is ai prompt engineering?

A prompt can be a question, a phrase, or several paragraphs. The more specific the prompt is, the better the response.

Writing the perfect prompt — prompt engineering — is critical to ensure the NLP response is not only factually correct but crafted exactly as you intended to best deliver information to a specific target audience.

You can’t use low-quality ingredients in the kitchen to produce gourmet cuisine — and you can’t expect AI to, either.

Let’s revisit your old classroom again: did you ever have a teacher provide a prompt where you just weren’t really sure what the question was asking? So, you guessed a response based on the information provided, only to receive a low score.

In the post-exam review, the teacher explained what she was actually looking for and how the question was graded. You sat there thinking, “If I’d only had that information when I was given the prompt!”

Well, AI feels your pain.

The responses that NLP software provides are only as good as the input data. Learning how to communicate with AI to get it to generate desired responses is a science, and you can learn what works best through trial and error to continuously optimize your prompts.

Prompts that fail to deliver, and why.

What’s the root of the issue of prompt engineering gone wrong? It all comes down to incomplete, inconsistent, or incorrect data.

Even the most advanced AI using neural networks and deep learning techniques still needs to be fed the right information in the right way. When there is too little context provided, not enough examples, conflicting information from different sources, or major typos in the prompt, the AI can generate responses that are undesirable or just plain wrong.

How to craft the perfect prompt.

Here are some important factors to take into consideration for successful prompt engineering.

Clear instructions

Provide specific instructions and multiple examples to illustrate precisely what you want the AI to do. Words like “something,” “things,” “kind of,” and “it” (especially when there are multiple subjects within one sentence) can be indicators that your prompt is too vague.

Try to use descriptive nouns that refer to the subject of your sentence and avoid ambiguity.

  • Example (ambiguity): “She put the book on the desk; it was blue.”
  • What does “it” refer to in this sentence? Is the book blue, or is the desk blue?

Simple language

Use plain language, but avoid shorthand and slang. When in doubt, err on the side of overcommunicating and you can use trial and error to determine what shorthand approaches work for future, similar prompts. Avoid internal company or industry-specific jargon when possible, and be sure to clearly define any terms you may want to integrate.

Quality data

Give examples. Providing a single source of truth — for example, an article you want the AI to respond to questions about — will have a higher probability of returning factually correct responses based on the provided article.

On that note, teach the API how you want it to return responses when it doesn’t know the answer, such as “I don’t know,” “not enough information,” or simply “?”.

Otherwise, the AI may get creative and try to come up with an answer that sounds good but has no basis in reality.

Persona

Develop a persona for your responses. Should the response sound as though it’s being delivered by a subject matter expert or would it be better (legally or otherwise) if the response was written by someone who was only referring to subject matter experts (SMEs)?

  • Example (direct from SMEs): “Our team of specialists…”
  • Example (referring to SMEs): “Based on recent research by experts in the field…”

Voice, style, and tone

Decide how you want to represent your brand’s voice, which will largely be determined by your target audience. Would your customer be more likely to trust information that sounds like it was provided by an academic, or would a colloquial voice be more relatable?

Do you want a matter-of-fact, encyclopedia-type response, a friendly or supportive empathetic approach, or is your brand’s style more quick-witted and edgy?

With the right prompt, AI can capture all that and more.

Quiq takes prompt engineering out of the equation.

Prompt engineering is no easy task. There are many nuances to language that can trick even the most advanced NLP software.

Not only are incorrect AI responses a pain to identify and troubleshoot, but they can also hurt your business’s reputation if they aren’t caught before your content goes public.

On the other hand, manual tasks that could be automated with NLP waste time and money that could be allocated to higher-priority initiatives.

Quiq uses large language models (LLMs) to continuously optimize AI responses to your company’s unique data. With Quiq’s world-class Conversational AI platform, you can reduce the burden on your support team, lower costs, and boost customer satisfaction.

Contact Quiq today to see how our innovative LLM-built features improve business outcomes.

Contact Us

Agent Efficiency: How to Collect Better Metrics

Your contact center experience has a direct impact on your bottom line. A positive customer experience can nudge them toward a purchase, encourage repeat business, or turn them into loyal brand advocates.

But a bad run-in with your contact center? That can turn them off of your business for life.

No matter your industry, customer service plays a vital role in financial success. While it’s easy to look at your contact center as an operational cost, it’s truly an investment in the future of your business.

To maximize your return on investment, your contact center must continually improve. That means tracking contact center effectiveness and agent efficiency is critical.

But before you make any changes, you need to understand how your customer service center currently operates. What’s working? What needs improvement? And what needs to be cut?

Let’s examine how contact centers can measure customer service performance and boost efficiency.

What metrics should you monitor?

The world of contact center metrics is overwhelming—to say the least. There are hundreds of data points to track to assess customer satisfaction, agent effectiveness, and call center success.

But to make meaningful improvements, you need to begin with a few basic metrics. Here are three to start with.

1. Response time.

Response time refers to how long, on average, it takes for a customer to reach an agent. Reducing the amount of time it takes to respond to customers can increase customer satisfaction and prevent customer abandonment.

Response time is a top factor for customer satisfaction, with 83% expecting to interact with someone immediately when they contact a company, according to Salesforce’s State of the Connected Customer report.

When using response time to measure agent efficiency, have different target goals set for different channels. For example, a customer calling in or using web chat will expect an immediate response, while an email may have a slightly longer turnaround. Typically, messaging channels like SMS text fall somewhere in between.

If you want to measure how often your customer service team meets your target response times, you can also track your service level. This metric is the percentage of messages and calls answered by customer service agents within your target time frame.

2. Agent occupancy.

Agent occupancy is the amount of time an agent spends actively occupied on a customer interaction. It’s a great way to quickly measure how busy your customer service team is.

An excessively low occupancy suggests you’ve hired more agents than contact volume demands. At the same time, an excessively high occupancy may lead to agent burnout and turnover, which have their own negative effects on efficiency.

3. Customer satisfaction.

The most important contact center performance metric, customer satisfaction, should be your team’s main focus. Customer satisfaction, or CSAT, typically asks customers one question: How satisfied are you with your experience?

Customers respond using a numerical scale to rate their experience from very dissatisfied (0 or 1) to very satisfied (5). However, the range can vary based on your business’s preferences.

You can calculate CSAT scores using this formula:

Number of satisfied customers ÷ total number of respondents x 100 = CSAT

CSAT’s a great first metric to measure since it’s extremely important in measuring your agents’ effectiveness, and it’s easy for customers to complete.

There are lots of options for measuring different aspects of customer satisfaction, like customer effort score and Net Promoter Score®. Whichever you choose, ensure you use it consistently for continuous customer input.

Bonus tip: Capturing customer feedback and agent performance data is easier with contact center software. Not only can the software help with customer relationship management, but it can facilitate customer surveys, track agent data, and more.

Contact Us

How to assess contact center metrics.

Once you’ve measured your current customer center operations, you can start assessing and taking action to improve performance and boost customer satisfaction. But looking at the data isn’t as easy as it seems. Here are some things to keep in mind as you start to base decisions on your numbers.

Figure out your reporting methods.

How will you gather this information? What timeframes will you measure? Who’s included in your measurements? These are just a few questions you need to answer before you can start analyzing your data.

Contact center software, or even more advanced conversational AI platforms like Quiq, can help you track metrics and even put together reports that are ready for your management team to analyze and take action on.

Analyze data over time.

When you’re just starting out, it can be hard to contextualize your data. You need benchmarks to know whether your CSAT rating or occupancy rates are good or bad. While you can start with industry benchmarks, the most effective way to analyze data is to measure it against yourself over periods of time.

It takes months or even years for trends to reveal themselves. Start with comparative measurements and then work your way up. Month-over-month data or even quarter-over-quarter can give you small windows into what’s working and what’s not working. Just leave the big department-wide changes until you’ve collected enough data for it to be meaningful.

Don’t forget about context.

You can’t measure contact center metrics in a silo. Make sure you look at what’s going on throughout your organization and in the industry as a whole before making any changes. For example, a drop in customer response time might have to do with the influx of messages caused by a faulty product.

While collecting the data is easy, analyzing it and drawing conclusions is much more difficult. Keep the whole picture in mind when making any important decisions.
How to improve call center agent efficiency.
Now that you have the numbers, you can start making changes to improve your agent efficiency. Start with these tips.

Make incremental changes.

Don’t be tempted to make wide-reaching changes across your entire contact center team when you’re not happy with the data. Select specific metrics to target and make incremental changes that move the needle in the right direction.

For example, if your agent occupancy rates are high, don’t rush to add new members to your team. Instead, see what improvements you can make to agent efficiency. Maybe there’s some call center software you can invest in that’ll improve call turnover. Or perhaps all your team needs is some additional training on how to speed up their customer interactions. No matter what you do, track your changes.

Streamline backend processes.

Agents can’t perform if they’re constantly searching for answers on slow intranets or working with outdated information. Time spent fighting with old technology is time not spent serving your contact center customers.

Now’s the perfect time to consider a conversational platform that allows your customers to reach out using the preferred channel but still keeps the backend organized and efficient for your team.

Agents can bounce back and forth between messaging channels without losing track of conversations. Customers get to chat with your brand how they want, where they want, and your team gets to preserve the experience and deliver snag-free customer service.

Improve agent efficiency with Quiq’s Conversational AI Platform

If you want to improve your contact center’s efficiency and customer satisfaction ratings, Quiq’s conversational customer engagement software is your new best friend.

Quiq’s software enables agents to manage multiple conversations simultaneously and message customers across channels, including text and web chat. By giving customers more options for engaging with customer service, Quiq reduces call volume and allows contact center agents to focus on the conversations with the highest priority.

How To Be The Leader Of Personalized CX In Your Industry

Customer expectations are evolving alongside AI technology, at an unprecedented pace. People are more informed, connected, and demanding than ever before, and they expect nothing less than exceptional customer experiences (CX) from the brands they interact with.

This is where personalized customer experience comes in.

By tailoring CX to individual customers’ needs, preferences, and behaviors, businesses can create more meaningful connections, build loyalty, and drive revenue growth.
In this article, we will explore the power of personalized CX in industries and how it can help businesses stay ahead of the curve.

What is Personalized CX?

Personalized CX refers to the process of tailoring customer experiences to individual customers based on their unique needs, preferences, and behaviors. This involves using customer data and insights to create targeted and relevant interactions across multiple touchpoints, such as websites, mobile apps, social media, and customer service channels.

Personalization can take many forms, from simple tactics like using a customer’s name in a greeting to more complex strategies like recognizing that they are likely to be asking a question about the order that was delivered today. The goal is to create a seamless and consistent experience that makes customers feel valued and understood.

Why is Personalized CX Important?

Personalized CX has become increasingly important in industries for several reasons:

1. Rising Customer Expectations

Today’s customers expect personalized experiences across all industries, from retail and hospitality to finance and healthcare. In fact, according to a survey by Epsilon, 80% of consumers are more likely to do business with a company if it offers personalized experiences.

2. Increased Competition

As industries become more crowded and competitive, businesses need to find new ways to differentiate themselves. Personalized CX can help brands stand out by creating a unique and memorable experience that sets them apart from their competitors.

3. Improved Customer Loyalty and Retention

Personalized CX can help businesses build stronger relationships with their customers by creating a sense of loyalty and emotional connection. According to a survey by Accenture, 75% of consumers are more likely to buy from a company that recognizes them by name, recommends products based on past purchases, or knows their purchase history.

4. Increased Revenue

By providing personalized CX, businesses can also increase revenue by creating more opportunities for cross-selling and upselling. According to a study by McKinsey, personalized recommendations can drive 10-30% of revenue for e-commerce businesses.

Industries That Can Benefit From Personalized CX

Personalized CX can benefit almost any industry, but some industries are riper for personalization than others.

Here are some industries that can benefit the most from personalized CX:

1. Retail

Retail is one of the most obvious industries that can benefit from personalized CX. By using customer data and insights, retailers can create tailored product recommendations and personalized support based on products purchased and current order status.

2. Hospitality

In the hospitality industry, personalized CX can create a more memorable and enjoyable experience for guests. From personalized greetings to customized room amenities, hospitality businesses can use personalization to create a sense of luxury and exclusivity.

3. Healthcare

Personalized CX is also becoming increasingly important in healthcare. By tailoring healthcare experiences to individual patients’ needs and preferences, healthcare providers can create a more patient-centered approach that improves outcomes and satisfaction.

4. Finance

In the finance industry, personalized CX can help businesses create more targeted and relevant offers and services. By using customer data and insights, financial institutions can offer personalized recommendations for investments, loans, and insurance products.

Best Practices for Implementing Personalized CX in Industries

Implementing personalized CX requires a strategic approach and a deep understanding of customers’ preferences and behaviors.

Here are some best practices for implementing personalized CX in industries:

1. Collect and Use Customer Data Wisely

Collecting customer data is essential for personalized CX, but it’s important to do so in a way that respects customers’ privacy and preferences. Businesses should be transparent about the data they collect and how they use it, and give customers the ability to opt out of data collection.

2. Use Technology to Scale Personalization

Personalizing CX for every individual customer can be a daunting task, especially for large businesses. Using technology, such as machine learning algorithms and artificial intelligence (AI), can help businesses scale personalization efforts and make them more efficient.

3. Be Relevant and Timely

Personalized CX is only effective if it’s relevant and timely. Businesses should use customer data to create targeted and relevant offers, messages, and interactions that resonate with customers at the right time.

4. Focus on the Entire Customer Journey

Personalization shouldn’t be limited to a single touchpoint or interaction. To create a truly personalized CX, businesses should focus on the entire customer journey, from awareness to purchase and beyond.

5. Continuously Test and Optimize

Personalized CX is a continuous process that requires constant testing and optimization. Businesses should use data and analytics to track the effectiveness of their personalization efforts and make adjustments as needed.

Challenges of Implementing Personalized CX in Industries

While the benefits of personalized CX are clear, implementing it in industries can be challenging. Here are some of the challenges businesses may face:

1. Data Privacy and Security Concerns

Collecting and using customer data for personalization raises concerns about data privacy and security. Businesses must ensure they are following best practices for data collection, storage, and usage to protect their customers’ information.

2. Integration with Legacy Systems

Personalization requires a lot of data and advanced technology, which may not be compatible with legacy systems. Businesses may need to invest in new infrastructure and systems to support personalized CX.

3. Lack of Skilled Talent

Personalized CX requires a skilled team with expertise in data analytics, machine learning, and AI. Finding and retaining this talent can be a challenge for businesses, especially smaller ones.

4. Resistance to Change

Implementing personalized CX requires significant organizational change, which can be met with resistance from employees and stakeholders. Businesses must communicate the benefits of personalization and provide training and support to help employees adapt.

Personalized CX is no longer a nice-to-have; it’s a must-have for businesses that want to stay competitive in today’s digital age. By tailoring CX to individual customers’ needs, preferences, and behaviors, businesses can create more meaningful connections, build loyalty, and drive revenue growth. While implementing personalized CX in industries can be challenging, the benefits far outweigh the costs.

The Rise of Conversational AI: Why Businesses Are Embracing It

Movies may have twisted our expectations of artificial intelligence—either giving us extremely high expectations or making us think it’s ready to wipe out humanity.

But the reality isn’t on those levels. In fact, you’re already using AI in your daily life—but it’s so ingrained in your technology you probably don’t even notice. Netflix and Spotify both use AI to personalize your content recommendations. Siri, Alexa, and Google Assistant use it as well.

Conversational AI, like what Quiq uses to power our chatbots, takes artificial intelligence to the next level. See what it is and how you can use it in your business.

What is conversational AI?

Conversational artificial intelligence (AI) is a collection of technologies that create a human-like experience. It combines natural language processing (NLP), machine learning, and other technologies to enhance streamlined conversations. This can be used in many applications, like chatbots and voice (like Siri and Alexa). The most common use case for conversational AI in the business-to-customer world is through an AI chatbot messaging experience.

Unlike rule-based chatbots, those powered by conversational AI generate responses and adapt to user behavior over time. Rule-based chatbots were also limited to what you put in them—meaning if someone phrased a question differently than you wrote it (or used slang/colloquialisms/etc.), it wouldn’t understand the question. Conversational AI can also help chatbots understand more complex questions.

Putting technical terms in context.

Companies throw around a lot of technical terms when it comes to artificial intelligence, so here are what they mean and how they’re used to improve your business.

Rules-based chatbots: Earlier chatbot iterations (and some current low-cost versions) work mainly through pre-defined rules. Your business (or service provider) writes specific guidelines for the chatbot to follow. For example, when a customer says “Hi,” the chatbot responds, “Hello, how may I help you?”

Another example is when a customer asks about a return. The chatbot is programmed to give a specific response, like, “Here’s a link to the return policy.”

However, the problem with rule-based chatbots is that they can be limiting. It only knows how to handle situations based on the information programmed into it. So if someone says, “I don’t like this product, what can I do?” and you haven’t planned for that question, the chatbot won’t have a response.

Machine learning: Machine learning is a way to combat the problem posed above. Instead of giving the chatbot specific parameters complete with pre-written questions and answers, machine learning helps chatbots make decisions based on the information provided.

Machine learning helps chatbots adapt over time based on customer conversations. Instead of giving the bot specific ways to answer specific questions, you show it the basic rules, and it crafts its own response. Plus, since it means your chatbot is always learning, it gets better the longer you use it.

Natural language processing: As humans and speakers of the English language, we know that there are different ways to ask every question. For example, a customer who wants to know when an item is back in stock may ask, “When is X back in stock?” or they might say, “When will you get X back in?” or even, “When are you restocking X?” Those three questions all mean the same thing, and as humans, we naturally understand that. But a rules-based bot must be told that those mean the same things, or they might not understand it.

Natural language processing (NLP) uses AI technology to help chatbots understand that those questions are all asking the same thing. It also can determine what information it needs to answer your question, like color, size, etc.

NLP also helps chatbots answer questions in a more human-like way. If you want your chatbot to sound more human (and you should), then find one that uses NLP.

Web-based SDK: A web-based SDK (that’s a software development kit for non-developers) is a set of tools and resources developers use to integrate programs (in this case, chatbots) into websites and web-based applications.

What does this mean for your chatbot? Context. When a user says, “I need help with my order,” the chatbot can use NLP to identify “help” and “order.” Then it can look back at previous conversations, pull the customers’ order history, and more—if the data is there.

Contextual conversations are everything in customer service—so this is a big factor in building a successful chatbot using conversational AI. In fact, 70% of customers expect anyone they’re speaking with to have the full context. With a web-based SDK, your chatbot can do that too.

The benefits of conversational AI.

Using chatbots with conversational AI provides benefits across your business, but the clearest wins are in your contact center. Here are three ways chatbots improve your customer service.

24/7 customer support.

Your customer service agents need to sleep, but your conversational AI chatbot doesn’t. A chatbot can answer questions and contain customer issues while your contact center is closed. Any issues they can’t solve, they can pass along to your agents the next day. Not only does that give your customers 24/7 service, but your agents will have less of a backlog when they return to work.

Faster response times.

When your agents are inundated with customers, an AI chatbot can pick up the slack. Send your chatbot in to greet customers immediately, let them know the wait time, or even start collecting information so your agents can get to the root of the problem faster. Chatbots powered with AI can also answer questions and solve easy customer issues, skipping human agents altogether.

For more ways AI chatbots can improve your customer service, read this >

More present customer service agents.

Chatbots can handle low-level customer queries and give agents the time and space to handle more complex issues. Not only will this result in better customer service, but agents will be happier and less stressed overall.

Plus, chatbots can scale during your busy seasons. You’ll save on costs since you won’t have to hire more agents, and the agents you have won’t be overworked.

How to make the most of AI technology.

Unfortunately, you can’t just plug and play with conversational AI and expect to become an AI company. Just like any other technology, it takes prep work and thoughtful implementation to get it right—plus lots of iterations.

Use these tips to make the most of AI technology:

Decide on your AI goals.

How are you planning on using conversational AI? Will it be for marketing? Customer service? All of the above? Think about what your main goals are and use that information to select the right AI partner.

Choose the right conversational AI platform.

Once you’ve decided on how you want to use conversational AI, select the right partner to help you get there. Think about aspects like ease of use, customization, scalability, and budget.

Design your chatbot interactions.

Even with artificial intelligence, you still have to put the work in upfront. What you do and how you do it will vary greatly depending on which platform you go with. Design your chatbot conversations with these things in mind:

  • Your brand voice
  • Personalization
  • Customer service best practices
  • Logical conversation flows
  • Concise messages

Build a partnership between agents and chatbots.

Don’t launch the chatbot independently of your customer service agents. Include them in the training and launch, and start to build a working relationship between the two. Agents and chatbots can work together on customer issues, both popping in and out of the conversation seamlessly. For example, a chatbot can collect information from the customer upfront and pass it to the agent to solve the issue. Then, when the agent is done, they can bring the chatbot back in to deliver a customer survey.

Test and refine.

Sometimes, you don’t know what you don’t know until it happens. Test your chatbot before it launches, but don’t stop there. Keep refining your conversations even after you’ve launched.

What does the future hold for conversational AI?

There are many exciting things happening in AI right now, and we’re only on the cusp of delving into what it can really do.

The big prediction? For now, conversational AI will keep getting better at what it’s already doing. More human-like interactions, better problem-solving, and more in-depth analysis.

In fact, 75% of customers believe AI will become more natural and human-like over time. Gartner is also predicting big things for conversational AI, saying by 2026, conversational AI deployments within contact centers will reduce agent labor costs by $80 billion.

Why should you jump in now when bigger things are coming? It’s simple. You’ll learn to master conversational AI tools ahead of your competitors and earn an early competitive advantage.

How Quiq does conversational AI.

To ensure you give your customers the best experience, Quiq powers our entire platform with conversational AI. Here are a few stand-out ways Quiq uniquely improves your customer service with conversational AI.

Design customized chatbot conversations.

Create chatbot conversations so smooth and intuitive that it feels like you’re talking to a real person. Using the best conversational AI techniques, Quiq’s chatbot gives customers quick and intelligent responses for an up-leveled customer experience.

Help your agents respond to customers faster.

Make your agents more efficient with Quiq Compose. Quiq Compose uses conversational AI to suggest responses to customer questions. How? It uses information from similar conversations in the past to craft the best response.

Empower agent performance.

Tools like our Adaptive Response Timer (ADT) prioritizes conversations based on how fast or slow customers respond. The conversational AI platform also uses AI to analyze customer sentiment to give extra attention to customers who need it.

This is just the beginning.

This is just a taste of what conversational AI can do. See how Quiq can apply the latest technology to your contact center to help you deliver exceptional customer service.

Contact Us

How to Create an Effective Business Text Messaging Strategy – The Ultimate Guide

U up? Text messaging replaced other communication methods for consumers all over the world. So why wouldn’t that extend to businesses?

Business text messaging is a great way to communicate with customers on their terms in their own messaging app. But it can be a challenge when you don’t have a plan.

Customer service is complex on its own, so taking it to a new medium only makes it harder. Knowing how to create an effective customer service text message strategy is the key to succeeding in today’s competitive market.

Why bother with business text messaging?

If you still think text messaging is a new-fangled fad, we’re here to open your eyes to the possibilities. (If you’re already rocking a text messaging strategy and just want to know how to improve it, feel free to skip to the next section—we won’t be offended.)

Your competitors are using it.

While you’re sleeping on text messaging (maybe you still think texting is for sending memes to friends, not business conversations), your competitors have jumped on business messaging and are seeing great returns.

In 2020, business messaging traffic hit 3.5 trillion. That’s up from 3.2 trillion in 2019, a 9.4% year-over-year increase, reports Juniper Research.

You can use business text messaging for all kinds of applications. Here are a few ideas to get your thought train started:

  • Customer support conversations
  • Outbound marketing messages
  • Appointment scheduling
  • Call-to-text in your interactive voice response (IVR) system
  • Complete one-off transactions
  • Use it as an engagement tool

Many businesses have found ways to use text messaging to interact with their customers, and now customers want and expect it.

People respond faster to text messages.

Text messaging has the benefit of being both a quick form of communication and a forgiving one.

Here’s what we mean. According to Forbes, it takes the average person 90 minutes to respond to an email but only 90 seconds to respond to a text message. So customers generally expect quick responses during a text conversation.

However, since the other person’s availability isn’t expected (like it might be with live chat), there’s typically some wiggle room.

So conversations are more likely to follow the customers’ preferred pace. It works when they’re ready for a quick chat, but they can step away whenever they need to.

Your customers want to message you.

Forbes also reported that 74% of customers say their impressions improve when businesses use text messaging. And it makes sense. Customers know how to use text messaging. They don’t have to download a new app or find your website.

When you use text messaging, you fit into your customers’ lives. You’re not asking them to do anything out of the ordinary—and they appreciate that.

If you’re still not convinced, here are nine more reasons why you should consider business text messaging.

Start by dissecting your current text messaging strategy.

Since text messaging is a unique medium with so many aspects to consider, you need a thorough strategy for success. Start by identifying the essentials.

What’s your purpose?

How are you using text messaging? Is it a revenue-driving channel? Are you using it for IVR system overflow? Customer service?

Pick a starting path. Trying to do all the things at once leads to a muddled strategy. Identify why you’re adding text messaging to your business. By starting small and focused, you’ll have the bandwidth to see what’s not working and fix it.

Who’s your audience?

Identify who you’re texting. While nearly every generation uses text messaging on a regular basis, they all use it in different ways. To start, identify who you’re targeting with text messaging. Consider:

  • Demographics like age, location, and income
  • Psychographics like lifestyle, preferences, and needs

Figure out how different audiences want to interact with you using messaging. For example, twenty-something single men will have different preferences than 40-something mothers.

Contact Us

Use what you know to create a voice guide.

This is where phone-based customer service and text messaging customer service start to diverge. Since words have more weight when written (said the writer), it’s important to give your customer service team some direction.

Put everything you learned in the last section and put it together to decide on the tone of voice for your audience. If you’ve gone through this exercise with your marketing team, you can certainly use what they have and adapt it to fit your customer service and text messaging applications.

Pick your tone.

Text messaging is inherently a more casual medium than email or even voice. But that doesn’t mean you should send text slang. Tailor to your audience and your industry.

For example, if you’re selling luxury air travel to middle-aged business travelers, a professional tone is warranted. Avoid text acronyms, and skip the emojis and memes.

For an audience full of elder millennials with an affinity for plants, include emojis and memes. Stay friendly, upbeat, and as positive as possible.

However, if your audience is filled with college students, keep your tone friendly and to the point, but skip the emojis. Apparently, they’re cheugy 🤷‍♀️.

Create parameters.

Deciding on your tone of voice is only as helpful as the guidelines that go with it. Think about telling your customer service team that emojis are okay, only to see this: 😺🐵🐵🐀❣️❣️❣️😝

That might be overkill, but you get the point. Put guidelines in place, like maybe they can use three to five emojis per conversation but never more than one per text message.

Do the same for the tone of voice. Provide examples of what “professional” means and how it compares to “friendly.” If you’re already using text messaging in your customer support center, pull some examples directly from past conversations.

How to solve problems in a bite-sized format.

SMS texting has 160 characters—that’s not a lot of space to solve customer problems. There’s a lot to consider to keep the conversation flowing toward a quick resolution. Start with these steps.

Step 1: Introduce yourself.

There’s a lot of spam in the texting world. Whether the customer reached out to you or you’re sending a message (after they’ve opted-in, of course), make sure to introduce yourself just as you would on any other channel.

Step 2: Ask the customer to describe the problem.

Before you can solve the problem, you have to know what it is. Ask probing questions to determine the issue. If it’s an issue that can be seen visually, you can even ask for pictures or videos so you can identify the problem easier and exceed user expectations.

Step 3: Keep answers as simple as possible.

With so little space, you want to ensure messages are easy to understand. While SMS is limited to 160 characters, don’t be afraid to send two messages if that’ll help your customer understand the solution better. Just don’t forget to include an indicator that you’re sending multiple messages (e.g., 1 of 2).

Step 4: Include relevant links, videos, or diagrams.

If you’re using rich messaging, send whatever medium will help your customer solve their problems.

The dos and don’ts of business text messaging.

As you plan and launch your messaging strategy, keep these dos and don’ts in mind.

Do develop a prioritization system.

Prioritization plays a major role in organizing the process and improving customer service efficiency. As questions arise, it can help prioritize them based on urgency and order of importance. This helps ensure that troubleshooting questions and general issues are addressed as quickly as possible. Less urgent questions may be able to wait a little longer if necessary.

Here are a couple of examples of ways you can segment customer service questions in order to prioritize them:

  • The order the questions come in: Do you have a first-in-first-out method?
  • Customer sentiment: Are they frustrated or neutral?
  • Urgent question vs. non-urgent question: What can wait?
  • The service or product they’re asking about: Are some more important? Are there certain team members who can handle certain questions?
  • Members vs. nonmembers: Do members get special priority if you have a special program?
  • Self-prioritization: Ask customers directly how urgent their request is.

The best method is to combine these factors to create a foolproof prioritization system. For example, how would you prioritize a frustrated member with a nonurgent question over a neutral, nonmember’s urgent question? Make sure your AI conversational platform and/or customer service agents prioritize according to your guidance.

Don’t be afraid to ask clarifying questions.

Text messaging is a short medium—but it also lends itself to quick back-and-forth communication. When one small miscommunication can derail a conversation and drive away your customer, it’s imperative that you ask clarifying questions.

Without understanding the problem, you can’t find a solution. If someone has a complex or confusing question, break the question down into parts or ask for clarification. You can send messages like these:

  • “What do you mean when you say [X]?”
  • “Do you mean [Y] when you said [X]?
  • “Can you give me some background on the issue?”
  • “Can you give me an example of when [Z] happened?

Since text messaging can be a limited medium, it’s important to follow up so you understand the problem as best as you can. If you’re still having trouble, don’t be afraid to move to a voice call.

Do make answers clear and understandable.

Communicating with consumers is all about being clear and concise. People come from all types of situations and educational backgrounds, so every customer support agent needs to know how to type a message that’s easy to understand and digest.

A customer who is engaged in the conversation will be more likely to seek help again. Instead of texting long, detailed messages, it’s best to simplify replies into one or two sentences that contain the necessary information. This helps drive more productive conversations and leaves more consumers satisfied at the end of the day.

Don’t forget to follow up with customers.

When a customer has an issue or question, they want to know they’re not just a number. One effective way to show this is by following up after addressing the issue.

Is the consumer satisfied? Do they have any more questions? Do they have any constructive feedback to offer? By asking what they can do to make the customer experience better, customer support agents show that they’re willing to listen and adapt as needed. This can go a long way toward building strong professional relationships.

Do use artificial intelligence to enhance customer service.

There are many ways to use artificial intelligence (AI) to make your business text messaging better. AI can make your agents faster, help serve customers when no one’s around, and even reduce your customer service ticket volume.

  • Predict customer sentiment: A conversational AI platform, like Quiq, can pick up on queues from customers to predict how their feeling so you can prioritize customers whose anger is escalating.
  • Help agents compose messages: Some platforms use natural language processing (NPL) to observe your agents’ responses and suggest sentences as they type. This will help agents stay on tone and write messages more quickly.
  • Respond to customers: Unless your message center is staffed 24/7, messages won’t get answered when no one’s available. That’s where chatbots come in. They can contain conversations by answering simple questions, automating surveys, and even collecting information to route questions to the right agent.

Build business messaging into your business.

Business messaging, whether for customer service, marketing, or even sales, is a great asset to your business—and a great way to engage your customers. But remember: don’t go in blind. Create a thoughtful strategy and see just how quickly your customers respond.

Customer Service in the Travel Industry: How to Do More with Less

Doing more with less is nothing new for the travel industry. It’s been tough out there for the last few years—and while the future is bright, travel and tourism businesses are still facing a labor shortage that’s causing customer satisfaction to plummet.

While HR leaders are facing the labor shortage head-on with recruiting tactics and budget increases, customer service teams need to search for ways to provide the service the industry is known for without the extra body count.

In other words… You need to do more with less.

The best way to do that is with a conversational AI platform. Whether a hotel, airline, car rental company or experience provider, you can provide superior service to your customers without overworking your support team.

Keep reading to take a look at the state of the travel industry’s labor shortage and how you can still provide exceptional customer service.

Travel is back, but labor is not.

In 2019, the travel and tourism industry accounted for 1 in 10 jobs around the world. Then the pandemic happened, and the industry lost 62 million jobs overnight, according to the World Travel & Tourism Council (WTTC).

Now that most travel restrictions, capacity limits, and safety restrictions are lifted, much of the world is ready to travel again. The pent-up demand has caused the tourism and travel industry to outpace overall economic growth. In 2021, the GDP grew by 21.7%, while the overall economy only grew by 5.8%, according to the WTTC.

In 2021, travel added 18.2 million jobs globally, making it difficult to keep up with labor demands. In the U.S., 1 in 9 jobs went unfilled in 2021.

What’s causing the shortage? A combination of factors:

  • Flexibility: Over the last few years, there has been a mindset shift when it comes to work-life balance. Many people aren’t willing to give up weekends and holidays with their families to work in hospitality.
  • Safety: Many jobs in hospitality work on the frontline, interacting with the public on a regular basis. Even though the pandemic has cooled in most parts of the world, some workers are still hesitant to work face-to-face. This goes double for older workers and those with health concerns, who may have either switched industries or dropped out of the workforce altogether.
  • Remote work: The pandemic made remote work more feasible for many industries, and travel requires a lot of in-person work and interactions.

How is the labor shortage impacting customer service?

As much as we try to separate those shortages from affecting service, customers feel it. According to the American Customer Satisfaction Index, hotel guests were 2.7% less satisfied overall between 2021 and 2022. Airlines and car rental companies also dropped 1.3% each.

While there are likely multiple reasons factoring into lower customer satisfaction rates, there’s no denying that the labor shortage has an impact.

As travel ramps back up, there’s an opportunity to reshape the industry at a fundamental level. The world is ready to travel again, but demand is outpacing your ability to grow. While HR is hard at work recruiting new team members, it’s time to look at your operations and see what you can do to deliver great customer service without adding to your staff.

What a conversational AI platform can do in the travel industry.

First, what is conversational AI? Conversational AI combines multiple technologies (like machine learning and natural language processing) to enable human-like interactions between people and computers. For your customer service team, this means there’s a coworker that never sleeps, never argues, and seems to have all the answers.

A conversational AI platform like Quiq can help support your travel business’s customer service team with tools designed to speed conversations and improve your brand experience.

In short, a conversational AI platform can help businesses in the travel industry provide excellent customer service despite the current labor shortage. Here’s how.

Contact Us

Resolve issues faster with conversational AI support.

When you’re short-staffed, you can’t afford inefficient customer conversations. Switching from voice-based customer service to messaging comes with its own set of benefits.

Using natural language processing (NLP), a conversational AI platform can identify customer intent based on their actions or conversational cues. For example, if a customer is stuck on the booking page, maybe they have a question about the cancellation policy. By starting with some basic customer knowledge, chatbots or human agents can go into the conversation with context and get to the root of the problem faster.

Conversational AI platforms can also route conversations to the right agent, so agents spend less time gathering information and more time solving the problem. Plus, messaging’s asynchronous nature means customer service representatives can handle 6–8 conversations at once instead of working one-on-one. But conversational AI for customer service provides even more opportunities for speed.

Anytime access to your customer service team.

Many times, workers leaving the travel industry cite a lack of schedule flexibility as one of their reasons for leaving. Customer service doesn’t stop at 5 o’clock, and support agents end up working odd hours like weekends and holidays. Plus, when you’re short-staffed, it’s harder to cover shifts outside of normal business hours.

Chatbots can help provide customer service 24/7. If you don’t already provide anytime customer service support, you can use chatbots to answer simple questions and route the more complex questions to a live agent to handle the next day. Or, if you already have staff working evening shifts, you can use chatbots to support them. You’ll require fewer human agents during off times while your chatbot can pick up the slack.

Connect with customers in any language.

Five-star experiences start with understanding. You’re in the travel business, so it’s not unlikely that you’ll encounter people who speak different languages. When you’re short-staffed, it’s hard to ensure you have enough multilingual support agents to accommodate your customers.

Conversational AI platforms like Quiq offer translation capabilities. Customers can get the help they need in their native language—even if you don’t have a translator on staff.

Work-from-anywhere capabilities.

One of the labor shortage’s root causes is the move to remote work. Many customer-facing jobs require working in person. That limits your labor pool to people within the immediate area. The high cost of living in cities with increased tourism can push locals out.

Moving to a remote-capable conversational tool will expand your applicant pool outside your immediate area. You can attract a wider range of talented customer service agents to help you fill open positions.

Build automation to anticipate customer needs.

A great way to reduce the strain on a short-staffed customer service team? Prevent problems before they happen.

A lot of customer service inquiries are simple, routine questions that agents have to answer every day. Questions about cancellation policies, cleaning and safety measures, or special requests happen often—and can all be handled using automation.

Use conversational AI to set up personalized messages based on behavioral or timed triggers. Here are a few examples:

  • When customers book a vacation: Automatically send a confirmation text message with their booking information, cancellation policy, and check-in procedures.
  • The day before check-in: Send a reminder with check-in procedures, along with an option for any special requests.
  • During their vacation: Offer up excursion ideas, local restaurant reservations, and more. You can even book the reservation or complete the transaction right within the messaging platform.
  • After an excursion: Send a survey to collect feedback and give customers an outlet for their positive or negative feedback.

By anticipating these customer needs, your agents won’t have to spend as much time fielding simple questions. And the easy ones that do come in can be handled by your chatbot, leaving only more complex issues for your smaller team.

Don’t let a short staff take away from your customer service.

There are few opportunities to make something both cheaper and better. Quiq is one of them. Quiq’s conversational AI Platform isn’t just a stop-gap solution while the labor market catches up with the travel industry’s needs. It will actually improve your customer service experience while helping you do more with less.

7 Ways AI Chatbots Improve Customer Service

If you’ve been using business messaging for a while, you know easy and convenient it is for your customers—and its impact on your customer service team’s output.

With Quiq’s robust messaging platform, it’s easy for contact centers to manage customer conversations while boosting conversion rates, increasing engagement, and reducing costs. But our little slice of digital nirvana only gets better when you add chatbots into the mix.

Enter the business messaging bot. Bots can help increase your agent productivity while delivering an even better customer experience.

We’re diving into seven times business messaging bots made a customer conversation faster and better.

1. Collect customer information upfront.

Let’s say, for example, you own an airline with a great reward program. With Quiq, you can create a bot that greets all your customers right away and asks them to enter their rewards number if they have one.

This “reward bot” will use the information gathered to help recognize platinum-status members—your most elite program. The reward bot reroutes platinum members to a special VIP queue where wait times are shorter and they receive higher support. This is done consistently and without hesitation. Your platinum members don’t have to wade through the customer service queue. It makes them feel more valued and more likely to continue flying with you in the future.

The reward bot can also collect other information, such as confirmation numbers for reservations, updated email addresses, or contact numbers. All of this data gathering can be done before a human agent steps into the conversation. The support chatbot has done the work to arm the agent with the information they need to deliver better service.

2. Decrease customer abandonment.

Acknowledging customers with a fast, friendly greeting lets them know they’ve started on a path to resolution. Agents may be busy with other conversations (we’ve seen agents handle upwards of eight at a time), but that doesn’t mean the customer can’t start engaging with your business. A support chatbot can greet customers immediately while agents are busy.

Instead of waiting in a stagnant queue over the phone or trying to talk to a live chat agent (also known as web chat) who has disappeared, a bot can send a welcome message and let the customer know when they’ll receive a response from a human agent.

3. Get faster, more accurate customer responses.

Remember the last time you had to spell your name out over the phone or repeat your birthday again and again because the call bot couldn’t pick it up? Conversational chatbots eliminate that frustration and ensure it collects fast and accurate information from the customer every time.

Over messaging, the customer can see the data they’re providing and confirm right away if there’s an error. The customer can at least reference the information and catch any typos in their email address or that they’ve provided their old phone number. It happens.

4. Prioritize customer conversations.

In our above example, the reward bot was able to recognize platinum rewards members so they could get the perks that came with their membership. Chatbots can help you prioritize conversations in other ways too.

For example, you can set rules within Quiq to recognize keywords such as “buy” or “purchase” to prioritize customers who may need help with a transaction. Depending on the situation, the platform can prioritize that conversation (likely with high purchase intent) over a password reset or return.

A chatbot platform like Quiq can also use natural language processing (NLP) to predict customer sentiment and prioritize based on that. That way, you can identify a frustrated customer and bump them up in the queue to handle the problem before it escalates.

Contact Us

5. Get customers to the right place.

Chatbots can help route customers to the appropriate department, agent, or even another support bot for help. Much like a call routing system (but more sophisticated), a chatbot can identify a customer’s problem and save them from bouncing around between support agents.

The simplest example is when a bot greets customers and asks, “What can I help you with today?” The bot can either present the user with several options or let them state their problem. A customer can then be routed directly to the support agent best fit for solving their problem.

This also eliminates the need for customers to repeat themselves at each step of the way. Instead of having to explain their situation to the call router and then again to the service agent, the chatbot hands off the messages to the human agent. The agent already knows the problem and can start searching for a solution right away.

6. Reschedule appointments.

Appointment scheduling and rescheduling is a time-consuming and frustrating process. Chatbots can help you reduce delays, ensuring customers avoid back-and-forth emails and long hold times just to move an appointment.

With Quiq business messaging, you can present customers with available dates and times. Customers can choose and confirm a date from available calendar options.

A support chatbot with the right integrations can help present customers with available dates to choose from and schedule the selected appointment.

7. Collect feedback for even more improvement.

Businesses shouldn’t underestimate the power of feedback. Believing you know what customers want and actually asking them can lead to completely different results. Yet, the biggest roadblock to collecting feedback is distributing the survey at the moment when it counts.

A support chatbot can ensure every customer service interaction is followed up with a survey. You can program the bot to send unique surveys based on the conversation and get specific feedback on the spot. Collecting that survey information and putting it into place will help your team improve.

Take the Leap with Quiq.

Implementing customer service chatbots within your organization may seem intimidating now, but Quiq can help you navigate it. We can help you orchestrate bots throughout your organization, whether you need one or many.

With Quiq, you can design conversational experiences your customers will love. Once you create a bot, you can run it across all of our supported channels to deliver a consistent experience no matter where your customers are.

11 Ways to Navigate High Call Volumes

There are some high call volume spikes you can prepare for—like the holidays or a new product launch—and some you can’t.

When you get a sudden spike in calls, it can feel like the sky is falling. Your support team is overwhelmed with calls and struggling not to let it show in customer interactions.

While there are some things you can do when you’re in the thick of it, planning now for those intermittent spikes is the best way to set your team up for success.

And it all comes down to doing more with less, so you can make the most of finite agent resources while improving customer service.

Let’s look at what you can do now and in the future to prepare for unexpectedly high call volumes.

1. Dive into the data.

Unexpected call volume spikes always seem out of the blue—but are they really? Besides the obvious and expected busy periods (the holidays, the January return season, and new product launches), other reasons could trigger your support center surge.

Look at your call center data to see if there’s a rhyme or reason to your surges. Do they follow the busy season? Do they happen after new influencer launches? Do they align with college semesters?

Even if you can’t find any hard-and-fast patterns, maybe the data can help you with scheduling.

2. Optimize customer service agents’ schedules.

There’s something to be said for strategic scheduling. Poor scheduling can make a normal call day feel like a torrent. Take a look at what time of day your call volume peaks, and pull agents from other shifts to cover it.

More than that, you can get more granular by scheduling agents based on their strengths and abilities.

Move agents with faster resolution times to the busiest part of your day, and put agents with a slower, more methodical approach during your off time.

You can also schedule agents based on their specialties. For instance, if you’re dealing with a high volume of returns, make sure your team members with the most experience doing returns are working.

3. Cross-train other staff.

A great short-term solution to call volume spikes is to have additional staff you can call on to help. Cross-training a few team members on customer service means they’ll be ready to go when you need them most.

Call volumes aside, things happen. A bug can work its way through your customer service team, hiring can take longer than expected, the works. Having staff you can call on to help goes a long way.

Additionally, cross-training staff from other teams can also help reduce customer service agents’ burnout. When employees constantly handle high volumes of calls, it can take a toll on their mental and physical well-being.

By rotating staff from other teams, you can ensure that customer service agents are not overworked and are able to maintain a healthy work-life balance.

4. Embrace asynchronous messaging.

Asynchronous messaging (sometimes called async messaging or asynchronous chat) allows customers to converse with brands as it is convenient for them. Think about text messaging. While you can have a live back-and-forth conversation, you can also send a message and receive a reply an hour or two later.

Asynchronous messaging lets customers respond at their leisure and gives your customer service agents some breathing room to answer questions. Since customer responses stagger, your customer service agents can manage conversations with multiple people—as many as 8 at one time. That’s something you can’t do with live support.

File this under the category of “not something you can implement when you’re in a pinch.”

While embracing asynchronous messaging enables your customer service agents to handle multiple conversations at once, you need a runway to implement and train your team on using it to the fullest. Managing asynchronous messaging also works best with a powerful conversational platform behind it, like Quiq.

5. Take advantage of predictive text.

Another advantage of adding messaging to your help center tech stack is the use of predictive text. If you can’t deflect calls enough to lighten your team’s load, then the best thing to do is make them more efficient.

A conversational platform with AI-enhancing capabilities can actually make your team faster. AI-powered snippets in Quiq’s platform, for example, can help predict responses and provide answers for agents to tweak, personalize, and send off. That way, instead of searching for the answers, customer service agents always have them at their fingertips.

6. Add call-to-text to your IVR.

There’s no doubt that phone conversations take more of your agents’ time than messaging. Although you can’t keep customers from making phone calls to your help center, you can encourage them to message.

Most customers hang up after being on hold for 90 seconds, so when your call center is slammed, your customer satisfaction rates can plummet. Instead, give them the option to take the conversation to messaging.

Call-to-text can work through SMS text messaging or WhatsApp. Customers who don’t want to wait to get live support can get their answers via messaging as they go about their day. It’ll give your call center some much-needed relief.

Contact Us

7. Outsource some of your calls.

When under pressure, you can always look for an outside team to pick up the slack temporarily.

Make sure you set them up for success by putting together your best practices, important product knowledge, and policies and procedures. While you may not have time to thoroughly train them in the short term, they can help in a pinch.

8. Implement a queue system.

Whether you stick with calls or add messaging to your customer service feature, implementing an automated (and intelligent) queue management system can manage high volumes.

Ideally, the system can let customers know how long they should expect to remain on hold and give them other options to contact customer service if they don’t want to wait. This can include call-to-text as we mentioned earlier, but it should also include options like receiving a callback or sending an email.

9. Make ai chatbots a part of your team.

Chatbots aren’t going to replace your team, but they can support it. If you’ve made the leap to messaging, leaning on AI-enhanced chatbots can lighten the load on your team. There are several ways chatbots can help you do more with less:

  • Charge your chatbot with answering basic questions: When call volume is high, use your team to answer the more complex questions and have a chatbot answer the easy ones.
  • Collect information upfront: Chatbots can help gather information and route calls to the right agent to cut down on service time.
  • Walk customers through troubleshooting: If the reason for your call volume spike is a product issue, use chatbots to walk customers through the first few phases of troubleshooting. If the problem persists, then a customer service agent can pick up where the bot left off.

10. Proactively get ahead of problems.

Take care of issues before they reach your call center. While you can’t anticipate every problem and prevent people from calling altogether (nor should you), you can get ahead of issues.

Sometimes it’s as simple as putting your return policy in a more easily accessible place on your website. Or when a product issue arises, send emails and outbound text messages to address it before it blows up your call center.

Being upfront and transparent with customers will prevent a lot of anger and frustration before it gets to your call center.

11. Scale up when you need to.

As efficient as you are, there are times when you’ll need to scale up your help center quickly. If a volume spike turns into a sustained deluge, then it might be time to look at the budget and see if hiring additional support is feasible.

Manage high-volume calls with Quiq.

Predict high-volume call spikes when you can, and turn to Quiq when you can’t. Quiq’s Conversational AI platform can help you improve efficiency across the board—and especially during busy periods. Let agent overwhelm become a thing of the past and start improving your customer service strategy now.

How Messaging Delivers a Modern Customer Experience

Customer service has moved from call centers to contact centers—and they’ve gone next-gen. Technology has seen rapid growth over the last few years, and your customers’ expectations have grown with them.

Nothing has made this clearer than messaging’s rise as the ultimate customer service channel.

Keep reading to see how messaging delivers a modern customer service experience.

What customer service looks like at the end of 2022.

We’re at the end of 2022 and 2023 is coming up fast. Countless headlines threaten a looming recession (while others say it’s already here), you see inflation in every trip to the grocery store, and national conversations are filled with high emotions. The doom and gloom news cycle has everyone feeling down, and unfortunately, it’s spilling into everyday interactions with customer service.

The consumer mindset.

Expectedly, the current climate is affecting consumers across the board—and customer service teams are feeling it. According to Zendesk, 66% of companies report that customers are less patient when interacting with agents or service teams this year. What’s more, 18% of companies are more likely to report that customer satisfaction is somewhat or significantly below expectations than in 2021.

So, although we say it constantly, customer expectations truly are higher than ever. For contact centers, that generally means the need for faster response times, faster resolution times, and more one-touch resolutions.

Digital expectations remain high.

While businesses were forced to focus on digital customer service as in-person stores were closed during much of 2020, they’ve slacked off as the world returns to pre-pandemic lifestyles. Forrester’s US 2022 Customer Experience Index showed a 19% drop in CX across US consumer brands.

So while many consumers are returning to in-store shopping, they still expect the stellar service they received over the last two years. Anything less fails to meet expectations.

More conversations.

But there’s more at play than just speed. Customers are also looking for organic, conversational interactions. Zendesk reports that upwards of 70% of customers say they expect conversational experiences when interacting with brands.

What’s more conversational than messaging? Email has some formality (a holdover from its predecessor, letter writing), and phone calls require both parties to stop what they’re doing and focus on the one-to-one conversation. Messaging fits into the way people already have conversations—making it a natural next step in the evolution of modern customer service.

Messaging has revolutionized the customer experience.

There’s no doubt that messaging has changed the game when it comes to delivering a modern customer experience. It’s made customer service more accessible to younger generations who favor messaging over phone conversations, and it’s increased the speed at which contact centers can help customers.

Here are a few ways you can revolutionize your customer experience with messaging.

1. Deliver personalization with data.

One of the most frustrating things for customers is having to repeat their problems to every new person talk to in customer service. Plus, the rise of personalization has made access to customer databases a critical business need.

Opt for a conversational platform that integrates with the client databases (CRMs, ERPs, etc.) that you already use. Whether you use Salesforce, Zendesk, Oracle, etc., having easy access to customer information will help contact center agents personalize conversations and improve the customer experience.

2. Enhance conversations with AI.

How many chatbots have you encountered that felt like you were talking to a robot? Probably a fair amount. If you don’t ask your question the right way or put your answer in the right format, it’s all over. Basic chatbots rarely understand the nuances of human language, and they aren’t able to read context to make sense of a conversation.

But AI-enhanced chatbots aren’t like the others. Chatbots like Quiq’s use Natural Language Processing (NPL) to identify customer intent and base the conversation in the right context. This means more natural conversations between bots and customers and less of a strain on your contact support team.

3. Uplevel conversations with rich messages.

Messaging is more than a replacement for phone conversations—it’s a way to create rich, modern customer experiences. Rich messaging is an advanced form of text messaging that lets you send more visually engaging and interactive messages.

Instead of sending a message with a link to your website—where it’s easy for customers to get lost or distracted—you can send images and videos within the conversation. You can even securely complete the transactions right within the messaging app. Schedule appointments, send GIFs, or share high-resolution photos and videos—everything you need for modern customer service.

Optimize customer interactions with Quiq.

Meet the future of customer service head-on with Quiq’s Conversational AI Platform. Quiq makes it easy for customers to contact a business via messaging, the channel your customers already use to connect with family and friends. With Quiq, customers can engage with companies via SMS/text messaging, Facebook Messenger, web chat, in-app, WhatsApp, and more for a more modern customer service experience.

If you want to learn how you can easily deliver the modern customer experience by connecting with your customers contact us for a short demo.

[Infographic] 9 Effective Call Center Strategies You Can’t Miss

Effective call center strategies are essential to running a contact center. It’s not as simple as setting up a few phones and handing your team a script (although we’re sure no one has thought that since 2005). But it’s equally as likely that you’re so bogged down with managing the everyday realities that you can’t see the forest through the trees.

That is, you can’t see just how cluttered the contact center has become.

From staffing and training to managing operations and tracking KPIs, you spend too much time keeping a contact center running instead of doing what you do best: Connecting with customers.

That’s where Quiq comes in. Our Conversational AI Platform uses breakthrough technology to make it easier to engage customers, whether, through live chat (also known as web chat), text messaging, or social media.

Let’s take a look at ways to improve your call center efficiency and how Quiq can help you reduce the clutter with 9 effective call center strategies in a handy infographic:

9 ways to improve call center efficiency

Download as a PDF instead

The 9 effective call center strategies recap

Check out these call center strategies below:

  1. Streamline your current system.
  2. Boost agent productivity and efficiency.
  3. Drive down costs.
  4. Manage seasonal spikes and fluctuating demands.
  5. Remove friction.
  6. Improve the quality of your conversations with rich messaging.
  7. Engage more qualified leads.
  8. Increase conversions.
  9. Increase customer satisfaction.

1. Streamline your current system.

How do you currently connect with your customers? Fielding phone calls, emails, and the occasional DMs can leave communications scattered and your systems fragmented.

Here’s what can happen with you don’t have a single, consolidated platform:

  • Customer conversations can slip through the cracks.
  • Your team wastes time switching between apps, programs, and windows.
  • Disparate technology becomes outdated and overpriced.
  • With no support for asynchronous communication, conversations can only happen one at a time.
  • Measuring performance requires pulling metrics from multiple sources, a time-consuming and arduous process.

Quiq lets your agents connect with customers across various channels in a singular platform. You’ll improve your contact center operational efficiency with conversations, survey results, and performance data all in one easy-to-use interface.

2. Boost agent productivity and efficiency.

How do your customer service agents go about their day? Are they handling one call at a time? Reinventing the wheel with every new conversation? Switching between apps and email and phone systems?

Outdated technology (or a complete lack of it) makes handling customer conversations inherently more difficult. Switching to a messaging-first strategy with Quiq increases the speed with which agents can tackle customer conversations.

Switching to asynchronous messaging (that is, messaging that doesn’t require both parties to be present at the same time) enables agents to handle 6–8 conversations at once. Beyond conversation management, Quiq helps optimize agent performance with AI-enhanced tools like bots, snippets, sentiment analysis, and more.

3. Drive down costs.

It’s time to stop looking at your contact center as a black hole for your profits. At the most basic level, your customer service team’s performance is measured by how many people they can serve in a period of time, which means time is money.

The longer it takes your agents to solve problems, whether they’re searching for the answer, escalating to a higher customer service level, or taking multiple conversations to find a solution, the more it impacts your bottom line.

Even simple questions, like “Where’s my order?” inquiries needlessly slow down your contact center. Managing your contact center’s operations is overwhelming, to say the least.

Need a Quiq solution? We have many. Let’s start with conversation queuing. Figuring out a customer’s problem and getting to the right person or department eats away at time that could be spent finding a solution. Quiq routes conversations to the right person, significantly reducing resolution times. Agents can also seamlessly loop in other departments or a manager to solve a problem quickly.

Beyond improving your contact center’s operational efficiency, messaging is 3x less expensive than the phone.

4. Manage seasonal spikes and fluctuating demands.

All contact centers face the eternal hiring/firing merry-go-round struggle. You probably get busy around the holidays and slow down in January. Or maybe September is your most active season, and your team shrinks through the rest of the year. While you can’t control when you’re busy and when you’re slow, you can control how you respond to those fluctuations.

Manage seasonal spikes by creating your own chatbot using Quiq’s AI engine. Work with our team to design bot conversations that use Natural Language Processing (NPL) to assist customers with simple questions. Chatbots can also improve agent resolution times by collecting customer information upfront to speed up conversations.

Daily Harvest’s chatbot, Sage, was able to contain 60% of conversations, which means their human agents saw a vast reduction in call volume. Perfect for managing the holiday rush.

5. Remove friction.

How hard is it for your customers to contact your help center? Do they have to fill out a web form, wait for an email, and set up a phone call? Is there a number in fine print in the depths of your FAQ page? Some companies make it difficult for customers to interact with their team, hoping that they’ll spend less money if there are fewer calls and emails. But engaging with customers can improve company perception, boost sales, and deepen customer loyalty.

That’s why Quiq makes it easy for your team and customers to connect. From live chat to SMS/text and Google Business Messaging to WhatsApp, customers can connect with your team on their preferred channel.

6. Improve the quality of your conversations with rich messaging.

Email and phone conversations are, in a word, boring. Whether you’re an e-commerce company selling products or a service provider helping customers troubleshoot problems with their latest device, words aren’t always enough. That’s why Quiq offers rich messaging.

What is rich messaging? It’s an advanced form of text messaging that includes multimedia, like GIFs, high-resolution photos, or video. It also includes interactive tools, like appointment scheduling, transaction processing, and more.

You can use rich messaging to give customers a better service experience. Whether sending them product recommendations or a video walkthrough, they’ll get a fully immersed experience.

7. Engage more qualified leads.

Do leads die in your contact center? Let’s face it: your contact center isn’t the place to handle high-value leads. Yet when warm leads find themselves there, you need a way to track, qualify, and engage them.

Here’s where chatbots can help with marketing. Quiq’s chatbots can help you identify qualified leads by engaging with your prospect and collecting information before it ever gets to your sales team.

A great example we’ve seen is from General Assembly. With the Quiq team by their side, they created a bot that helped administer a quiz and captured and nurtured leads interested in specific courses. This helped them strengthen the quality of their leads and achieve a 26% conversion rate, which leads us to our next factor for an effective call center strategy.

8. Increase conversions.

If you haven’t stopped viewing your call center as a cost center, this next topic should change your mind. While many contact centers focus on customer service, which can lean heavily toward complaints and post-purchase problems, there’s also tons of profit potential via effective call center strategies.

Adding messaging to your contact center opens up more opportunities to engage with your customers across the web. Live chat is a great way to talk to your customers at key points in the buyers’ journey. Using a chatbot to assist shoppers in navigating your website makes shoppers 3x more likely to convert to a sale than unassisted visitors.

Combining AI and human agents with Quiq’s conversational platform gives your customers the best experience possible without adding to your contact center’s workload—and it can lead to an 85% reduction in abandoned shopping carts. Plus, Quiq integrates with your ERP system so customer data is always at your team’s fingertips.

9. Increase customer satisfaction.

Customer satisfaction is likely your call center’s #1 goal. Yet outdated phone systems and substandard technology isn’t the best solution to improve call center agent performance.

Quiq empowers agents to be more efficient, which reduces your customer’s wait time and helps ensure customers get the best service possible. Quiq customers often increase their customer satisfaction ratings by about 15 points.

And the best way to increase your ratings? With regular, in-context surveys. Our conversational platform helps you and your agents get instant customer feedback. Customers can seamlessly respond to surveys right from within the channel they used to connect with your customer service.

Give contact center clutter a Quiq goodbye with effective call center strategies.

There’s no place in an efficient business for a cluttered contact center. Outdated systems, slow processes, and a lack of support can overwhelm your agents—and keep them from performing their best for your customers.

Now that you’re equipped with ways to improve call center efficiency, it’s time to see it in action. Quiq’s Conversational AI Platform empowers your team to work more efficiently and create happier customers.

Contact Us