5 Tips for Coaching Your Contact Center Agents to Work with AI

Generative AI has enormous potential to change the work done at places like contact centers. For this reason, we’ve spent a lot of energy covering it, from deep dives into the nuts and bolts of large language models to detailed advice for managers considering adopting it.

Here, we will provide tips on using AI tools to coach, manage, and improve your agents.

How Will AI Make My Agents More Productive?

Contact centers can be stressful places to work, but much of that stems from a paucity of good training and feedback. If an agent doesn’t feel confident in assuming their responsibilities or doesn’t know how to handle a tricky situation, that will cause stress.

Tip #1: Make Collaboration Easier

With the right AI tools for coaching agents, you can get state-of-the-art collaboration tools that allow agents to invite their managers or colleagues to silently appear in the background of a challenging issue. The customer never knows there’s a team operating on their behalf, but the agent won’t feel as overwhelmed. These same tools also let managers dynamically monitor all their agents’ ongoing conversations, intervening directly if a situation gets out of hand.

Agents can learn from these experiences and become more effective over time.

Tip #2: Use Data-Driven Management

Speaking of improvement, a good AI platform will have resources that help managers get the most out of their agents in a rigorous, data-driven way. Of course, you’re probably already monitoring contact center metrics, such as CSAT and FCR scores, but this barely scratches the surface.

What you really need is a granular look into agent interactions and their long-term trends. This will let you answer questions like “Am I overstaffed?” and “Who are my top performers?” This is the only way to run a tight ship and keep all the pieces moving effectively.

Tip #3: Use AI To Supercharge Your Agents

As its name implies, generative AI excels at generating text, and there are several ways this can improve your contact center’s performance.

To start, these systems can sometimes answer simple questions directly, which reduces the demands on your team. Even when that’s not the case, however, they can help agents draft replies, or clean up already-drafted replies to correct errors in spelling and grammar. This, too, reduces their stress, but it also contributes to customers having a smooth, consistent, high-quality experience.
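
To make this concrete, here's a minimal sketch of how an LLM might be used to clean up an agent's draft before it goes out. It assumes the openai Python package and an API key in your environment; the model name and instructions are illustrative, not a prescription.

```python
# A minimal sketch of LLM-assisted reply cleanup, assuming the `openai` package
# and an API key in the environment. Model name and instructions are illustrative.
from openai import OpenAI

client = OpenAI()

def polish_reply(draft: str) -> str:
    """Fix spelling, grammar, and tone without changing the meaning of the draft."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": "Correct spelling and grammar and keep a friendly, "
                           "professional tone. Do not change the meaning.",
            },
            {"role": "user", "content": draft},
        ],
        temperature=0,  # keep the cleanup predictable
    )
    return response.choices[0].message.content

print(polish_reply("sorry abt the delay, ur refund shld arrive in 3-5 busines days"))
```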

Tip #4: Use AI to Power Your Workflows

A related (but distinct) point concerns how AI can be used to structure the broader work your agents are engaged in.

Let’s illustrate using sentiment analysis, which makes it possible to assess the emotional state of a person doing something like filing a complaint. This can form part of a pipeline that sorts and routes tickets based on their priority, and it can also detect when an issue needs to be escalated to a skilled human professional.
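
Here's a rough sketch of what such a pipeline might look like. It assumes the Hugging Face transformers package, and the threshold and queue names are invented for illustration.

```python
# A minimal sketch of sentiment-based ticket routing, assuming the Hugging Face
# `transformers` package. The threshold and queue names are invented.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # downloads a small default model

def route_ticket(ticket_text: str) -> str:
    """Return a queue name based on the customer's apparent emotional state."""
    result = sentiment(ticket_text)[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.98}
    if result["label"] == "NEGATIVE" and result["score"] > 0.9:
        return "escalate_to_human_agent"  # clearly upset customer, skip the bot
    if result["label"] == "NEGATIVE":
        return "standard_agent_queue"
    return "low_priority_queue"           # neutral or positive tone

print(route_ticket("I've been double-billed twice and nobody will call me back!"))
```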

Tip #5: Train Your Agents to Use AI Effectively

It’s easy to get excited about what AI can do to increase your efficiency, but you mustn’t lose sight of the fact that it’s a complex tool your team needs to be trained to use. Otherwise, it’s just going to be one more source of stress.

You need to have policies around the situations in which it’s appropriate to use AI and the situations in which it’s not. These policies should address how agents should deal with phenomena like “hallucination,” in which a language model will fabricate information.

They should also contain procedures for monitoring the performance of the model over time. Because these models are stochastic, they can generate surprising output, and their behavior can change.

You need to know what your model is doing to intervene appropriately.

Wrapping Up

Hopefully, you’re more optimistic about what AI can do for your contact center, and this has helped you understand how to make the most out of it.

If there’s anything else you’d like to go over, you’re always welcome to request a demo of the Quiq platform. Since we focus on contact centers, we take customer service pretty seriously ourselves, and we’d love to give you the context you need to make the best possible decision!

AI Gold Rush: How Quiq Won the Land Grab for AI Contact Centers (& How You Can Benefit)

There have been many transformational moments throughout the history of the United States, going back all the way to its unique founding.

Take for instance the year 1849.

For all of you San Francisco 49ers fans (sorry, maybe next year), you are very well aware of the land grab that was the birth of the state of California. That year, tens of thousands of people from the Eastern United States flocked to the California Territory hoping to strike it rich in a placer gold strike.

A lesser-known fact of that moment in history is that the gold strike in California was actually in 1848. And while all of those easterners were lining up for the rush, a small number of people from Latin America and Hawaii were already in production, stuffing their pockets full of nuggets.

176 years later, AI is the new gold rush.

Fast forward to 2024, a new crowd is forming, working toward the land grab once again. Only this time, it’s not physical.

It’s AI in the contact center.

Companies are building infrastructure, hiring engineers, inventing tools, and trying to figure out how to build a wagon that won’t disintegrate on the trail (AKA hallucinate).

While many of those companies are going to make it to the gold fields, one has been there since 2023, and that is Quiq.

Yes, we’ve been mining LLM gold in the contact center since July of 2023 when we released our first customer-facing Generative AI assistant for Loop Insurance. Since then, we have released over a dozen more and have dozens more under construction. More about the quality of that gold in a bit.

This new gold rush in the AI space is becoming more crowded every day.

Everyone is saying they do Generative AI in one way, shape, or form. Most are offering some form of Agent Assist using LLM technologies, keeping that human in the loop and relying on small increments of improvement in AHT (Average Handle Time) and FCR (First Contact Resolution).

However, there is a difference when it comes to how platforms are approaching customer-facing AI Assistants.

Actually, there are a lot of differences. That’s a big reason we invented AI Studio.

AI Studio: Get your shovels and pick axes.

Since we’ve been on the bleeding edge of Generative AI CX deployments, we created AI Studio. We saw a gap for CX teams, who were forced to stitch together a myriad of tools while trying to stay focused on business outcomes.

AI Studio is a complete toolkit to empower companies to explore nuances in their AI use within a conversational development environment that’s tailored for customer-facing CX.

That last part is important: Customer-facing AI assistants, which teams can create together using AI Studio. Going back to our gold rush comparison, AI Studio is akin to the pick axes and shovels you need.

Only this time, success is guaranteed, and the proverbial gold at the end of the journey is much, much more enticing, precisely because customer-facing AI applications tend to move the needle dramatically further than simpler Agent Assist LLM builds.

That brings me to the results.

So how good is our gold?

Early results are showing that our LLM implementations are increasing resolution rates 50% to 100% above what was achieved using legacy NLU intent-based models, with resolution rates north of 60% in some FAQ-heavy assistants.

Loop Insurance saw a 55% reduction in email tickets in their contact center.

Second, intent matching has more than doubled, meaning intents (especially when a message contains several) are being correctly recognized and responded to far more often, which directly translates into correct answers, fewer agent contacts, and satisfied customers.

That’s just the start though. Molekule hit a 60% resolution rate with a Quiq-built LLM-powered AI assistant. You can read all about that in our case study here.

And then there’s Accor, whose AI assistant across four Rixos properties has doubled (yes, 2X’ed) click-outs on booking links. Check out that case study here.

What’s next?

Like the miners in 1848, digging as much gold out of the ground as possible before the land rush, Quiq sits alone, out in front of a crowd lining up for a land grab.

With a dozen customer-facing LLM-powered AI assistants already living in the market producing incredible results, we have pioneered a space that will be remembered in history as a new day in Customer Experience.

Interested in harnessing Quiq’s power for your CX or contact center? Send us a demo request or get in touch another way and let’s talk.

Google Business Messages: Meet Your Customers Where They’re At

The world is a distracted and distracting place; between all the alerts, the celebrity drama on Twitter, and the fact that there are more hilarious animal videos on YouTube than you could ever hope to watch even if it were your full-time job, it takes a lot to break through the noise.

That’s one reason customer service-oriented businesses like contact centers are increasingly turning to text messaging. Not only are cell phones all but ubiquitous, but many people have begun to prefer text-message-based interactions to calls, emails, or in-person visits.

In this article, we’ll cover one of the biggest text-messaging channels: Google Business Messages. We’ll discuss what it is, what features it offers, and various ways of leveraging it to the fullest.

Let’s get going!

What is Google Business Messages?

Given that more than nine out of ten online searches go through Google, we will go out on a limb and assume you’ve heard of the Mountain View behemoth. But you may not be aware that Google has a Business Message service that is very popular among companies, like contact centers, that understand the advantages of texting their customers.

Business Messages allows you to create a “messaging surface” on Android or Apple devices. In practice, this essentially means that you can create a little “chat” button that your customers can use to reach out to you.

Behind the scenes, you will have to register for Business Messages, creating an “agent” that your customers will interact with. You have many configuration options for your Business Messages workflows; it’s possible to dynamically route a given message to contact center agents at a specific location, have an AI assistant powered by large language models generate a reply (more on this later), etc.

Regardless of how the reply is generated, it is then routed through the API to your agent, which is what actually interacts with the customer. A conversation is considered over when both the customer and your agent cease replying, but you can resume a conversation up to 30 days later.
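
For the technically curious, the flow might look something like the sketch below: Google delivers the customer's message to your webhook, your system produces a reply (from a human agent or an AI assistant), and you post it back to the conversation. The endpoint path, payload fields, and authentication details are assumptions for illustration; consult Google's Business Messages documentation for the exact contract.

```python
# A minimal sketch of the flow described above. Field names, the endpoint path,
# and auth handling are assumptions; see Google's Business Messages docs for
# the real contract.
import requests
from flask import Flask, request, jsonify

app = Flask(__name__)
API_BASE = "https://businessmessages.googleapis.com/v1"  # assumed base URL

def generate_reply(customer_text: str) -> str:
    # Placeholder: hand off to a human agent or an LLM-backed assistant here.
    return "Thanks for reaching out! An agent will be with you shortly."

@app.route("/business-messages/webhook", methods=["POST"])
def webhook():
    event = request.get_json()
    conversation_id = event["conversationId"]   # assumed field name
    customer_text = event["message"]["text"]    # assumed field name

    reply = generate_reply(customer_text)
    requests.post(
        f"{API_BASE}/conversations/{conversation_id}/messages",
        json={"text": reply, "representative": {"representativeType": "BOT"}},
        headers={"Authorization": "Bearer <access-token>"},  # OAuth token goes here
        timeout=10,
    )
    return jsonify({"status": "ok"})
```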

What’s the Difference Between Google RCS and Google Business Messages?

It’s easy to confuse Google’s Rich Communication Services (RCS) and Google Business Messages. Although the two are similar, it’s nevertheless worth remembering their differences.

Long ago, text messages had to be short, sweet, and contain nothing but words. But as we all began to lean more on text messaging to communicate, it became necessary to upgrade the basic underlying protocol. This way, we could also use video, images, GIFs, etc., in our conversations.

“Rich” communication is this upgrade, but it’s not relegated to emojis and such. RCS is also quickly becoming a staple for businesses that want to invest in livelier exchanges with their customers. RCS allows for custom logos and consistent branding, for example; it also makes it easier to collect analytics, insert QR codes, link out to calendars or Maps, etc.

As discussed above, Business Messages is a mobile messaging channel that integrates with Google Maps, Search, and brand websites, offering rich, asynchronous communication experiences. This platform not only makes customers happy but also contributes to your business’s bottom line through reduced call volumes, improved CSAT, and better conversion rates.

Importantly, Business Messages are sometimes also prominently featured in Google search results, such as answer cards, place cards, and site links.

In short, there is a great deal of overlap between Google Business Messages and Google RCS. But two major distinctions are that RCS is not available on all Android devices (whereas Business Messages is), and Business Messages doesn’t require you to have a messaging app installed (whereas RCS does).

The Advantages of Google Business Messaging

Google Business Messaging has many distinct advantages to offer the contact center entrepreneur. In the next few sections, we’ll discuss some of the biggest.

It Supports Robust Encryption

A key feature of Business Messages is its commitment to security and privacy, embodied in powerful end-to-end encryption.

What exactly does end-to-end encryption entail? In short, it ensures that a message remains secure and unreadable from the moment the sender types it to whenever the recipient opens it, even if it’s intercepted in transit. This level of security is baked in, requiring no additional setup or adjustments to security settings by the user.

The significance of this feature cannot be overstated. Today, it’s not at all uncommon to read about yet another multi-million-dollar ransomware attack or a data breach of staggering proportions. This has engendered a growing awareness of (and concern for) data security, meaning that present and future customers will value those platforms that make it a central priority of their offering.

By our estimates, this will only become more important with the rise of generative AI, which has made it increasingly difficult to trust text, images, and even movies seen online—none of which was particularly trustworthy even before it became possible to mass-produce them.

If you successfully position yourself as a pillar your customers can lean on, that will go a long way toward making you stand out in a crowded market.

It Makes Connecting With Customers Easier

Another advantage of Google Business Messages is that it makes it much easier to meet customers where they are. And where we are is “on our phones.”

Now, this may seem too obvious to need pointing out. After all, if your customers are texting all day and you’re launching a text-messaging channel of communication, then of course you’ll be more accessible.

But there’s more to this story. Google Business Messaging allows you to seamlessly integrate with other Google services, like Google Maps. If a customer is trying to find the number for your contact center, therefore, they could instead get in touch simply by clicking the “CHAT” button.

This, too, may seem rather uninspiring because it’s not as though it’s difficult to grab the number and call. But even leaving aside the rising generations’ aversion to making phone calls, there’s a concept known as “trivial inconvenience” that’s worth discussing in this context.

Here’s an example: if you want to stop yourself from snacking on cookies throughout the day, you don’t have to put them on the moon (though that would help). Usually, it’s enough to put them in the next room or downstairs.

Though this only slightly increases the difficulty of accessing your cookie supply, in most cases, it introduces just enough friction to substantially reduce the number of cookies you eat (depending on the severity of your Oreo addiction, of course).

Well, the exact same dynamic works in reverse. Though grabbing your contact center’s phone number from Google and calling you requires only one or two additional steps, that added work will be sufficient to deter some fraction of customers from reaching out. If you want to make yourself easy to contact, there’s no substitute for a clean integration directly into the applications your customers are using, and that’s something Google Business Messages can do extremely well.

It’s Scalable and Supports Integrations

According to legend, the name “Google” originally came from a play on the word “Googol,” which is a “1” followed by 100 “0”s. Google, in other words, has always been about scale, and that is reflected in the way its software operates today.

For our purposes, the most important manifestation of this is the scalability of its API. Though you may currently be operating at a few hundred or a few thousand messages per day, if you plan on growing, you’ll want to invest early in communication channels that can grow along with you.

But this is hardly the end of what integrations can do for you. If you’re in the contact center business, there’s a strong possibility that you’ll eventually end up using a large language model like ChatGPT in order to answer questions more quickly, offload more routine tasks, etc. Unless you plan on dropping millions of dollars to build one in-house, you’ll want to partner with an AI-powered conversational platform.

As you go about finding a good vendor, make sure to assess the features they support. The best platforms have many options for increasing the efficiency of your agents, such as reusable snippets, auto-generated suggestions that clean up language and tone, and dashboarding tools that help you track your operation in detail.

Best Practices for Using Google Business Messages

Here, in the penultimate section, we’ll cover a few optimal ways of utilizing Google Business Messages.

Reply in a Timely Fashion

First, it’s important that you get back to customers as quickly as you’re able to. As we noted in the introduction, today’s consumers are perpetually drinking from a firehose of digital information. If it takes you a while to respond to their query, there’s a good chance they’ll either forget they reached out (if you’re lucky) or perceive it as an unpardonable affront and leave you a bad review (if you’re not).

An obvious way to answer immediately is with an automated message that says something like, “Thanks for your question. We’ll respond to you soon!” But you can’t just leave things there, especially if the question requires a human agent to intervene.

Whatever automated system you implement, you need to monitor how well your filters identify and escalate the most urgent queries. Remember that an agent might need a few hours to answer a tricky question, so factor that into your procedures.

This isn’t just something Google suggests; it’s codified in its policies. If you leave a Business Messages chat unanswered for 24 hours, Google might actually deactivate your company’s ability to use chat features.

Don’t Ask for Personal Information

As hackers have gotten more sophisticated, everyday consumers have responded by raising their guard.

On the whole, this is a good thing and will lead to a safer and more secure world. But it also means that you need to be extremely careful not to ask for anything like a social security number or a confirmation code via a service like Business Messages. What’s more, many companies are opting to include a disclaimer to this effect near the beginning of any interactions with customers.

Earlier, we pointed out that Business Messages supports end-to-end encryption, and having a clear, consistent policy about not collecting sensitive information fits into this broader picture. People will trust you more if they know you take their privacy seriously.

Make Business Messages Part of Your Overall Vision

Google Business Messages is a great service, but you’ll get the most out of it if you consider how it is part of a more far-reaching strategy.

At a minimum, this should include investing in other good communication channels, like Apple Messages and WhatsApp. People have had bitter, decades-long battles with each other over which code editor or word processor is best, so we know that they have strong opinions about the technology that they use. If you have many options for customers wanting to contact you, that’ll boost their satisfaction and their overall impression of your contact center.

The prior discussion of trivial inconveniences is also relevant here. It’s not hard to open a different messaging app under most circumstances, but if you don’t force a person to do that, they’re more likely to interact with you.

Schedule a Demo with Quiq

Google has been so monumentally successful its name is now synonymous with “online search.” Even leaving aside rich messaging, encryption, and everything else we covered in this article, you can’t afford to ignore Business Messages for this reason alone.

But setting up an account is only the first step in the process, and it’s much easier when you have ready-made tools that you can integrate on day one. The Quiq conversational AI platform is one such tool, and it has a bevy of features that’ll allow you to reduce the workloads on your agents while making your customers even happier. Check us out or schedule a demo to see what we can do for you!

WhatsApp Business: A Guide for Contact Center Managers

In today’s digital era, businesses continually seek innovative ways to connect with their customers, striving to enhance communication and foster deeper relationships. Enter WhatsApp Business – a game-changer in the realm of digital communication. This powerful tool is not just a messaging app; it’s a bridge between businesses and their customers, offering a plethora of features designed to streamline communication, improve customer service, and boost engagement. Whether you’re a small business owner or part of a global enterprise, understanding the potential of WhatsApp Business could redefine your approach to customer communication.

What is WhatsApp Business?

WhatsApp is an application that supports text messaging, voice messaging, and video calling for over two billion global users. Because it leverages a simple internet connection to send and receive data, WhatsApp users can avoid the fees that once made communication so expensive.

Since WhatsApp already has such a large base of enthusiastic users, many international brands have begun leveraging it to communicate with their own audiences. It also has a number of built-in features that make it an attractive option for businesses wanting to establish a more personal connection with their customers, and we’ll cover those in the next section.

What Features Does WhatsApp Business Have?

In addition to its reach and the fact that it reduces the budget needed for communication, WhatsApp Business has additional functionality that makes it ideal for any business trying to interact with its customers.

When integrated with a tool like the Quiq conversational AI platform, WhatsApp Business can automatically transcribe voice-based messages. Even better, WhatsApp Business allows you to export these conversations later if you want to analyze them with a tool like natural language processing.

If your contact center agents and the customers they’re communicating with have both set a “preferred language,” WhatsApp can dynamically translate between these languages to make communication easier. So, if a user sends a voice message in Russian and the agent wants to communicate in English, they’ll have no trouble understanding one another.

What are the Differences Between WhatsApp and WhatsApp Business?

Before we move on, it’s worth pointing out that WhatsApp and WhatsApp Business are two different services. On its own, WhatsApp is the most widely used messaging application in the world. Businesses can use WhatsApp to talk to their customers, but with a WhatsApp Business account, they get a few extra perks.

Mostly, these perks revolve around building brand awareness. Unlike a basic WhatsApp account, a WhatsApp Business account allows you to include a lot of additional information about your company and its services. It also provides a labeling system so that you can organize the conversations you have with customers, and a variety of other tools so you can respond quickly and efficiently to any issues that come up.

The Advantages of WhatsApp Messaging for Businesses

Now, let’s spend some time going over the myriad advantages offered by a WhatsApp outreach strategy. Why, in other words, would you choose to use WhatsApp over its many competitors?

Global Reach and Popularity

First, we’ve already mentioned the fact that WhatsApp has achieved worldwide popularity, and in this section, we’ll drill down into more specifics.

When WhatsApp was acquired by Meta in 2014, it already boasted 450 million active users per month. Today, this figure has climbed to a remarkable 2.7 billion, but it’s believed it will reach a dizzying 3.14 billion as early as 2025.

With over 535 million users, India is the country where WhatsApp has gained the most traction by far. Brazil is second with 148 million users, and Indonesia is third with 112 million users.

The gender divide among WhatsApp users is pretty even – men account for just shy of 54% of WhatsApp users, so they have only a slight majority.

The app itself has over 5 billion downloads from the Google Play store alone, and it’s used to send 140 billion messages each day.

These data indicate that WhatsApp could be a very valuable channel to cultivate, regardless of the market you’re looking to serve or where your customers are located.

Personalized Customer Interactions

For starters, platforms like WhatsApp enable businesses to customize communication with a level of scale and sophistication previously unavailable.

This customization is powered by machine learning, a technology that has consistently led the charge in the realm of automated content personalization. For example, Spotify’s ability to analyze your listening patterns and suggest music or podcasts that match your interests is powered by machine learning. Now, thanks to advancements in generative AI, similar technology is being applied to text messaging.

Past language models often fell short in providing personalized customer interactions. They tended to be more “rule-based” and, therefore, came off as “mechanical” and “unnatural.” However, contemporary models greatly improve agents’ capacity to adapt their messages to a particular situation.

While none of this suggests generative AI is going to entirely take the place of the distinctive human mode of expression, for a contact center manager aiming to improve customer experience, this marks a considerable step forward.

Below, we have a section talking a little bit more about integrating AI into WhatsApp Business.

End-to-End Encryption

One thing that has always been a selling point for WhatsApp is that it takes security and privacy seriously. This is manifested most obviously in the fact that it encrypts all messages end-to-end.

What does this mean? From the moment you start typing a message to another user all the way through when they read it, the message is protected. Even if another party were to somehow intercept your message, they’d still have to crack the encryption to read it. What’s more, all of this is enabled by default – you don’t have to spend any time messing around with security settings.

This might be more important than you realize. We live in a world increasingly beset by data breaches and ransomware attacks, and more people are waking up to the importance of data security and privacy. This means that a company that takes these aspects of its platform very seriously could have a leg up where building trust is concerned. Your users want to know that their information is safe with you, and using a messaging service like WhatsApp will help to set you apart.

Scalability

Finally, WhatsApp’s Business API is a sophisticated programmatic interface designed to scale your business’s outreach capabilities. By leveraging this tool, companies can connect with a broader audience, extending their reach to prospects and customers across various locations. This expansion is not just about increasing numbers; it’s about strategically enhancing your business’s presence in the digital world, ensuring that you’re accessible whenever your customers need to reach out to you.

By understanding the value WhatsApp’s Business API brings in reaching and engaging with more people effectively, you can make an informed decision about whether it represents the right technological solution for your business’s expansion and customer engagement strategies.

Enhancing Contact Center Performance with WhatsApp Messaging

Now, let’s turn our attention to some of the concrete ways in which WhatsApp can improve your company’s chances of success!

Improving Response Times and Resolution Metrics

Integrating technologies like WhatsApp Business into your agent workflow can drastically improve efficiency, simultaneously reducing response times and boosting customer satisfaction. Agents often have to manage several conversations at once, and it can be challenging to keep all those plates spinning.

However, a quality messaging platform like WhatsApp means they’re better equipped to handle these conversations, especially when utilizing tools like Quiq Compose.

Additionally, less friction in resolving routine tasks means agents can dedicate their focus to issues that necessitate their expertise. This not only leads to more effective problem-solving but also means that fewer customer inquiries are overlooked or terminated prematurely.

Integrating Artificial Intelligence

According to WhatsApp’s own documentation, there’s an ongoing effort to expand the API to allow for the integration of chatbots, AI assistants, and generative AI more broadly.

Today, these technologies possess a surprisingly sophisticated ability to conduct basic interactions, answer straightforward questions, and address a wide range of issues, all of which play a significant role in boosting customer satisfaction and making agents more productive.

We can’t say for certain when WhatsApp will roll out the red carpet for AI vendors like Quiq, but if our research over the past year is any indication, it will make it dramatically easier to keep customers happy!

Gathering Customer Feedback

Lastly, an additional advantage to WhatsApp messaging is the degree to which it facilitates collecting customer feedback. To adapt quickly and improve your services, you have to know what your customers are thinking. And more specifically, you have to know the details about what they like and dislike about your product or service.

In the Olde Days (i.e., 20 years ago or so), the only real way to do this was by conducting focus groups, sending out surveys – sometimes through the actual mail, if you can believe it – or doing something similarly labor-intensive.

Today, however, your customers are almost certainly walking around with a smartphone that supports text messaging. And, since it’s pretty easy for them to answer a few questions or dash off a few quick lines describing their experience with your service, odds are that you can gather a great deal of feedback from them.

Now, we hasten to add that you must exercise a certain degree of caution in interpreting this kind of feedback, as getting an accurate gauge of customer sentiment is far from trivial. To name just one example, the feedback might be exaggerated in both the positive and negative direction because the people most likely to send feedback via text messaging are the ones who really liked or really didn’t like you.

That said, so long as you’re taking care to contextualize the information coming from customers, supplementing it with additional data wherever appropriate, it’s valuable to have.

Wrapping Up

From its global reach and popularity to the personalized customer interactions it facilitates, WhatsApp Business stands out as a powerful solution for businesses aiming to enhance their digital presence and customer engagement strategies. By leveraging the advanced features of WhatsApp Business, companies can avail themselves of end-to-end encryption, enjoy scalability, and improve contact center performance, thereby positioning themselves at the forefront of the contact center game.

And speaking of being at the forefront, the Quiq conversational CX platform offers a staggering variety of different tools, from AI assistants powered by language models to advanced analytics on agent performance. Check us out or schedule a demo to see what we can do for your contact center!

6 Questions to Ask Generative AI Vendors You’re Evaluating

With all the power exhibited by today’s large language models, many businesses are scrambling to leverage them in their offerings. Enterprises in a wide variety of domains – from contact centers to teams focused on writing custom software – are adding AI-backed functionality to make their users more productive and the customer experience better.

But, in the rush to avoid being the only organization not using the hot new technology, it’s easy to overlook certain basic sanity checks you must perform when choosing a vendor. Today, we’re going to fix that. This piece will focus on several of the broad categories of questions you should be asking potential generative AI providers as you evaluate all your options.

This knowledge will give you the best chance of finding a vendor that meets your requirements, will help you with integration, and will ultimately allow you to better serve your customers.

These Are the Questions You Should Ask Your Generative AI Vendor

Training large language models is difficult. Besides the fact that it requires an incredible amount of computing power, there are also hundreds of tiny little engineering optimizations that need to be made along the way. This is part of the reason why all the different language model vendors are different from one another.

Some have a longer context window, others write better code but struggle with subtle language-based tasks, etc. All of this needs to be factored into your final decision because it will impact how well your vendor performs for your particular use case.

In the sections that follow, we’ll walk you through some of the questions you should raise with each vendor. Most of these questions are designed to help you get a handle on how easy a given offering will be to use in your situation, and what integrating it will look like.

1. What Sort of Customer Service Do You Offer?

We’re contact center and customer support people, so we understand better than anyone how important it is to make sure users know what our product is, what it can do, and how we can help them if they run into issues.

As you speak with different generative AI vendors, you’ll want to probe them about their own customer support, and what steps they’ll take to help you utilize their platform effectively.

For this, just start with the basics by figuring out the availability of their support teams – what hours they operate in, whether they can accommodate teams working in multiple time zones, and whether there is an option for 24/7 support if a critical problem arises.

Then, you can begin drilling into specifics. One thing you’ll want to know about is the channels their support team operates through. They might set up a private Slack channel with you so you can access their engineers directly, for example, or they might prefer to work through email, a ticketing system, or a chat interface. When you’re discussing this topic, try to find out whether you’ll have a dedicated account manager to work with.

You’ll also want some context on the issue resolution process. If you have a lingering problem that’s not being resolved, how do you go about escalating it, and what’s the team’s response time for issues in general?

Finally, it’s important that the vendors have some kind of feedback mechanism. Just as you no doubt have a way for clients to let you know if they’re dissatisfied with an agent or a process, the vendor you choose should offer a way for you to let them know how they’re doing so they can improve. This not only tells you they care about getting better, it also indicates that they have a way of figuring out how to do so.

2. Does Your Team Offer Help with Setting up the Platform?

A related subject is the extent to which a given generative AI vendor will help you set up their platform in your environment. A good way to begin is by asking what kinds of training materials and resources they offer.

Many vendors are promoting their platforms by putting out a ton of educational content, all of which your internal engineers can use to get up to speed on what those platforms can do and how they function.

This is the kind of thing that is easy to overlook, but you should pay careful attention to it. Choosing a generative AI vendor that has excellent documentation, plenty of worked-out examples, etc. could end up saving you a tremendous amount of time, energy, and money down the line.

Then, you can get clarity on whether the vendor has a dedicated team devoted to helping customers like you get set up. These roles are usually found under titles like “solutions architect”, so be sure to ask whether you’ll be assigned such a person and the extent to which you can expect their help. Some platforms will go to the moon and back to make sure you have everything you need, while others will simply advise you if you get stuck somewhere.

Which path makes the most sense depends on your circumstances. If you have a lot of engineers you may not need more than a little advice here and there, but if you don’t, you’ll likely need more handholding (but will probably also have to pay extra for that). Keep all this in mind as you’re deciding.

3. What Kinds of Integrations Do You Support?

Now, it’s time to get into more technical details about the integrations they support. When you buy a subscription to a generative AI vendor, you are effectively buying a set of capabilities. But those capabilities are much more valuable if you know they’ll plug in seamlessly with your existing software, and they’re even more valuable if you know they’ll plug into software you plan on building later on. You’ve probably been working on a roadmap, and now’s the time to get it out.

It’s worth checking to see whether the vendor can support many different kinds of language models. This involves a nuance in what the word “vendor” means, so let’s unpack it a little bit. Some generative AI vendors are offering you a model, so they’re probably not going to support another company’s model.

OpenAI and Anthropic are examples of model vendors, so if you work with them you’re buying their model and will not be able to easily incorporate someone else’s model.

Other vendors, by contrast, are offering you a service, and in many cases that service could theoretically be powered by many different models.

Quiq’s Conversational CX Platform, for example, supports OpenAI’s GPT models, and we have plans to expand the scope of our integrations to encompass even more models in the future.

Another thing you’re going to want to check on is whether the vendor makes it easy to integrate vector databases into your workflow. Vectors are data structures that are remarkably good at capturing subtle relationships in large datasets; they’re becoming an ever-more-important part of machine learning, as evinced by the fact that there are now a multitude of different vector databases on offer.

The chances are pretty good that you’ll eventually want to leverage a vector database to store or search over customer interactions, and you’ll want a vendor that makes this easy.
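
To give a flavor of what this looks like in practice, here's a minimal sketch of embedding-based search over past customer interactions. It assumes the sentence-transformers package; in production you would persist the vectors in a dedicated vector database rather than an in-memory array, and the interactions below are invented.

```python
# A minimal sketch of vector search over past interactions, assuming the
# `sentence-transformers` package. A real deployment would store the vectors
# in a vector database instead of an in-memory array.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

past_interactions = [
    "Customer could not reset their password from the mobile app.",
    "Customer asked about upgrading from the basic to the premium plan.",
    "Customer reported being double-billed after cancelling.",
]
index = model.encode(past_interactions, normalize_embeddings=True)

def most_similar(query: str) -> str:
    """Return the stored interaction most similar to the query."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = index @ q  # cosine similarity, since the vectors are normalized
    return past_interactions[int(np.argmax(scores))]

print(most_similar("I was charged twice this month"))
```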

Finally, see if the vendor has any case studies you can look at. Quiq has published a case study on how our language services were utilized by LOOP, a car insurance company, to make a far superior customer-service chatbot. The result was that customers were able to get much more personalization in their answers and were able to resolve their problems fully half of the time, without help. This led to a corresponding 55% reduction in tickets, and a customer satisfaction rating of 75% (!) when interacting with the Quiq-powered AI assistant.

See if the vendors you’re looking at have anything similar you can examine. This is especially helpful if the case studies are focused on companies that are similar to yours.

4. How Does Prompt Engineering and Fine-Tuning Work for Your Model?

For many tasks, large language models work perfectly fine on their own, without much special effort. But there are two methods you should know about to really get the most out of them: prompt engineering and fine-tuning.

As you know, prompts are the basic method for interacting with language models. You’ll give a model a prompt like “What is generative AI?”, and it’ll generate a response. Well, it turns out that models are really sensitive to the wording and structure of prompts, and prompt engineers are those who explore the best way to craft prompts to get useful output from a model.

It’s worth asking potential vendors about this because they handle prompts differently. Quiq’s AI Studio encourages atomic prompting, where a single prompt has a clear purpose and intended completion, and we support running prompts in parallel and sequentially. You can’t assume everyone will do this, however, so be sure to check.
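
As an illustration of what an "atomic" prompt might look like (one clear job, one expected output shape), here's a small sketch. It assumes the openai package with an API key in the environment; the model name and wording are invented for the example and are not Quiq's actual prompts.

```python
# A minimal sketch of an atomic prompt: one clear purpose, one intended
# completion. Model name and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

ATOMIC_PROMPT = (
    "Classify the customer message below into exactly one category: "
    "BILLING, TECHNICAL, or OTHER. Reply with the category only.\n\n"
    "Message: {message}"
)

def classify(message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": ATOMIC_PROMPT.format(message=message)}],
        temperature=0,  # keep the completion as deterministic as possible
    )
    return response.choices[0].message.content.strip()

print(classify("My invoice shows a charge I don't recognize."))
```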

Then, there’s fine-tuning, which refers to training a model on a bespoke dataset such that its output is heavily geared towards the patterns found in that dataset. It’s becoming more common to fine-tune a foundational model for specific use cases, especially when those use cases involve a lot of specialized vocabulary such as is found in medicine or law.

Setting up a fine-tuning pipeline can be cumbersome or relatively straightforward depending on the vendor, so see what each vendor offers in this regard. It’s also worth asking whether they offer technical support for this aspect of working with the models.
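
To make the fine-tuning piece a little more concrete, here's a sketch of preparing a training file. The JSONL chat format below mirrors what several providers accept (OpenAI among them), but the examples are invented and your vendor's exact schema may differ.

```python
# A minimal sketch of preparing fine-tuning data in a chat-style JSONL format.
# The examples are invented; check your vendor's schema before using it.
import json

examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a support assistant for Acme Insurance."},
            {"role": "user", "content": "How do I add a driver to my policy?"},
            {"role": "assistant", "content": "Go to Policy > Drivers in the portal and choose 'Add driver'."},
        ]
    },
    # ...ideally hundreds or thousands of curated examples...
]

with open("fine_tune_train.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```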

5. Can Your Models Support Reasoning and Acting?

One of the current frontiers in generative AI is building more robust, “agentic” models that can execute strings of tasks on their way to completing a broader goal. This goes by a few different names, but one that has been popping up in the research literature is “ReAct”, which stands for “reasoning and acting”.

You can get ReAct functionality out of existing language models through chain-of-thought prompting, or by using systems like AutoGPT; to help you concretize this a bit, let’s walk through how ReAct works in Quiq.

With Quiq’s AI Studio, a conversational data model is used to classify and store both custom and standard data elements, and these data elements can be set within and across “user turns”. A single user turn is the time between when a user offers an input to the time at which the AI responds and waits for the next user input.

Our AI can set and reason about the state of the data model, applying rules to take the next best action. Common actions include things like fetching data, running another prompt, delivering a message, or offering to escalate to a human.
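
Here's a highly simplified sketch of the reason-and-act idea (not Quiq's actual implementation): after each user turn, the assistant inspects its data model and picks the next best action. The state fields and rules are invented for illustration.

```python
# A toy reason-and-act loop. The state fields, rules, and stand-in values are
# invented; a real system would run prompts and call external systems here.
def next_action(state: dict) -> str:
    """Pick the next best action from the current conversational state."""
    if state.get("needs_human"):
        return "offer_escalation"
    if not state.get("order_id"):
        return "ask_for_order_id"          # act: run a prompt asking for it
    if not state.get("order_status"):
        return "fetch_order_status"        # act: call an external system
    return "deliver_answer"                # we have everything we need

state = {}
while (action := next_action(state)) != "deliver_answer":
    if action == "ask_for_order_id":
        state["order_id"] = "A-1042"       # stand-in for the customer's reply
    elif action == "fetch_order_status":
        state["order_status"] = "shipped"  # stand-in for a real system call
    elif action == "offer_escalation":
        break

print(action, state)  # deliver_answer {'order_id': 'A-1042', 'order_status': 'shipped'}
```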

Though these efforts are still early, this is absolutely the direction the field is taking. If you want to be prepared for what’s coming without the need to overhaul your generative AI stack later on, ask about how different vendors support ReAct.

6. What’s your Pricing Structure Like?

Finally, you’ll need to talk to vendors about how their prices work, including any available details on licensing types, subscriptions, and costs associated with the integration, use, and maintenance of their solution.

To take one example, Quiq’s licensing is based on usage. We establish a usage pool wherein our customers pre-pay Quiq for a 12-month contract; then, as the customer uses our software money is deducted from that pool. We also have an annual AI Assistant Maintenance fee along with a one-time implementation fee.

Vendors can vary considerably in how their prices work, so if you don’t want to overpay then make sure you have a clear understanding of their approach.

Picking the Right Generative AI Vendor

Language models and related technologies are taking the world by storm, transforming many industries, including customer service and contact center management.

Making use of these systems means choosing a good vendor, and that requires you to understand each vendor’s model, how those models integrate with other tools, and what you’re ultimately going to end up paying.

If you want to see how Quiq stacks up and what we can do for you, schedule a demo with us today!

Your Guide to Trust and Transparency in the Age of AI

Over the last few years, AI has really come into its own. ChatGPT and similar large language models have made serious inroads in a wide variety of natural language tasks, generative approaches have been tested in domains like music and protein discovery, researchers have leveraged techniques like chain-of-thought prompting to extend the capabilities of the underlying models, and much else besides.

People working in domains like customer service, content marketing, and software engineering are mostly convinced that this technology will significantly impact their day-to-day lives, but many questions remain.

Given the fact that these models are enormous artifacts whose inner workings are poorly understood, one of the main questions centers around trust and transparency. In this article, we’re going to address these questions head-on. We’ll discuss why transparency is important when advanced algorithms are making ever more impactful decisions, and turn our attention to how you can build a more transparent business.

Why is Transparency Important?

First, let’s take a look at why transparency is important in the first place. The next few sections will focus on the trust issues that stem from AI becoming a ubiquitous technology that few understand at a deep level.

AI is Becoming More Integrated

AI has been advancing steadily for decades, and this has led to a concomitant increase in its use. It’s now commonplace for us to pick entertainment based on algorithmic recommendations, for our loan and job applications to pass through AI filters, and for more and more professionals to turn to ChatGPT before Google when trying to answer a question.

We personally know of multiple software engineers who claim to feel as though they’re at a significant disadvantage if their preferred LLM is offline for even a few hours.

Even if you knew nothing about AI except the fact that it seems to be everywhere now, that should be sufficient incentive to want more context on how it makes decisions and how those decisions are impacting the world.

AI is Poorly Understood

But, it turns out there is another compelling reason to care about transparency in AI: almost no one knows how LLMs and neural networks more broadly can do what they do.

To be sure, very simple techniques like decision trees and linear regression models pose little analytical challenge, and we’ve written a great deal over the past year about how language models are trained. But if you were to ask for a detailed account of how ChatGPT manages to create a poem with a consistent rhyme scheme, we couldn’t tell you.

And – as far as we know – neither could anyone else.

This is troubling; as we noted above, AI has become steadily more integrated into our private and public lives, and that trend will surely only accelerate now that we’ve seen what generative AI can do. But if we don’t have a granular understanding of the inner workings of advanced machine-learning systems, how can we hope to train bias out of them, double-check their decisions, or fine-tune them to behave productively and safely?

These precise concerns are what have given rise to the field of explainable AI. Mathematical techniques like LIME and SHAP can offer some intuition for why a given algorithm generated the output it did, but they accomplish this by crudely approximating the algorithm instead of actually explaining it. Mechanistic interpretability is the only approach we know of that confronts the task directly, but it has only just gotten started.
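
To show roughly how these tools are used, here's a small sketch with SHAP on a toy churn model. It assumes the shap and scikit-learn packages, the data is synthetic, and, as noted above, the output is an approximation of the model's reasoning rather than a true explanation.

```python
# A small sketch of post-hoc explanation with SHAP on a toy churn model.
# Assumes the `shap` and `scikit-learn` packages; the data is synthetic.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))            # e.g. tenure, ticket_count, monthly_spend
y = (X[:, 1] > 0.5).astype(int)          # churn driven mostly by ticket_count

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

explainer = shap.Explainer(model, X)     # dispatches to a tree-based explainer
shap_values = explainer(X[:5])           # per-feature contributions for 5 customers
print(np.abs(shap_values.values).mean(axis=0))  # ticket_count should dominate
```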

This leaves us in the discomfiting situation of relying on technologies that almost no one groks deeply, including the people creating them.

People Have Many Questions About AI

Finally, people have a lot of questions about AI, where it’s heading, and what its ultimate consequences will be. These questions can be laid out on a spectrum, with one end corresponding to relatively prosaic concerns about technological unemployment and deepfakes influencing elections, and the other corresponding to more exotic fears around superintelligent agents actively fighting with human beings for control of the planet’s future.

Obviously, we’re not going to sort all this out today. But as a contact center manager who cares about building trust and transparency, it would behoove you to understand something about these questions and have at least cursory answers prepared for them.

How do I Increase Transparency and Trust when Using AI Systems?

Now that you know why you should take trust and transparency around AI seriously, let’s talk about ways you can foster these traits in your contact center. The following sections will offer advice on crafting policies around AI use, communicating the role AI will play in your contact center, and more.

Get Clear on How You’ll Use AI

The journey to transparency begins with having a clear idea of what you’ll be using AI to accomplish. This will look different for different kinds of organizations – a contact center, for example, will probably want to use generative AI to answer questions and boost the efficiency of its agents, while a hotel might instead attempt to automate the check-in process with an AI assistant.

Each use case has different requirements and different approaches that are better suited for addressing it; crafting an AI strategy in advance will go a long way toward helping you figure out how you should allocate resources and prioritize different tasks.

Once you do that, you should then create documentation and a communication policy to support this effort. The documentation will make sure that current and future agents know how to use the AI platform you decide to work with, and it should address the strengths and weaknesses of AI, as well as information about when its answers should be fact-checked. It should also be kept up-to-date, reflecting any changes you make along the way.

The communication policy will help you know what to say if someone (like a customer) asks you what role AI plays in your organization.

Know Your Data

Another important thing you should keep in mind is what kind of data your model has been trained on, and how it was gathered. Remember that LLMs consume huge amounts of textual data and then learn patterns in that data they can use to create their responses. If that data contains biased information – if it tends to describe women as “nurses” and men as “doctors”, for example – that will likely end up being reflected in its final output. Reinforcement learning from human feedback and other approaches to fine-tuning can go part of the way to addressing this problem, but the best thing to do is ensure that the training data has been curated to reflect reality, not stereotypes.

For similar reasons, it’s worth knowing where the data came from. Many LLMs are trained somewhat indiscriminately, and might have even been shown corpora of pirated books or other material protected by copyright. This has only recently come to the forefront of the discussion, and OpenAI is currently being sued by several different groups for copyright infringement.

If AI ends up being an important part of the way your organization functions, the chances are good that, eventually, someone will want answers about data provenance.

Monitor Your AI Systems Continuously

Even if you take all the precautions described above, however, there is simply no substitute for creating a robust monitoring platform for your AI systems. LLMs are stochastic systems, meaning that it’s usually difficult to know for sure how they’ll respond to a given prompt. And since these models are prone to fabricating information, you’ll need to have humans at various points making sure the output is accurate and helpful.

What’s more, many machine learning algorithms are known to be affected by a phenomenon known as “model degradation”, in which their performance steadily declines over time. The only way you can check to see if this is occurring is to have a process in place to benchmark the quality of your AI’s responses.
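
One simple way to operationalize this is to replay a fixed "golden set" of questions on a schedule and track the pass rate over time. The sketch below uses a placeholder assistant call and grading rule; swap in whatever your stack actually provides.

```python
# A minimal sketch of ongoing quality benchmarking. The golden set, assistant
# call, and grading rule are placeholders for your own systems.
import datetime

GOLDEN_SET = [
    {"question": "How do I reset my password?", "must_mention": "reset link"},
    {"question": "What are your support hours?", "must_mention": "24/7"},
]

def ask_assistant(question: str) -> str:
    # Placeholder: call your LLM-backed assistant here.
    return "You can request a reset link from the login page."

def run_benchmark() -> float:
    passed = sum(
        1 for item in GOLDEN_SET
        if item["must_mention"].lower() in ask_assistant(item["question"]).lower()
    )
    score = passed / len(GOLDEN_SET)
    print(f"{datetime.date.today()}: {score:.0%} of golden answers passed")
    return score  # log this over time and alert if it drifts downward

run_benchmark()
```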

Be Familiar with Standards and Regulations

Finally, it’s always helpful to know a little bit about the various rules and regulations that could impact the way you use AI. These tend to focus on what kind of data you can gather about clients, how you can use it, and in what form you have to disclose these facts.

The following list is not comprehensive, but it does cover some of the more important laws:

  • The General Data Protection Regulation (GDPR) is a comprehensive regulatory framework established by the European Union to dictate data handling practices. It is applicable not only to businesses based in Europe but also to any entity that processes data from EU citizens.
  • The California Consumer Protection Act (CCPA) was introduced by California to enhance individual control over personal data. It mandates clearer data collection practices by companies, requires privacy disclosures, and allows California residents to opt-out of data collection.
  • SOC 2, developed by the American Institute of Certified Public Accountants (AICPA), focuses on the principles of confidentiality, privacy, and security in the handling and processing of consumer data.
  • In the United Kingdom, contact centers must be aware of the Financial Conduct Authority’s new “Consumer Duty” regulations. These regulations emphasize that firms should act with integrity toward customers, avoid causing them foreseeable harm, and support customers in achieving their financial objectives. As the integration of generative AI into this regulatory landscape is still being explored, it’s an area that stakeholders need to keep an eye on.

Fostering Trust in a Changing World of AI

An important part of utilizing AI effectively is making sure you do so in a way that enhances the customer experience and works to build your brand. There’s no point in rolling out a state-of-the-art generative AI system if it undermines the trust your users have in your company, so be sure to track your data, acquaint yourself with the appropriate laws, and communicate clearly.

Another important step you can take is to work with an AI vendor who enjoys a sterling reputation for excellence and propriety. Quiq is just such a vendor, and our Conversational AI platform can bring AI to your contact center in a way that won’t come back to bite you later. Schedule a demo to see what we can do for you, today!

Getting the Most Out of Your Customer Insights with AI

The phrase “Knowledge is power” is usually believed to have originated with 16th- and 17th-century English philosopher Francis Bacon, in his Meditationes Sacræ. Because many people recognize something profoundly right about this sentiment, it has become received wisdom in the centuries since.

Now, data isn’t exactly the same thing as knowledge, but it is tremendously powerful. Armed with enough of the right kind of data, contact center managers can make better decisions about how to deploy resources, resolve customer issues, and run their business.

As is usually the case, the data contact center managers are looking for will be unique to their field. This article will discuss these data, why they matter, and how AI can transform how you gather, analyze, and act on data.

Let’s get going!

What are Customer Insights in Contact Centers?

As a contact center, your primary focus is on helping people work through issues related to a software product or something similar. But you might find yourself wondering who these people are, what parts of the customer experience they’re stumbling over, which issues are being escalated to human agents and which are resolved by bots, etc.

If you knew these things, you would be able to notice patterns and start proactively fixing problems before they even arise. This is what customer insights is all about, and it can allow you to fine-tune your procedures, write clearer technical documentation, figure out the best place to use generative AI in your contact center, and much more.

What are the Major Types of Customer Insights?

Before we turn to a discussion of the specifics of customer insights, we’ll deal with the major kinds of customer insights there are. This will provide you with an overarching framework for thinking about this topic and where different approaches might fit in.

Speech and Text Data

Customer service and customer experience both tend to be language-heavy fields. When an agent works with a customer over the phone or via chat, a lot of natural language is generated, and that language can be analyzed. You might use a technique like sentiment analysis, for example, to gauge how frustrated customers are when they contact an agent. This will allow you to form a fuller picture of the people you’re helping, and discover ways of doing so more effectively.
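
To make this concrete, here’s a minimal sketch of scoring chat messages with an off-the-shelf sentiment model from the open-source Hugging Face transformers library. The sample messages are invented, and the default model the pipeline downloads is just a starting point rather than a recommendation.

```python
# Minimal sketch: scoring the sentiment of customer messages with an
# off-the-shelf Hugging Face model. Messages and model choice are illustrative.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # downloads a default model on first run

messages = [
    "I've been on hold for an hour and I still can't log in.",
    "Thanks so much, that fixed my issue right away!",
]

for text in messages:
    result = sentiment(text)[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.99}
    print(f"{result['label']:>8}  {result['score']:.2f}  {text}")
```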

Data on Customer Satisfaction

Contact centers exist to make customers happy as they try to use a product, and for this reason, it’s common practice to send out surveys when a customer interaction is done. When done correctly, the information contained in these surveys is incredibly valuable, and can let you know whether or not you’re improving over time, whether a specific approach to training or a new large language model is helping or hurting customer satisfaction, and more.

Predictive Analytics

Predictive analytics is a huge field, but it mostly boils down to using machine learning or something similar to predict the future based on what’s happened in the past. You might try to forecast average handle time (AHT) based on the time of the year, on the premise that the time of year an issue arises has something to do with how long it will take to resolve.

To do this effectively you would need a fair amount of AHT data, along with the corresponding data about when the complaints were raised, and then you could fit a linear regression model on these two data streams. If you find that AHT reliably climbs during certain periods, you can have more agents on hand when required.
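
Here’s a minimal sketch of that idea, fitting a simple linear regression of handle time against week of the year with scikit-learn. The numbers are made up, and a real forecast would encode seasonality more carefully than a single linear trend.

```python
# Sketch: a simple linear model of average handle time (AHT) against week of
# the year, using scikit-learn. The data points below are invented.
import numpy as np
from sklearn.linear_model import LinearRegression

week_of_year = np.array([1, 10, 20, 30, 40, 50]).reshape(-1, 1)
avg_handle_minutes = np.array([7.2, 6.8, 8.5, 9.1, 7.4, 10.3])

model = LinearRegression().fit(week_of_year, avg_handle_minutes)
print(model.predict([[52]]))  # rough AHT forecast for week 52
```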

Data on Agent Performance

Like employees in any other kind of business, agents perform at different levels. Junior agents will likely take much longer to work through a thorny customer issue than more senior ones, of course, and the same could be said for agents with an extensive technical background versus those without the knowledge this background confers. Or, the same agent might excel at certain kinds of tasks but perform much worse on others.

Regardless, by gathering these data on how agents are performing you, as the manager, can figure out where weaknesses lie across all your teams. With this information, you’ll be able to strategize about how to address those weaknesses with coaching, additional education, a refresh of the standard operating procedures, or what have you.

Channel Analytics

These days, there are usually multiple ways for a customer to get in touch with your contact center, and they all have different dynamics. Sending a long email isn’t the same thing as talking on the phone, and both are distinct from reaching out on social media or talking to a bot. If you have analytics on specific channels, how customers use them, and what their experience was like, you can make decisions about what channels to prioritize.

What’s more, customers will often have interacted with your brand in the past through one or more of these channels. If you’ve been tracking those interactions, you can incorporate this context to personalize responses when they reach out to resolve an issue in the future, which can help boost customer satisfaction.

What Specific Metrics are Tracked for Customer Insights?

Now that we have a handle on what kind of customer insights there are, let’s talk about specific metrics that come up in contact centers!

First Contact Resolution (FCR)

The first contact resolution is the fraction of issues a contact center is able to resolve on the first try, i.e. the first time the customer reaches out. It’s sometimes also known as Right First Time (RFT), for this reason. Note that first contact resolution can apply to any channel, whereas first call resolution applies only when the customer contacts you over the phone. They have the same acronym but refer to two different metrics.

Average Handle Time (AHT)

The average handle time is one of the more important metrics contact centers track, and it refers to the mean length of time an agent spends on a task. This is not the same thing as how long the agent spends talking to a customer, and instead encompasses any work that goes on afterward as well.

Customer Satisfaction (CSAT)

The customer satisfaction score attempts to gauge how customers feel about your product and service. It’s common practice to collect this information from many customers and then average the scores to get a broader picture of how your customers feel. The CSAT can give you a sense of whether customers are getting happier over time, whether certain products, issues, or agents make them happier than others, etc.

Call Abandon Rate (CAR)

The call abandon rate is the fraction of callers who hang up before their issue has been resolved, often before they ever reach an agent. It can be affected by many things, including how long customers have to wait on hold, whether they like the hold music you play, and similar factors. Be aware that CAR doesn’t account for missed calls, lost calls, or dropped calls.
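
To make these definitions concrete, here’s a minimal sketch of how you might compute them from a list of interaction records. The field names and the tiny dataset are illustrative assumptions, not a prescribed schema.

```python
# Sketch: computing FCR, AHT, CSAT, and CAR from interaction records.
# Field names and values are illustrative only.
records = [
    {"resolved_on_first_contact": True,  "handle_minutes": 6.5,  "csat": 5, "abandoned": False},
    {"resolved_on_first_contact": False, "handle_minutes": 14.0, "csat": 2, "abandoned": False},
    {"resolved_on_first_contact": False, "handle_minutes": 4.0,  "csat": 3, "abandoned": True},
]

n = len(records)
fcr = sum(r["resolved_on_first_contact"] for r in records) / n
aht = sum(r["handle_minutes"] for r in records) / n
csat = sum(r["csat"] for r in records) / n
car = sum(r["abandoned"] for r in records) / n

print(f"FCR {fcr:.0%} | AHT {aht:.1f} min | CSAT {csat:.1f}/5 | CAR {car:.0%}")
```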

***

Data-driven contact centers track a lot of metrics, and these are just a sample. Nevertheless, they should convey a sense of what kinds of numbers a manager might want to examine.

How Can AI Help with Customer Insights?

And now, we come to the “main” event, a discussion of how artificial intelligence can help contact center managers gather and better utilize customer insights.

Natural Language Processing and Sentiment Analysis

An obvious place to begin is with natural language processing (NLP), which refers to a subfield in machine learning that uses various algorithms to parse (or generate) language.

There are many ways in which NLP can aid in finding customer insights. We’ve already mentioned sentiment analysis, which detects the overall emotional tenor of a piece of language. If you track sentiment over time, you’ll be able to see if you’re delivering more or less customer satisfaction.

You could even get slightly more sophisticated and pair sentiment analysis with something like named entity recognition, which extracts information about entities from language. This would allow you to e.g. know that a given customer is upset, and also that the name of a particular product kept coming up.
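
To give a flavor of that pairing, here’s a minimal sketch using the open-source spaCy library for named entity recognition. The message, the model name, and the entity labels it returns are illustrative, and you could combine its output with sentiment scores like those in the earlier sketch.

```python
# Sketch: extracting named entities from a frustrated customer's message.
# Requires the small English model: python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("I'm furious. The Acme X200 router has dropped my connection three times today.")

for ent in doc.ents:
    print(ent.text, ent.label_)  # labels vary by model; product names may appear as ORG or PRODUCT
```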

Classifying Different Kinds of Communication

For various reasons, contact centers keep transcripts and recordings of all the interactions they have with a customer. This means that they have access to a vast amount of textual information, but since it’s unstructured and messy it’s hard to know what to do with it.

Using any of several different ML-based classification techniques, a contact center manager could begin to tame this complexity. Suppose, for example, she wanted to have a high-level overview of why people are reaching out for support. With a good classification pipeline, she could start automating the process of sorting communications into different categories, like “help logging in” or “canceling a subscription”.

With enough of this kind of information, she could start to spot trends and make decisions on that basis.
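
As a rough illustration of what such a classification pipeline might look like, here’s a minimal sketch using scikit-learn. The handful of training examples and the category names are purely illustrative; a production system would need far more labeled data.

```python
# Sketch: a tiny intent classifier for sorting support messages into categories.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "I can't log in to my account",
    "Please reset my password",
    "I want to cancel my subscription",
    "How do I end my membership?",
]
train_labels = [
    "help logging in", "help logging in",
    "canceling a subscription", "canceling a subscription",
]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_texts, train_labels)

print(clf.predict(["my password doesn't work anymore"]))
```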

Statistical Analysis and A/B Testing

Finally, we’ll turn to statistical analysis. Above, we talked a lot about natural language processing and similar endeavors, but more than likely when people say “customer insights” they mean something like “statistical analysis”.

This is a huge field, so we’re going to illustrate its importance with an example focusing on churn. If you have a subscription-based business, you’ll have some customers who eventually leave, and this is known as “churn”. Churn analysis has sprung up to apply data science to understanding these customer decisions, in the hopes that you can resolve any underlying issues and positively impact the bottom line.

What kinds of questions would be addressed by churn analysis? Things like figuring out what kinds of customers are canceling (are they young or old, do they belong to a particular demographic, etc.), understanding their reasons for doing so, using that to predict which similar customers might be in danger of churning soon, and thinking analytically about how to reduce churn.
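
To give a sense of the mechanics, here’s a minimal sketch of a churn model built with a logistic regression. The features and the data are made-up assumptions for illustration.

```python
# Sketch: estimating churn risk from a couple of behavioral signals.
# Feature choices and data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# columns: [support_tickets_last_90_days, months_as_customer]
X = np.array([[5, 3], [0, 24], [7, 2], [1, 36], [4, 6], [0, 18]])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = churned, 0 = stayed

model = LogisticRegression().fit(X, y)
print(model.predict_proba([[6, 4]])[0, 1])  # estimated churn probability for a new customer
```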

And how does AI help? There now exist any number of AI tools that substantially automate the process of gathering and cleaning the relevant data, applying standard tests, and producing simple charts, making your job of extracting customer insights much easier.

What AI Tools Can Be Used for Customer Insights?

By now you’re probably eager to try using AI for customer insights, but before you do that, let’s spend some time talking about what you’d look for in a customer insights tool.

Performant and Reliable

Ideally, you want something that you can depend upon and that won’t drive you crazy with performance issues. A good customer insights tool will have many optimizations under the hood that make crunching numbers easy, and shouldn’t require you to have a computer science degree to set up.

Straightforward Integration Process

Modern contact centers work across a wide variety of channels, including emails, chat, social media, phone calls, and even more. Whatever AI-powered customer insights platform you go with should be able to seamlessly integrate with all of them.

Simple to Use

Finally, your preferred solution should be relatively easy to use. Quiq Insights, for example, makes it a breeze to create customizable funnels, do advanced filtering, see the surrounding context for different conversations, and much more.

Getting the Most Out of AI-Powered Customer Insights

Data is extremely important to the success or failure of modern businesses, and it’s getting more important all the time. Contact centers have long been forward-looking and eager to adopt new technologies, and the same must be true in our brave new data-powered world.

If you’d like a demo of Quiq Insights, reach out to see how we can help you streamline your operation while boosting customer satisfaction!

Request A Demo

Security and Compliance in Next-Gen Contact Centers

Along with almost everyone else, we’ve been singing the praises of large language models like ChatGPT for a while now. We’ve noted how they can be used in retail, how they’re already supercharging contact center agents, and have even put out some content on how researchers are pushing the frontiers of what this technology is capable of.

But none of this is to say that generative AI doesn’t come with serious concerns for security and compliance. In this article, we’ll do a deep dive into these issues. We’ll first provide some context on how advanced AI is being deployed in contact centers, before turning our attention to subjects like data leaks, lack of transparency, and overreliance. Finally, we’ll close with a treatment of the best practices contact center managers can use to alleviate these problems.

What is a “Next-Gen” Contact Center?

First, what are some ways in which a next-generation contact center might actually be using AI? Understanding this will be valuable background for the rest of the discussion about security and compliance, because knowing what generative AI is doing is a crucial first step in protecting ourselves from its potential downsides.

Businesses like contact centers tend to engage in a lot of textual communication, such as when resolving customer issues or responding to inquiries. Due to their proficiency in understanding and generating natural language, LLMs are an obvious tool to reach for when trying to automate or streamline these tasks; for this reason, they have become increasingly popular in enhancing productivity within contact centers.

To give specific examples, there are several key areas where contact center managers can effectively utilize LLMs:

Responding to Customer Queries – High-quality documentation is crucial, yet there will always be customers needing assistance with specific problems. While LLMs like ChatGPT may not have all the answers, they can address many common inquiries, particularly when they’ve been fine-tuned on your company’s documentation (see the sketch after this list for one way of grounding answers in your docs).

Facilitating New Employee Training – Similarly, a language model can significantly streamline the onboarding process for new staff members. As they familiarize themselves with your technology and procedures, they may encounter points of confusion, and AI can provide quick and relevant information when they do.

Condensing Information – While it may be possible to keep abreast of everyone’s activities on a small team, this becomes much more challenging as the team grows. Generative AI can assist by summarizing emails, articles, support tickets, or Slack threads, allowing team members to stay informed without spending every moment of the day reading.

Sorting and Prioritizing Issues – Not all customer inquiries or issues carry the same level of urgency or importance. Efficiently categorizing and prioritizing these for contact center agents is another area where a language model can be highly beneficial. This is especially so when it’s integrated into a broader machine-learning framework, such as one that’s designed to adroitly handle classification tasks.

Language Translation – If your business has a global reach, you’re eventually going to encounter non-English-speaking users. While tools like Google Translate are effective, a well-trained language model such as ChatGPT can often provide superior translation services, enhancing communication with a diverse customer base.
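
To make the first of these areas a little more concrete, here’s a minimal sketch of grounding a model’s answer in a snippet of your documentation via the prompt (rather than through fine-tuning). The client usage shown is one common pattern for the OpenAI Python library; the model name, the documentation snippet, and the question are all assumptions.

```python
# Sketch: answering a customer question while grounding the model in a snippet
# of company documentation. Model name, docs, and question are illustrative.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

docs = "To reset your password, go to Settings > Security and click 'Reset password'."
question = "How do I reset my password?"

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": f"Answer using only this documentation:\n{docs}\n"
                       "If the answer is not covered, say you don't know.",
        },
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)
```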

What are the Security and Compliance Concerns for AI?

The preceding section provided valuable context on the ways generative AI is powering the future of contact centers. With that in mind, let’s turn to a specific treatment of the security and compliance concerns this technology brings with it.

Data Leaks and PII

First, it’s no secret that language models are trained on truly enormous amounts of data. And with that, there’s a growing worry about potentially exposing “Personally Identifiable Information” (PII) to generative AI models. PII encompasses details like your actual name and residential address, as well as sensitive information like health records. It’s important to note that even if these records don’t directly mention your name, they could still be used to deduce your identity.

While our understanding of the exact data seen by language models during their training remains incomplete, it’s reasonable to assume they’ve encountered some sensitive data, considering how much of that kind of data exists on the internet. What’s more, even if a specific piece of PII hasn’t been directly shown to an LLM, there are numerous ways it might still come across such data. Someone might input customer data into an LLM to generate customized content, for instance, not recognizing that the vendor may retain that input and use it in future training runs.

Currently, there’s no reliable method for removing specific data from a trained language model, and no finetuning technique has yet been found that guarantees the model will never reveal that data again.

Over-Reliance on Models

Are you familiar with the term “ultracrepidarianism”? It’s a fancy SAT word for the habit of confidently giving advice or expressing opinions on subjects one has no expertise in.

A similar sort of situation can arise when people rely too much on language models, or use them for tasks that they’re not well-suited for. These models, for example, are well known to hallucinate (i.e. completely invent plausible-sounding information that is false). If you were to ask ChatGPT for a list of 10 scientific publications related to a particular scientific discipline, you could well end up with nine real papers and one that’s fabricated outright.

From a compliance and security perspective, this matters because you should have qualified humans fact-checking a model’s output – especially if it’s technical or scientific.

To concretize this a bit, imagine you’ve finetuned a model on your technical documentation and used it to produce a series of steps that a customer can use to debug your software. This is precisely the sort of thing that should be fact-checked by one of your agents before being sent.

Not Enough Transparency

Large language models are essentially gigantic statistical artifacts that result from feeding an algorithm huge amounts of textual data and having it learn to predict how sentences will end based on the words that came before.

The good news is that this works much better than most of us thought it would. The bad news is that the resulting structure is almost completely inscrutable. While a machine learning engineer might be able to give you a high-level explanation of how the training process works or how a language model generates an output, no one in the world really has a good handle on the details of what these models are doing on the inside. That’s why there’s so much effort being poured into various approaches to interpretability and explainability.

As AI has become more ubiquitous, numerous industries have drawn fire for their reliance on technologies they simply don’t understand. It’s not a good look if a bank loan officer can only shrug and say “The machine told me to” when asked why one loan applicant was approved and another wasn’t.

Depending on exactly how you’re using generative AI, this may not be a huge concern for you. But it’s worth knowing that if you are using language models to make recommendations or as part of a decision process, someone, somewhere may eventually ask you to explain what’s going on. And it’d be wise for you to have an answer ready beforehand.

Compliance Standards Contact Center Managers Should be Familiar With

To wrap this section up, we’ll briefly cover some of the more common compliance standards that might impact how you run your contact center. This material is only a sketch, and should not be taken to be any kind of comprehensive breakdown.

The General Data Protection Regulation (GDPR) – The famous GDPR is a set of regulations put out by the European Union that establishes guidelines around how data must be handled. This applies to any business that interacts with data from a citizen of the EU, not just to companies physically located on the European continent.

The California Consumer Privacy Act (CCPA) – In a bid to give individuals more sovereignty over what happens to their personal data, California created the CCPA. It stipulates that companies have to be clearer about how they gather data, that they have to include privacy disclosures, and that Californians must be given the choice to opt out of data collection.

SOC 2 – SOC 2 is a set of standards created by the American Institute of Certified Public Accountants that stresses confidentiality, privacy, and security with respect to how consumer data is handled and processed.

Consumer Duty – Contact centers operating in the U.K. should know about The Financial Conduct Authority’s new “Consumer Duty” regulations. The regulations’ key themes are that firms must act in good faith when dealing with customers, prevent any foreseeable harm to them, and do whatever they can to further the customer’s pursuit of their own financial goals. Lawmakers are still figuring out how generative AI will fit into this framework, but it’s something affected parties need to monitor.

Best Practices for Security and Compliance when Using AI

Now that we’ve discussed the myriad security and compliance concerns facing contact centers that use generative AI, we’ll close by offering advice on how you can deploy this amazing technology without running afoul of rules and regulations.

Have Consistent Policies Around Using AI

First, you should have a clear and robust framework that addresses who can use generative AI, under what circumstances, and for what purposes. This way, your agents know the rules, and your contact center managers know what they need to monitor and look out for.

As part of crafting this framework, you must carefully study the rules and regulations that apply to you, and you have to ensure that this is reflected in your procedures.

Train Your Employees to Use AI Responsibly

Generative AI might seem like magic, but it’s not. It doesn’t function on its own, it has to be steered by a human being. But since it’s so new, you can’t treat it like something everyone will already know how to use, like a keyboard or Microsoft Word. Your employees should understand the policy that you’ve created around AI’s use, should understand which situations require human fact-checking, and should be aware of the basic failure modes, such as hallucination.

Be Sure to Encrypt Your Data

If you’re worried about PII or data leakages, two straightforward safeguards are encrypting your data at rest and in transit, and anonymizing or redacting PII before it ever reaches a generative AI tool. If you anonymize data correctly, there’s little concern that a model will accidentally disclose something it’s not supposed to down the line.
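
Here’s a minimal sketch of the redaction half of that advice, crudely masking emails and phone numbers before text is sent anywhere. Real deployments would use far more robust PII detection, and encryption itself is handled by your infrastructure rather than application code like this.

```python
# Sketch: crude redaction of obvious PII (emails, phone numbers) before text
# is passed to a generative AI tool. Patterns are simplified for illustration.
import re

def redact(text: str) -> str:
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    text = re.sub(r"\(?\+?\d[\d\s().-]{7,}\d\b", "[PHONE]", text)
    return text

print(redact("Reach me at jane.doe@example.com or (406) 555-0123."))
```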

Roll Your Own Model (Or Use a Vendor You Trust)

The best way to ensure that you have total control over the model pipeline – including the data it’s trained on and how it’s finetuned – is to simply build your own. That being said, many teams will simply not be able to afford to hire the kinds of engineers who are equal to this task. In that case, you should use a model built by a third party with a sterling reputation and many examples of prior success, like the Quiq platform.

Engage in Regular Auditing

As we mentioned earlier, AI isn’t magic – it can sometimes perform in unexpected ways, and its performance can also simply degrade over time. You need to establish a practice of regularly auditing any models you have in production to make sure they’re still behaving appropriately. If they’re not, you may need to do another training run, examine the data they’re being fed, or try to finetune them.
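
One lightweight way to start is a regression-style audit that replays a fixed set of prompts through your assistant and flags answers that drift. In this sketch, ask_assistant is a hypothetical placeholder standing in for however you call your model, and the golden set entries are invented.

```python
# Sketch: a tiny audit harness. `ask_assistant` is a placeholder for your own
# function that sends a prompt to the model and returns its answer as a string.
golden_set = [
    {"prompt": "How do I reset my password?", "must_contain": "Settings"},
    {"prompt": "What is your refund window?", "must_contain": "30 days"},
]

def audit(ask_assistant):
    failures = []
    for case in golden_set:
        answer = ask_assistant(case["prompt"])
        if case["must_contain"].lower() not in answer.lower():
            failures.append((case["prompt"], answer))
    return failures  # review these by hand, or alert when the list grows
```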

Futureproofing Your Contact Center Security

The next generation of contact centers is almost certainly going to be one that makes heavy use of generative AI. There are just too many advantages, from lower average handling time to reduced burnout and turnover, to forego it.

But doing this correctly is a major task, and if you want to skip the engineering hassle and headache, give the Quiq conversational AI platform a try! We have the expertise required to help you integrate a robust, powerful generative AI tool into your contact center, without the need to write a hundred thousand lines of code.

Request A Demo

LLM-Powered AI Assistants for Hotels – Ultimate Guide

New technologies have always been disruptive, supercharging the firms that embrace them and requiring the others to adapt or be left behind.

With the rise of new approaches to AI, such as large language models, we can see this dynamic playing out again. One place where AI assistants could have a major impact is in the hotel industry.

In this piece, we’ll explore the various ways AI assistants can be used in hotels, and what that means for the hoteliers that keep these establishments running.

Let’s get going!

What is an AI Assistant?

The term “AI assistant” refers to any use of an algorithmic or machine-learning system to automate a part of your workflow. A relatively simple example would be the autocomplete found in almost all text-editing software today, while a much more complex example might involve stringing together multiple chain-of-thought prompts into an agent capable of managing your calendar.

There are a few major types of AI assistants. Near and dear to our hearts, of course, are chatbots that function in places like contact centers. These can be agent-facing or customer-facing, and can help with answering common questions, helping draft replies to customer inquiries, and automatically translating between many different natural languages.

Chatbots (and large language models more generally) can also be augmented to produce speech, giving rise to so-called “voice assistants”. These tend to work like other kinds of chatbots but have the added ability to actually vocalize their text, creating a much more authentic customer experience.

In a famous 2018 demo, Google Duplex was able to complete a phone call to a local hair salon to make a reservation. One remarkable thing about the AI assistant was how human it sounded – its speech even included “uh”s and “mm-hmm”s that made it almost indistinguishable from an actual person, at least over the phone and for short interactions.

Then, there are 3D avatars. These digital entities are crafted to look as human as possible, and are perfect for basic presentations, websites, games, and similar applications. Graphics technology has gotten astonishingly good over the past few decades and, in conjunction with the emergence of technologies like virtual reality and the metaverse, means that 3D avatars could play a major role in the contact centers of the future.

One thing to think about if you’re considering using AI assistants in a hotel or hospitality service is how specialized you want them to be. Although there is a significant effort underway to build general-purpose assistants that are able to do most of what a human assistant does, it remains true that your agents will do better if they’re fine-tuned on a particular domain. For the time being, you may want to focus on building an AI assistant that is targeted at providing excellent email replies, for example, or answering detailed questions about your product or service.

That being said, we recommend you check the Quiq blog often for updates on AI assistants; when there’s a breakthrough, we’ll deliver actionable news as soon as possible.

How Will AI Assistants Change Hotels?

Though the audience we speak to is largely comprised of people working in or managing contact centers, the truth is that there are many overlaps with those in the hospitality space. Since these are both customer-service and customer-oriented domains, insights around AI assistants almost always transfer over.

With that in mind, let’s dive in now to talk about how AI is poised to transform the way hotels function!

AI for Hotel Operations

Like most jobs, operating a hotel involves many tasks that require innovative thinking and improvisation, and many others that are repetitive, rote, and quotidian. Booking a guest, checking them in, making small changes to their itinerary, and so forth are in the latter category, and are precisely the sorts of things that AI assistants can help with.

In an earlier example, we saw that chatbots were already able to handle appointment booking five years ago, so it requires no great leap in imagination to see how slightly more powerful systems would be able to do this on a grander scale. If it soon becomes possible to offload much of the day-to-day of getting guests into their rooms to the machines, that will free up a great deal of human time and attention to go towards more valuable work.

It’s possible, of course, that this will lead to a dramatic reduction in the workforce needed to keep hotels running, but so far, the evidence points the other way; when large language models have been used in contact centers, the result has been more productivity (especially among junior agents), less burnout, and reduced turnover. We can’t say definitively that this will apply in hotel operations, but we also don’t see any reason to think that it wouldn’t.

AI for Managing Hotel Revenues

Another place that AI assistants can change hotels is in forecasting and boosting revenues. We think this will function mainly by making it possible to do far more fine-grained analyses of consumption patterns, inventory needs, etc.

Everyone knows that there are particular times of the year when vacation bookings surge, and others in which there are a relatively small number of bookings. But with the power of big data and sophisticated AI assistants, analysts will be able to do a much better job of predicting surges and declines. This means prices for rooms or other accommodations will be more fluid and dynamic, changing in near real-time in response to changes in demand and the personal preferences of guests. The ultimate effect will be an increase in revenue for hotels.

AI in Marketing and Customer Service

A similar line of argument holds for using AI assistants in marketing and customer service. Just as both hotels and guests are better served when we can build models that allow for predicting future bookings, everyone is better served when it becomes possible to create more bespoke, targeted marketing.

By utilizing data sources like past vacations, Google searches, and browser history, AI assistants will be able to meet potential clients where they’re at, offering them packages tailored to exactly what they want and need. This will not only mean increased revenue for the hotel, but far more satisfaction for the customers (who, after all, might have gotten an offer that they themselves didn’t realize they were looking for.)

If we were trying to find a common theme between this section and the last one, we might settle on “flexibility”. AI assistants will make it possible to flexibly adjust prices (raising them during peak demand and lowering them when bookings level off), flexibly tailor advertising to serve different kinds of customers, and flexibly respond to complaints, changes, etc.

Smart Buildings in Hotels

One particularly exciting area of research in AI centers around so-called “smart buildings”. By now, most of us have seen relatively “smart” thermostats that are able to learn your daily patterns and do things like turn the temperature up when you leave to save on the cooling bill while turning it down to your preferred setting as you’re heading home from work.

These are certainly worthwhile, but they barely even scratch the surface of what will be possible in the future. Imagine a room where every device is part of an internet-of-things, all wired up over a network to communicate with each other and gather data about how to serve your needs.

Your refrigerator would know when you’re running low on a few essentials and automatically place an order, a smart stove might be able to take verbal commands (“cook this chicken to 180 degrees, then power down and wait”) to make sure dinner is ready on time, a smoothie machine might be able to take in data about your blood glucose levels and make you a pre-workout drink specifically tailored to your requirements on that day, and so on.

Pretty much all of this would carry over to the hotel industry as well. As is usually the case there are real privacy concerns here, but assuming those challenges can be met, hotel guests may one day enjoy a level of service that is simply not possible with a staff comprised only of human beings.

Virtual Tours and Guest Experience

Earlier, we mentioned virtual reality in the context of 3D avatars that will enhance customer experience, but it can also be used to provide virtual tours. We’re already seeing applications of this technology in places like real estate, but there’s no reason at all that it couldn’t also be used to entice potential guests to visit different vacation spots.

When combined with flexible and intelligent AI assistants, this too could boost hotel revenues and better meet customer needs.

Using AI Assistants in Hotels

As part of the service industry, hoteliers work constantly to best meet their customers’ needs and, for this reason, they would do well to keep an eye on emerging technologies. Though many advances will have little to do with their core mission, others, like those related to AI assistants, will absolutely help them forecast future demands, provide personalized service, and automate routine parts of their daily jobs.

If all of this sounds fascinating to you, consider checking out the Quiq conversational CX platform. Our sophisticated offering utilizes large language models to help with tasks like question answering, following up with customers, and perfecting your marketing.

Schedule a demo with us to see how we can bring your hotel into the future!

Request A Demo

Explainability vs. Interpretability in Machine Learning Models

In recent months, we’ve produced a tremendous amount of content about generative AI – from high-level primers on what large language models are and how they work, to discussions of how they’re transforming contact centers, to deep dives on the cutting edge of generative technologies.

This amounts to thousands of words, much of it describing how models like ChatGPT were trained by having them iteratively predict how a passage of text will continue given the words that came before.

But for all that, there’s still a tremendous amount of uncertainty about the inner workings of advanced machine-learning systems. Even the people who build them generally don’t understand how particular functions emerge or what a particular circuit is doing.

It would be more accurate to describe these systems as having been grown, like an inconceivably complex garden. And just as you might have questions if your tomatoes started spitting out math proofs, it’s natural to wonder why generative models are behaving in the way that they are.

These questions are only going to become more important as these technologies are further integrated into contact centers, schools, law firms, medical clinics, and the economy in general.

If we use machine learning to decide who gets a loan, to predict who is likely to have committed a crime, or to hold open-ended conversations with our customers, it really matters that we know how all this works.

The two big approaches to this task are explainability and interpretability.

Comparing Explainability and Interpretability

Under normal conditions, this section would come at the very end of the article, after we’d gone through definitions of both these terms and illustrated how they work with copious examples.

We’re electing to include it at the beginning for a reason; while the machine-learning community does broadly agree on what these two terms mean, there’s a lot of confusion about which bucket different techniques go into.

Below, for example, we’ll discuss Shapley Additive Explanations (SHAP). Some sources file this as an approach to explainability, while others consider it a way of making a model more interpretable.

A major contributing factor to this overlap is the simple fact that the two concepts are very closely related. Once you can explain a fact you can probably interpret it, and a big part of interpretation is explanation.

Below, we’ve tried our best to make sense of these important research areas, and have tried to lay everything out in a way that will help you understand what’s going on.

With that caveat out of the way, let’s define explainability and interpretability.

Broadly, explainability means analyzing the behavior of a model to understand why a given course of action was taken. If you want to know why data point “a” was sorted into one category while data point “b” was sorted into another, you’d probably turn to one of the explainability techniques described below.

Interpretability means making features of a model, such as its weights or coefficients, comprehensible to humans. Linear regression models, for example, calculate sums of weighted input features, and interpretability would help you understand what exactly that means.

Here’s an analogy that might help: you probably know at least a little about how a train works. Understanding that it needs fuel to move, has to have tracks constructed a certain way to avoid crashing, and needs brakes in order to stop would all contribute to the interpretability of the train system.

But knowing which kind of fuel it requires and for what reason, why the tracks must be made out of a certain kind of material, and how exactly pulling a brake switch actually gets the train to stop are all facets of the explainability of the train system.

What is Explainability in Machine Learning?

In machine learning, explainability refers to any set of techniques that allow you to reason about the nuts and bolts of the underlying model. If you can at least vaguely follow how data are processed and how they impact the final model output, the system is explainable to that degree.

Before we turn to the techniques utilized in machine learning explainability, let’s talk at a philosophical level about the different types of explanations you might be looking for.

Different Types of Explanations

There are many approaches you might take to explain an opaque machine-learning model. Here are a few:

  • Explanations by text: One of the simplest ways of explaining a model is by reasoning about it with natural language. The better sorts of natural-language explanations will, of course, draw on some of the explainability techniques described below. You can also try to talk about a system logically, e.g. by describing it as calculating logical AND, OR, and NOT operations.
  • Explanations by visualization: For many kinds of models, visualization will help tremendously in increasing explainability. Support vector machines, for example, use a decision boundary to sort data points and this boundary can sometimes be visualized. For extremely complex datasets this may not be appropriate, but it’s usually worth at least trying.
  • Local explanations: There are whole classes of explanation techniques, like LIME, that operate by illustrating how a black-box model works in some particular region. In other words, rather than trying to parse the whole structure of a neural network, we zoom in on one part of it and say “This is what it’s doing right here.”

Approaches to Explainability in Machine Learning

Now that we’ve discussed the varieties of explanation, let’s get into the nitty-gritty of how explainability in machine learning works. There are a number of different explainability techniques, but we’re going to focus on two of the biggest: SHAP and LIME.

Shapley Additive Explanations (SHAP) are derived from game theory and are a commonly-used way of making models more explainable. The basic idea is that you’re trying to parcel out “credit” for the model’s outputs among its input features. In game theory, potential players can choose to enter a game, or not, and this is the first idea that is ported over to SHAP.

SHAP “values” are generally calculated by looking at how a model’s output changes based on different combinations of features. If a model has, say, 10 input features, you could look at the output of four of them, then see how that changes when you add a fifth.

By running this procedure for many different feature sets, you can understand how any given feature contributes to the model’s overall predictions.
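
Here’s a minimal sketch of what computing SHAP values can look like in practice, using the open-source shap library with a random forest trained on a standard housing dataset. The dataset and model are stand-ins for whatever you are actually trying to explain.

```python
# Sketch: SHAP values for a tree-based model. Dataset and model are illustrative.
import shap
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor

data = fetch_california_housing(as_frame=True)
X, y = data.data.iloc[:2000], data.target.iloc[:2000]  # subsample to keep the sketch quick
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:200])  # per-feature "credit" for each prediction
shap.summary_plot(shap_values, X.iloc[:200])       # which features matter most, and in which direction
```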

Local Interpretable Model-Agnostic Explanations (LIME) is based on the idea that our best bet in understanding a complex model is to first narrow our focus to one part of it, then study a simpler model that captures its local behavior.

Let’s work through an example. Imagine that you’ve taken an enormous amount of housing data and fit a complex random forest model that’s able to predict the price of a house based on features like how old it is, how close it is to neighbors, etc.

LIME lets you figure out what the random forest is doing in a particular region, so you’d start by selecting one row of the data frame, which would contain both the input features for a house and its price. Then, you would “perturb” this sample, which means that for each of its features, you’d sample from a distribution around that data point to create a new, perturbed dataset.

You would feed this perturbed dataset into your random forest model and get a new set of perturbed predictions. On this complete dataset, you’d then train a simple model, like a linear regression.

Linear regression is almost never as flexible and powerful as a random forest, but it does have one advantage: it comes with a bunch of coefficients that are fairly easy to interpret.

This LIME approach won’t tell you what the model is doing everywhere, but it will give you an idea of how the model is behaving in one particular place. If you do a few LIME runs, you can form a picture of how the model is functioning overall.
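
And here’s a comparable sketch of a single LIME run using the lime package. Again, the dataset and model are illustrative stand-ins for your own housing data.

```python
# Sketch: explaining one prediction of a random forest locally with LIME.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor

data = fetch_california_housing(as_frame=True)
X, y = data.data.iloc[:2000], data.target.iloc[:2000]
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    training_data=X.to_numpy(),
    feature_names=list(X.columns),
    mode="regression",
)

explanation = explainer.explain_instance(
    X.iloc[0].to_numpy(),  # the single row we want the local explanation for
    model.predict,         # the black-box model's prediction function
    num_features=5,
)
print(explanation.as_list())  # top local feature effects, e.g. [("MedInc > 5.00", 0.8), ...]
```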

What is Interpretability in Machine Learning?

In machine learning, interpretability refers to a set of approaches that shed light on a model’s internal workings.

SHAP, LIME, and other explainability techniques can also be used for interpretability work. Rather than go over territory we’ve already covered, we’re going to spend this section focusing on an exciting new field of interpretability, called “mechanistic” interpretability.

Mechanistic Interpretability: A New Frontier

Mechanistic interpretability is defined as “the study of reverse-engineering neural networks”. Rather than examining subsets of input features to see how they impact a model’s output (as we do with SHAP) or training a more interpretable local model (as we do with LIME), mechanistic interpretability involves going directly for the goal of understanding what a trained neural network is really, truly doing.

It’s a very young field that so far has mostly tackled smaller networks like GPT-2 – no one has yet figured out how GPT-4 functions – but already its results are remarkable. It could eventually allow us to discover the actual algorithms being learned by large language models, which would give us a way to check them for bias and deceit, understand what they’re really capable of, and figure out how to make them even better.

Why are Interpretability and Explainability Important?

Interpretability and explainability are both very important areas of ongoing research. Not so long ago (less than twenty years), neural networks were interesting systems that weren’t able to do a whole lot.

Today, they are feeding us recommendations for news and entertainment, driving cars, trading stocks, generating reams of content, and making decisions that affect people’s lives, forever.

This technology is having a huge and growing impact, and it’s no longer enough for us to have a fuzzy, high-level idea of what these systems are doing.

We now know that they work, and with techniques like SHAP, LIME, mechanistic interpretability, etc., we can start to figure out why they work.

Final Thoughts on Interpretability vs. Explainability

In contact centers and elsewhere, large language models are changing the game. But though their power is evident, they remain a predominantly empirical triumph.

The inner workings of large language models remain a mystery, one that has only recently begun to be unraveled through techniques like the ones we’ve discussed in this article.

Though it’s probably asking too much to expect contact center managers to become experts in machine learning interpretability or explainability, hopefully, this information will help you make good decisions about how you want to utilize generative AI.

And speaking of good decisions, if you do decide to move forward with deploying a large language model in your contact center, consider doing it through one of the most trusted names in conversational AI. In recent weeks, the Quiq platform has added several tools aimed at making your agents more efficient and your customers happier.

Set up a demo today to see how we can help you!

Request A Demo

AI Translation for Global Brands

AI is already having a dramatic impact on various kinds of work, in places like contact centers, marketing agencies, research outfits, etc.

In this piece, we’re going to take a closer look at one specific arena where people are trying things (and always learning), and that’s AI translation. We’re going to look at how AI systems can help in translation tasks, and how that is helping companies build their global brands.

What is AI Translation?

AI translation, or “machine” translation as it’s also known, is more or less what it sounds like: the use of algorithms, computers, or software to translate from one natural language to another.

The chances are pretty good you’ve used AI translation in one form or another already. If you’ve ever relied on Google Translate to double-check your conjugation of a Spanish verb or to read the lyrics of the latest K-pop sensation in English, you know what it can accomplish.

But the mechanics and history of this technology are equally fascinating, and we’ll cover those now.

How Does AI Translation Work?

There are a few different approaches to AI translation, which broadly fall into three categories.

The first is known as rule-based machine translation, and it works by drawing on the linguistic structure that scaffolds all language. If you have any bad memories of trying to memorize Latin inflections or French grammatical rules, you’ll be more than familiar with these structures, but you may not know that they can also be used to build powerful, flexible AI translation systems.

Three ingredients are required to make rule-based machine translation function: a set of rules describing how the input language works, a set of rules describing how the output language works, and dictionaries translating words between the input and output languages.

It’s probably not hard to puzzle out the major difficulty with rule-based machine translation: it demands a great deal of human time and attention and is therefore very difficult to scale.

The second approach is known as statistical machine translation. Unlike rule-based machine translation, statistical machine translation tends to focus on higher-level groupings, known as “phrases”. Statistical models of the relevant languages are built through an analysis of two kinds of data: bilingual corpora containing both the input and output language, and monolingual corpora in the output language. Once these models have been developed, they can be used to automatically translate between the language pairs.

Finally, there’s neural machine translation. This is the most recently developed AI translation method, and it relies on deep neural networks trained to predict sequences of tokens. Neural machine translation rapidly supplanted statistical methods owing to its remarkable performance, but there can be edge cases where statistical translations do better. As is usually the case, of course, there are also hybrid systems that use both neural and statistical machine translation.
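
For a sense of how accessible neural machine translation has become, here’s a minimal sketch using an off-the-shelf open-source model from the Hugging Face hub. The model name and the sentence are illustrative.

```python
# Sketch: English-to-French neural machine translation with a pretrained model.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")
result = translator("Your refund has been processed and should arrive within five business days.")
print(result[0]["translation_text"])
```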

Building a Global Brand with AI

There are many ways in which the emerging technology of artificial intelligence can be used to build a global brand. In this section, we’ll walk through a few examples.

How can AI Translation Be Used to Build a Global Brand?

The first way AI translation can be used for building a global brand is that it helps with internal communications. If you have an international workforce – programmers in Eastern Europe, for example, or support staff in the Philippines – keeping them all on the same page is even more important than usual. Coordinating your internal teams is hard enough when they’re all in the same building, to say nothing of when they’re spread out across the globe, over multiple time zones and multiple cultures.

The last thing you need is mistakes occurring because of a bad translation from English into their native languages, so getting high-quality AI translations is crucial for the internal cohesion required for building your global brand.

Of course, more or less the exact same case can be made for external communication. It would be awfully difficult to build a global brand that doesn’t routinely communicate with the public, through advertisements, various kinds of content or media, etc. And if the brand is global, most, or perhaps all, of this content will need to be translated somewhere along the way.

There are human beings who can handle this work, but with the rising sophistication of AI translators, it’s becoming possible to automate substantial parts of it. Besides the obvious cost savings, there are other benefits to AI translation. For one thing, AI is increasingly able to translate into what are called “low-resource” languages, i.e. languages for which there isn’t much training material and only small populations of native speakers. If AI is eventually able to translate for these populations, it could open up whole new markets that weren’t reachable before.

For another, it may soon be possible to do dynamic, on-the-fly translations of brand material. We’re not aware of any system that can 1) identify a person’s native language from snippets of their speech or other identifying features, and 2) instantly produce a translation of e.g. a billboard or poster in real-time, but it’s not at all beyond our imagination. If no one has built something that can do this yet, they surely will before too long.

Prompt Engineering for Building a Global Brand

One thing we haven’t touched on much so far is how generative AI will impact marketing. Generative AI is already being used to draft web copy, mock up new designs for buildings, products, and clothing, translate between languages, and much else besides.

This leads naturally to a discussion of prompt engineering, which refers to the careful sculpting of the linguistic instructions that are given to large generative AI models. These models are enormously complex artifacts whose inner workings are largely mysterious and whose outputs are hard to predict in advance. Skilled prompt engineers have put in the time required to develop a sense for how to phrase instructions just so, and they’re able to get remarkably high-quality output with much less effort than the rest of us.
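
To make this a bit more tangible, here’s a minimal sketch of the kind of structured prompt template a prompt engineer might iterate on. The brand details, constraints, and placeholder values are invented for illustration, and the resulting string would be sent to whichever generative model you’re using.

```python
# Sketch: a structured prompt template. All details below are placeholders.
PROMPT_TEMPLATE = """You are a marketing copywriter for {brand}.
Write a {length}-word product description for {product} aimed at {audience}.
Constraints:
- Keep the tone {tone}.
- Mention the {key_feature} exactly once.
- Do not make claims that are not in the notes below.
Notes: {notes}"""

prompt = PROMPT_TEMPLATE.format(
    brand="Acme Travel", length=60, product="the new loyalty program",
    audience="frequent business travelers", tone="warm but concise",
    key_feature="free late checkout", notes="Members earn 2x points on weekday stays.",
)
print(prompt)  # send this to your generative model of choice
```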

If you’re thinking about using generative AI in building your global brand you’ll almost certainly need to be thinking about prompt engineering, so be sure to check out Quiq’s blog for more in-depth discussions of this and related subjects.

How can AI Translation Benefit the Economy?

Throughout this piece, we’ve discussed various means by which AI translation can help build global brands. But you might still want to see some hard evidence of the economic benefits of machine translation.

Economists Erik Brynjolfsson, Xiang Hui, and Meng Liu conducted a study of how AI translation has actually impacted trade on an e-commerce platform. They found that “… the introduction of a machine translation system…had a significant effect on international trade on this platform, increasing export quantity by 17.5%.”

More specifically, they found evidence of “…a substantial reduction in buyers’ translation-related search costs due to the introduction of this system.” On the whole, their efforts support the conclusion that “… language barriers significantly hinder trade and that AI has already substantially improved overall economic efficiency.”

Though this is only one particular study on one particular mechanism, it’s not hard to see how it can apply more broadly. If more people can read your marketing material, it stands to reason that more people will buy your product, for example.

AI Translation and Global Brands

Global brands face many unique challenges: complex supply chains, distributed workforces, and the bewildering diversity of human language.

This last challenge is something that AI language translation can help with, as it’s already proving useful in boosting trade and exchange by reducing the friction involved in translation.

If you want to build a global brand and are keen to use conversational AI to do it, check out the Quiq platform. Our services include a variety of agent-facing and customer-facing tools, and make it easy to automate question-answering tasks, follow-ups with clients, and many other kinds of work involved in running a contact center. Schedule a demo with us today to see how we can help you build your brand!

Request A Demo

What is Automated Customer Service? – Ultimate Guide

From graph databases to automated machine learning pipelines and beyond, a lot of attention gets paid to new technologies. But the truth is, none of it matters if users aren’t able to handle the more mundane tasks of managing permissions, resolving mysterious errors, and getting the tools installed and working on their native systems.

This is where customer service comes in. Though they don’t often get the credit they deserve, customer service agents are the ones who are responsible for showing up every day to help countless others actually use the latest and greatest technology.

Like every job since the beginning of jobs, there are large components of customer service that have been automated, are currently being automated, or will be automated at some point soon.

That’s our focus for today. We want to explore customer service as a discipline, and then talk about some of the ways generative AI can automate substantial parts of the standard workflow.

What is Customer Service?

To begin with, we’ll try to clarify what customer service is and why it matters. This will inform our later discussion of automated customer service, and help us think through the value that can be added through automation.

Customer service is more or less what it sounds like: serving your customers – your users, or clients – as they go about the process of utilizing your product. A software company might employ customer service agents to help onboard new users and troubleshoot failures in their product, while a services company might use them for canceling or rescheduling appointments.

Over the past few decades, customer service has evolved alongside many other industries. As mobile phones have become firmly ensconced in everyone’s life, for example, it has become more common for businesses to supplement the traditional avenues of phone calls and emails by adding text messaging and chatbot customer support to their customer service toolkit. This is part of what is known as an omni-channel strategy, in which more effort is made to meet customers where they’re at rather than expecting them to conform to the communication pathways a business already has in place.

Naturally, many of these kinds of interactions can be automated, especially with the rise of tools like large language models. We’ll have more to say about that shortly.

Why is Customer Service Important?

It may be tempting for those writing the code to think that customer service is a “nice to have”, but that’s not the case at all. However good a product’s documentation is, there will simply always be weird behaviors and edge cases in which a skilled customer service agent (perhaps helped along with AI) needs to step in and aid a user in getting everything running properly.

But there are other advantages as well. Besides simply getting a product to function, customer service agents contribute to a company’s overall brand, and the general emotional response users have to the company and its offerings.

High-quality customer service agents can do a lot to contribute to the impression that a company is considerate, and genuinely cares about its users.

What Are Examples of Good Customer Service?

There are many ways in which customer service agents can do this. For example, it helps a lot when customer service agents try to transmit a kind of warmth over the line.

Because so many people spend their days interacting with others through screens, it can be easy to forget what that’s like, as tone of voice and facial expression are hard to digitally convey. But when customer service agents greet a person enthusiastically and go beyond “How may I help you” by exchanging some opening pleasantries, that person feels more valued and more at ease. This matters a lot when they’ve been banging their head against a software problem for half a day.

Customer service agents have also adapted to the digital age by utilizing emojis, exclamation points, and various other kinds of internet-speak. We live in a more casual age, and under most circumstances, it’s appropriate to drop the stiffness and formalities when helping someone with a product issue.

That said, you should also remember that you’re talking to customers, and you should be polite. Use words like “please” when asking for something, and don’t forget to add a “thank you.” It can be difficult to remember this when you’re dealing with a customer who is simply being rude, especially when you’ve had several such customers in a row. Nevertheless, it’s part of the job.

Finally, always remember that a customer gets in touch with you when they’re having a problem, and above all else, your job is to get them what they need. From the perspective of contact center managers, this means you need periodic testing or retraining to make sure your agents know the product thoroughly.

It’s reasonable to expect that agents will sometimes need to look up the answer to a question, but if they’re doing that constantly it will not only increase the time it takes to resolve an issue, it will also contribute to customer frustration and a general sense that you don’t have things well in hand.

Automation in Customer Service

Now that we’ve covered what customer service is, why it matters, and how to do it well, we have the context we need to turn to the topic of automated customer service.

For all intents and purposes, “automation” simply refers to outsourcing all or some of a task to a machine. In industries like manufacturing and agriculture, automation has been steadily increasing for hundreds of years.

Until fairly recently, however, the technology didn’t exist to automate substantial portions of customer service work. With the rise of machine learning, and especially large language models like ChatGPT, that’s begun to change dramatically.

Let’s dive into this in more detail.

Examples of Automated Customer Service

There are many ways in which customer service is being automated. Here are a few examples:

  • Automated question answering – Many questions are fairly prosaic (“How do I reset my password?”), and can effectively be outsourced to a properly finetuned large language model. When such a model is trained on a company’s documentation, it’s often powerful enough to handle these kinds of low-level requests.
  • Summarization – There have long been models that could do an adequate job of summarization, but large language models have kicked this functionality into high gear. With an endless stream of new emails, Slack messages, etc. constantly being generated, having a model that can summarize their contents and keep agents in the loop will do a lot to boost productivity.
  • Classifying incoming messages – Classification is another thing that models have been able to do for a while, and it’s also something that helps a lot. Having an agent manually sort through different messages to figure out how to prioritize them and where they should go is no longer a good use of time, as algorithms are now good enough to do a major chunk of this kind of work (see the short sketch after this list).
  • Translation – One of the first useful things anyone attempted to do with machine learning was translating between natural languages (e.g. from Russian into English). Once squarely in the purview of human beings, this is now a task that machines can do almost as well, at least for customer service work.
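To make the classification example a bit more concrete, here is a minimal sketch of how incoming messages might be routed to queues with a language model. The `call_llm` helper and the queue names are assumptions made for illustration, not a description of any particular product.

```python
# Minimal sketch: routing incoming messages with an LLM-based classifier.
# `call_llm` is a hypothetical helper that sends a prompt to whatever
# language model you have access to and returns its text response.

QUEUES = ["billing", "password_reset", "shipping", "other"]

def route_message(message: str, call_llm) -> str:
    prompt = (
        "Classify the following customer message into exactly one of these "
        f"queues: {', '.join(QUEUES)}.\n\n"
        f"Message: {message}\n\n"
        "Answer with only the queue name."
    )
    label = call_llm(prompt).strip().lower()
    return label if label in QUEUES else "other"  # fall back if the model drifts

# Example, with a stand-in model that always answers "password_reset":
print(route_message("How do I reset my password?", lambda prompt: "password_reset"))
```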

Should We Automate Customer Service?

All this having been said, you may still have questions about the wisdom of automating customer service work. Sure, no one wants to spend hours every day looking up words in Mandarin to answer a question or prioritizing tickets by hand, but aren’t we in danger of losing something important as customer service agents? Might we not automate ourselves out of a job?

No one can predict the future, of course, but the early evidence is quite to the contrary. Economists have conducted studies of how contact centers have changed with the introduction of generative AI, and their findings are very encouraging.

Because these models are (usually) finetuned on conversations from more experienced agents, they’re able to capture a lot of how those agents handle issues. Typical response patterns, politeness, etc. become “baked into” the models. Junior agents using these models are able to climb the learning curve more quickly and, feeling less strained in their new roles, are less likely to quit. This, in turn, puts less of a burden on managers and makes the organization overall more stable. Everyone ends up happier and more productive.

So far, it’s looking like AI-based automation in contact centers will be like automation almost everywhere else: machines will gradually remove the need for human attention in tedious or otherwise low-value tasks, freeing people up to focus on the places where they have more of an advantage.

If agents don’t have to sort tickets anymore or resolve routine issues, they can spend more time working on the really thorny problems, and do so with more care.

Moving Quiq-ly into the Future!

Where the rubber of technology meets the road of real-world use cases, customer service agents are extremely important. They not only make sure customers can use a company’s tools, but they also contribute to the company brand in their tone, mannerisms, and helpfulness.

Like most other professions, customer service agents are being impacted by automation. So far, this impact has been overwhelmingly positive and is likely to prove a competitive advantage in the decades ahead.

If you’re intrigued by this possibility, Quiq has created a suite of industry-leading conversational AI tools, both for customer-facing applications and agent-facing applications. Check them out or schedule a demo with us to see what all the fuss is about.

Request A Demo

Top 5 Benefits of AI for Hospitality

As an industry, hospitality is aimed squarely at meeting customer needs. Whether it’s a businesswoman staying in 5-star resorts or a mother of three getting a quiet weekend to herself, the job of the hospitality professionals they interact with is to anticipate what they want and make sure they get it.

As technologies like artificial intelligence become more powerful and pervasive, customer expectations will change. When that businesswoman books a hotel room, she’ll expect there to be a capable virtual assistant talking to her about a vacation spot; when that mother navigates the process of buying a ticket, she’ll expect to be interacting with a very high-quality chatbot, perhaps one that’s indistinguishable from an actual human being.

All of this means that the hospitality industry needs to be thinking about how it will be impacted by AI. It needs to consider what the benefits of AI for hospitality are, what limitations are faced by AI, and how it can be utilized effectively. That’s what we’re here to do today, so let’s get started.

Why is AI Important for Hospitality?

AI is important in hospitality for the same reason it’s important everywhere else: it’s poised to become a transformative technology, and just about every industry – especially those that involve a lot of time interacting through text – could be up-ended by it.

The businesses that emerge the strongest from this ongoing revolution will be those that successfully anticipate how large language models and similar tools change workflows, company setups, cost and pricing structures, etc.

With that in mind, let’s work through some of the ways in which AI is (or will be) used in hospitality.

How is AI Used in Hospitality?

There are many ways in which AI is used in hospitality, and in the sections that follow we’ll walk through a number of the most important ones.

Chatbots and Customer Service

Perhaps the most obvious place to begin is with chatbots and customer service more broadly. Customer-facing chatbots were an early application of natural language processing, and have gotten much better in the decades since. With ChatGPT and similar LLMs, they’re currently in the process of taking another major leap forward.

Now that we have models that can be fine-tuned to answer questions, summarize texts, and carry out open-ended interactions with human users, we expect to see them becoming more and more common in hospitality. Someday soon, it may be the case that most of the steps involved in booking a room or changing a flight happen entirely without human assistance of any kind.

This is especially compelling because we’ve gotten so good at making chatbots that are very deferential and polite (though, as we make clear in the final section on “limitations”, this is not always the case).

Virtual Assistants

AI virtual assistants are a generalization of the idea behind chatbots. Whereas chatbots can be trained to offload many parts of hospitality work, powerful virtual assistants will take this dynamic to the next level. Once we have better agents – systems able to take strings of actions in pursuit of a goal – many more parts of hospitality work will be outsourced to the machines.

What might this look like?

Well, we’ve already seen some tools that can do relatively simple tasks like “book a flight to Indonesia”, but they’re still not all that flexible. Imagine an AI virtual assistant able to handle all the subtleties and details involved in a task like “book a flight for ten executives to Indonesia, and book lodging near the conference center and near the water, too, then make reservations for a meal each night of the week, taking into account the following dietary restrictions.”

Work on building generative agents like this is still in its infancy, but it is nevertheless an active area of research. It’s hard to predict when we’ll have agents that can be trusted to do advanced work with minimal oversight, but once we do, it’ll really begin to change how the hospitality industry runs.

Sentiment Analysis

Sentiment analysis refers to an automated, algorithmic approach to classifying the overall vibe of a piece of text. “The food was great” is obviously positive sentiment, “the food was awful” is obviously negative sentiment, and then there are many subtler cases involving e.g. sarcasm.

The hospitality industry badly needs tools able to perform sentiment analysis at scale. Such tools help businesses understand what clients like and dislike about particular services or locations, and can even help in predicting future demand. If, for example, there’s a lot of positive sentiment around a concert being given in Indonesia, that indicates there will probably be a spike in bookings there.
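To make this concrete, here is a minimal sketch of sentiment analysis using the open-source Hugging Face transformers library. A hospitality platform would likely use something more tailored, so treat this as an illustration of the idea rather than a production recipe.

```python
# Minimal sketch: classifying the sentiment of guest reviews.
# Requires the `transformers` package (and a backend such as PyTorch).
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a small default model

reviews = [
    "The food was great",
    "The food was awful",
    "Oh sure, waiting an hour for check-in was exactly what I wanted",  # sarcasm is harder
]

for review, result in zip(reviews, classifier(reviews)):
    print(f"{result['label']:>8}  ({result['score']:.2f})  {review}")
```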

Boosting Revenues for Hospitality

People have long been interested in using AI to make money, whether that be from trading strategies generated by ChatGPT or from using AI to create ultra-targeted marketing campaigns.

All of this presents an enormous opportunity for the hospitality industry. Through a combination of predictive modeling, customer segmentation, sentiment analysis, and related techniques, it’ll become easier to forecast changes in demand, create much more responsive pricing models, and intelligently track inventory.

What this will ultimately mean is better revenues for hotels, event centers, and similar venues. You’ll be able to cross-sell or upsell based on a given client’s unique purchase history and interests, you’ll have fewer rooms go unoccupied, and you’ll be less likely to have clients who are dissatisfied by the fact that you ran out of something.

Sustainability and Waste Management

An underappreciated way in which AI will benefit hospitality is by making sustainability easier. There are a few ways this could manifest.

One is by increasing energy efficiency. Most of you will already be familiar with existing smart room technology, like thermostats that learn when you’re leaving and adjust themselves accordingly, lowering your power bill.

But there’s room for this to become much more far-ranging and powerful. If AI is put in charge of managing the HVAC system for an entire building, for example, it could lead to savings on the order of millions of dollars, while simultaneously making customers more comfortable during their stay.

And the same holds true for waste management. An AI system smart enough to detect when a trash can is full means that your cleaning staff won’t have to spend nearly as much time patrolling. They’ll be able to wait until they get a notification to handle the problem, gaining back many hours in their day that can be put towards higher-value work.

What are the Limitations of AI in Hospitality?

None of this is to suggest that there won’t also be drawbacks to using AI in hospitality. To prepare you for these challenges, we’ll spend the next few sections discussing how AI can fail, allowing you to be proactive in mitigating these downsides.

Impersonality in Customer Service

By properly fine-tuning a large language model, it’s possible to get text output that is remarkably polite and conversational. Still, over repeated or sustained interactions, the model can come to feel vaguely sterile.

Though it might in principle be hard to tell when you’re interacting with AI vs. a human, the fact remains that models don’t actually have any empathy. They may say “I’m sorry that you had to deal with that…”, but they won’t truly know what frustration is like, and over time, a human is likely to begin picking up on that.

We can’t say for certain when models will be capable of expressing sympathy in a fully convincing way, but for the time being, you should probably incorporate systems that can flag conversations that are going off the rails so that a human customer service professional can intervene.

Toxic Output, Bias, and Abuse

As with the previous issue, a lot of work has gone into finetuning models so that they don’t produce toxic, biased, or abusive language. Still, not all the kinks have been ironed out, and if a question is phrased in just the right way, it’s often possible to get past these safeguards. That means your models might unpredictably become insulting or snarky, which is a problem for a hospitality company.

As we’ve argued elsewhere, careful monitoring is one of the prices that has to be paid when managing an AI assistant. Since this technology is so new, we have at best a vague idea of what kinds of prompts lead to what kinds of responses. So, you’ll simply have to diligently keep your eyes peeled for examples of model responses that are inappropriate, having a human take over if and when things go poorly.

(Or, you can work with Quiq – our guardrails ensure none of this is a problem for enterprise hospitality businesses).

AI in Hospitality

New technologies have always changed the way industries operate, and that’s true for hospitality as well. From virtual assistants to chatbots to ultra-efficient waste management, AI offers many benefits (and many challenges) for hospitality.

If you want to explore using these tools in your hospitality enterprise but don’t know the first thing about hiring AI engineers, check out the Quiq conversational CX platform. We’ve built a proprietary large language model offering that makes it easy to incorporate chatbots and other technologies, without having to worry about what’s going on under the hood.

Schedule a demo with us today to find out how you can catch the AI wave!

Request A Demo

4 Benefits of Using AI Assistants in the Retail Industry

Artificial intelligence (AI) has been making remarkable strides in recent months. Owing to the release of ChatGPT in November of 2022, a huge amount of attention has been on large language models, but the truth is, there have been similar breakthroughs in computer vision, reinforcement learning, robotics, and many other fields.

In this piece, we’re going to focus on how these advances might contribute specifically to the retail sector.

We’ll start with a broader overview of AI, then turn to how AI-based tools are making targeted advertisements, personalized offers, hiring decisions, and other parts of retail substantially easier.

What are AI assistants in Retail?

Artificial intelligence is famously difficult to define precisely, but for our purposes, you can think of it as any attempt to get intelligent behavior from a machine. This could involve something relatively straightforward, like building a linear regression model to predict future demand for a product line, or something far more complex, like creating neural networks able to quickly spit out multiple ideas for a logo design based on a verbal description.

AI assistants are a little different and specifically require building agents capable of carrying out sequences of actions in the service of a goal. The field of AI is some 70 years old now and has been making remarkable strides over the past decade, but building robust agents remains a major challenge.

It’s anyone’s guess as to when we’ll have the kinds of agents that could successfully execute an order like “run this e-commerce store for me”, but there’s nevertheless been enough work for us to make a few comments about the state of the art.

What are the Ways of Building AI Assistants?

On its own, a model like ChatGPT can (sometimes) generate working code and (often) generate correct API calls. But as things stand, a human being still needs to utilize this code for it to do anything useful.

Efforts are underway to remedy this situation by making models able to use external tools. Auto-GPT, for example, combines an LLM and a separate bot that repeatedly queries it. Together, they can take high-level tasks and break them down into smaller, achievable steps, checking off each as it works toward achieving the overall objective.
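The details differ between projects, but the core loop behind tools like this is surprisingly simple. Below is a heavily simplified sketch; `call_llm` is a hypothetical helper standing in for whichever model you query, and real systems add memory, tool use, and safety checks on top of this.

```python
# Heavily simplified sketch of an "agent" loop in the spirit of Auto-GPT:
# ask the model to plan, then ask it to carry out one step at a time.

def run_agent(goal: str, call_llm, max_steps: int = 5) -> list[str]:
    plan = call_llm(f"Break this goal into short, numbered steps: {goal}")
    steps = [line for line in plan.splitlines() if line.strip()]
    results = []
    for step in steps[:max_steps]:
        outcome = call_llm(
            f"Goal: {goal}\nCompleted so far: {results}\n"
            f"Carry out this step and report the result: {step}"
        )
        results.append(outcome)
    return results
```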

AssistGPT and SuperAGI are similar endeavors, but they’re better able to handle “multimodal” tasks, i.e. those that also involve manipulating images or sounds rather than just text.

The above is a fairly cursory examination of building AI agents, but it’s not difficult to see how the retail establishments of the future might use them. You can imagine agents that track inventory and re-order crucial items when they get low, that keep an eye on sales figures and create reports based on their findings (perhaps even using voice synthesis to actually deliver those reports), or that create customized marketing campaigns, generating their own text, images, and A/B tests to find the highest-performing strategies.

What are the Advantages of Using AI in Retail Business?

Now that we’ve talked a little bit about how AI and AI assistants can be used in retail, let’s spend some time talking about why you might want to do this in the first place. What, in other words, are the big advantages of using AI in retail?

1. Personalized Marketing with AI

People can’t buy your products if they don’t know what you’re selling, which is why marketing is such a big part of retail. For its part, marketing has long been a future-oriented business, interested in leveraging the latest research from psychology or economics on how people make buying decisions.

A kind of holy grail for marketing is making ultra-precise, bespoke marketing efforts that target specific individuals. The kind of messaging that would speak to a childless lawyer in a big city won’t resonate the same way with a suburban mother of five, and vice versa.

The problem, of course, is that there’s just no good way at present to do this at scale. Even if you had everything you needed to craft the ideal copy for both the lawyer and the mother, it’s exceedingly difficult to have human beings do this work and make sure it ends up in front of the appropriate audience.

AI could, in theory, remedy this situation. With the rise of social media, it has become possible to gather stupendous amounts of information about people, grouping them into precise and fine-grained market segments. And with platforms like Facebook Ads, you can create really targeted advertisements for each of these segments.

AI can help with the initial analysis of this data, i.e. looking at how people in different occupations or parts of the country differ in their buying patterns. But with advanced prompt engineering and better LLMs, it could also help in actually writing the copy that induces people to buy your products or services.

And it doesn’t require much imagination to see how AI assistants could take over quite a lot of this process. Much of the required information is already available, meaning that an agent would “just” need to be able to build simple models of different customer segments, and then put together a prompt that generates text that speaks to each segment.
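As a rough sketch of what that might look like, the snippet below builds a segment-specific prompt and hands it to a hypothetical `call_llm` helper. The segments and product details are invented for illustration; they aren’t drawn from any real campaign.

```python
# Sketch: generating ad copy tailored to a customer segment.
# `call_llm` is a stand-in for whatever language model you use.

segments = {
    "urban_professional": "a busy lawyer in a big city who values convenience",
    "suburban_parent": "a suburban mother of five who values durability and price",
}

def draft_copy(segment_key: str, product: str, call_llm) -> str:
    persona = segments[segment_key]
    prompt = (
        f"Write two sentences of ad copy for {product}, "
        f"aimed at {persona}. Keep the tone friendly and specific."
    )
    return call_llm(prompt)
```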

2. Personalized Offerings with AI

A related but distinct possibility is using AI assistants to create bespoke offerings. As with messaging, people will respond to different package deals; if you know how to put one together for each potential customer, there could be billions in profits waiting for you. Companies like Starbucks have been moving towards personalized offerings for a while, but AI will make it much easier for other retailers to jump on this trend.

We’ll illustrate how this might work with a fictional example. Let’s say you’re running a toy company, and you’re looking at data for Angela and Bob. Angela is an occasional customer, mostly making purchases around the holidays. When she created her account she indicated that she doesn’t have children, so you figure she’s probably buying toys for a niece or nephew. She’s not a great target for a personalized offer, unless perhaps it’s a generic 35% discount around Christmas time.

Bob, on the other hand, buys new train sets from you on an almost weekly basis. He more than likely has a son or daughter who’s fascinated by toy trains, and you have recommendation algorithms trained on many purchases indicating that parents who buy the trains also tend to buy certain Lego sets. So, next time Bob visits your site, your AI assistant can offer him a personalized discount on Lego sets.

Maybe he bites this time, maybe he doesn’t, but you can see how being able to dynamically create offerings like this would help you move inventory and boost individual customer satisfaction a great deal. AI can’t yet totally replace humans in this kind of process, but it can go a long way toward reducing the friction involved.

3. Smarter Pricing

The scenario we just walked through is part of a broader phenomenon of smart pricing. In economics, there’s a concept known as “price discrimination”, which involves charging a person roughly what they’re willing to pay for an item. There may be people who are interested in buying your book for $20, for example, but others who are only willing to pay $15 for it. If you had a way of changing the price to match what a potential buyer was willing to pay for it, you could make a lot more money (assuming that you’re always charging a price that at least covers printing and shipping costs).

The issue, of course, is that it’s very difficult to know what people will pay for something, but with more data and smarter AI tools, we can get closer. This will have the effect of simultaneously increasing your market (by bringing in people who weren’t quite willing to make a purchase at a higher price) and increasing your earnings (by facilitating many sales that otherwise wouldn’t have taken place).
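Here’s a tiny worked example of the underlying arithmetic. Given an estimate of how likely each candidate price is to convert (all numbers invented for illustration), you pick the price that maximizes expected profit above your costs.

```python
# Toy example: choosing the price that maximizes expected profit,
# given estimated purchase probabilities (all numbers are made up).

unit_cost = 8.00  # printing and shipping for the book
candidates = {20.00: 0.30, 18.00: 0.40, 15.00: 0.55, 12.00: 0.70}

def expected_profit(price: float, prob: float) -> float:
    return prob * (price - unit_cost)

best = max(candidates, key=lambda p: expected_profit(p, candidates[p]))
for price, prob in candidates.items():
    print(f"${price:.2f}: expected profit ${expected_profit(price, prob):.2f}")
print(f"Best price under these assumptions: ${best:.2f}")
```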

More or less the same abilities will also help with inventory more generally. If you sell clothing, you probably have a clearance rack for items that are out of season, but how much should you discount these items? Some people might be fine paying almost full price, while others might need to see a “60% off” sticker before moving forward. With AI, it’ll soon be possible to adjust such discounts in real time to make sure you’re always doing brisk business.

4. AI and Smart Hiring

One place where AI has been making real inroads is in hiring. It seems like we can’t listen to any major podcast today without hearing about some hiring company that makes extensive use of natural language processing and similar tools to find the best employees for a given position.

Our prediction is that this trend will only continue. As AI becomes increasingly capable, eventually it will be better than any but the best hiring managers at picking out talent; retail establishments, therefore, will rely on it more and more to put together their sales force, design and engineering teams, etc.

Is it Worth Using AI in Retail?

Throughout this piece, we’ve sung the praises of AI in retail. But the truth is, there are still questions about how much sense it makes to leverage AI in retail at the moment, given its expense and risks.

In this section, we’ll briefly go over some of the challenges of using AI in retail so you can have a fuller picture of how its advantages compare to its disadvantages, and thereby make a better decision for your situation.

The one that’s on everyone’s minds these days is the tendency of even powerful systems like ChatGPT to hallucinate incorrect information or to generate output that is biased or harmful. Finetuning and techniques like retrieval augmented generation can mitigate this somewhat, but you’ll still have to spend a lot of time monitoring and tinkering with the models to make sure that you don’t end up with a PR disaster on your hands.
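As a rough sketch of what retrieval augmented generation involves: retrieve the passages most relevant to the question, then instruct the model to answer only from those passages. Both helpers here (`embed` and `call_llm`) are assumptions standing in for whatever embedding model and LLM you use.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def answer_with_rag(question, documents, embed, call_llm, k=3):
    """Retrieve the k most relevant documents, then ask the model to
    answer using only that material (a rough hallucination guard)."""
    q_vec = embed(question)
    ranked = sorted(documents, key=lambda d: cosine(embed(d), q_vec), reverse=True)
    context = "\n\n".join(ranked[:k])
    prompt = (
        "Answer the question using ONLY the context below. "
        "If the context doesn't contain the answer, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)
```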

Another major factor is the expense involved. Training a model on your own can cost millions of dollars, but even just hiring a team to manage an open-source model will likely set you back a fair bit (engineers aren’t cheap).

By far the safest and easiest way of testing out AI for retail is by using a white glove solution like the Quiq conversational CX platform. You can test out our customer-facing and agent-facing AI tools while leaving the technical details to us, and at far less expense than would be involved in hiring engineering talent.

Set up a demo with us to see what we can do for you.

Request A Demo

AI is Changing Retail

From computer-generated imagery to futuristic AI-based marketing plans, retail won’t be the same with the advent of AI. This will be especially true once we have robust AI assistants able to answer customer questions, help them find clothes that fit, and offer precision discounts and offerings tailored to each individual shopper.

If you don’t want to get left behind, you’ll need to begin exploring AI as soon as possible, and we can help you do that. Check out our product or find a time to talk with us, today!

AI in Retail: 5 Ways Retailers Are Using AI Assistants

Businesses have always turned to the latest and greatest technology to better serve their customers, and retail is no different. From early credit card payment systems to the latest in online advertising, retailers know that they need to take advantage of new tools to boost their profits and keep shoppers happy.

These days, the thing that’s on everyone’s mind is artificial intelligence (AI). AI has had many, many definitions over the years, but in this article, we’ll mainly focus on the machine-learning and deep-learning systems that have captured the popular imagination. These include large language models, recommendation engines, basic AI assistants, etc.

In the world of AI in retail, you can broadly think of these systems as falling into one of two categories: “the ones that customers see”, and “the ones that customers don’t see.” In the former category, you’ll find innovations such as customer-facing chatbots and algorithms that offer hyper-personalized options based on shopping history. In the latter, you’ll find precision fraud detection systems and finely-tuned inventory management platforms, among other things.

We’ll cover each of these categories, in order. By the end of this piece, you’ll have a much better understanding of the ways retailers are using AI assistants and will be better able to think about how you want to use this technology in your retail establishment.

Let’s get going!

Using AI Assistants for Better Customer Experience

First, let’s start with AI that interacts directly with customers. The major ways in which AI is transforming the customer experience are through extreme levels of personalization, more “humanized” algorithms, and shopping assistants.

Personalization in Shopping and Recommendations

One of the most obvious ways of improving the customer experience is by tailoring that experience to each individual shopper. There’s just one problem: this is really difficult to do.

On the one hand, most of your customers will be new to you, people about whom you have very little information and whose preferences you have no good way of discovering. On the other, there are the basic limitations of your inventory. If you’re a brick-and-mortar establishment you have a set number of items you can display, and it’s going to be pretty difficult for you to choose them in a way that speaks to each new customer on a personal level.

For a number of reasons, AI has been changing this state of affairs for a while now, and holds the potential to change it much more in the years ahead.

A key part of this trend is recommendation engines, which have gotten very good over the past decade or so. If you’ve ever been surprised by YouTube’s ability to auto-generate a playlist that you really enjoyed, you’ve seen this in action.

Recommendation engines can only work well when there is a great deal of customer data for them to draw on. As more and more of our interactions, shopping, and general existence have begun to take place online, there has arisen a vast treasure trove of data to be analyzed. In some situations, recommendation engines can utilize decades of shopping experience, public comments, reviews, etc. in making their recommendations, which means a far more personalized shopping experience and an overall better customer experience.
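For a feel of how the simplest version works, here is a sketch of item-to-item collaborative filtering on a tiny made-up purchase matrix: items bought by similar sets of customers get recommended together. Real engines operate on vastly more data and more sophisticated models.

```python
import numpy as np

# Rows = customers, columns = items; 1 means "bought it" (toy data).
items = ["train set", "lego set", "doll", "puzzle"]
purchases = np.array([
    [1, 1, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 1],
    [1, 0, 0, 0],
])

def similar_items(item: str, top_n: int = 2) -> list[str]:
    idx = items.index(item)
    target = purchases[:, idx]
    scores = []
    for j, other in enumerate(items):
        if j == idx:
            continue
        col = purchases[:, j]
        denom = np.linalg.norm(target) * np.linalg.norm(col)
        scores.append((other, float(target @ col) / denom if denom else 0.0))
    return [name for name, _ in sorted(scores, key=lambda s: s[1], reverse=True)[:top_n]]

print(similar_items("train set"))  # the Lego set comes out on top in this toy data
```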

What’s more, advances in AR and VR are making it possible to personalize even more of these experiences. There are platforms now that allow you to upload images of your home to see how different pieces of furniture will look, or to see how clothes fit you without the need to try them on first.

We expect that this will continue, especially when combined with smarter printing technology. Imagine getting a 3D-printed sofa made especially to fit in that tricky corner of your living room, or flipping through a physical magazine with advertisements that are tailored to each individual reader.

Humanizing the Machines

Next, we’ll talk about various techniques for making the algorithms and AI assistants we interact with more convincingly human. Admittedly, this isn’t terribly important at the present moment. But as more of our shopping and online activity comes to be mediated by AI, it’ll be important for them to sound empathic, supportive, and attuned to our emotions.

The two big ways this is being pursued at the moment are chatbots and voice AI.

Chatbots, of course, will probably be familiar to you already. ChatGPT is inarguably the most famous example, but you’ve no doubt interacted with many (much simpler) chatbots via online retailers or contact centers.

In the ancient past, chatbots were largely “rule-based”, meaning they were far less flexible and far less capable of passing as human. With the ascendancy of the deep learning paradigm, however, we now have chatbots that are able to tutor you in chemistry, translate between dozens of languages, help you write code, answer questions about company policies, and even file simple tickets for contact center agents.

Naturally, this same flexibility also means that retail managers must tread lightly. Chatbots are known to confidently hallucinate incorrect information, to become abusive, or to “help” people with malicious projects, like building weapons or computer viruses.

Even leaving aside the technical challenges of implementing a chatbot, you have to carefully monitor your chatbots to make sure they’re performing as expected.

Then, there’s voice-based AI. Computers have been synthesizing speech for many years, but it hasn’t been until recently that they’ve become really good at it. Though you can usually tell that a computer is speaking if you listen very carefully, it’s getting harder and harder all the time. We predict that, in the not-too-distant future, you’ll simply have no idea whether it’s a human or a machine on the other end of the line when you call to return an item or get store hours.

But computers have also gotten much better at the other side of voice-based AI, speech recognition. Software like otter.ai, for example, is astonishingly accurate when generating transcriptions of podcast episodes or conversations, even when unusual words are used.

Taken together, advances in both speech synthesis and speech recognition paint a very compelling picture of how the future of retail might unfold. You can imagine walking into a Barnes & Noble in the year 2035 and having a direct conversation with a smart speaker or AI assistant. You’ll tell it what books you’ve enjoyed in the past, it’ll query its recommendation system to find other books you might like, and it’ll speak to you in a voice that sounds just like a human’s.

You’ll be able to ask detailed questions about the different books’ content, and it’ll be able to provide summaries, discuss details with you, and engage in an unscripted, open-ended conversation. It’ll also learn more about you over time, so that eventually it’ll be as though you have a friend that you go shopping with whenever you buy new books, clothing, etc.

Shopping Assistants and AI Agents

So far, we’ve confined our conversation specifically to technologies like large language models and conversational AI. But one thing we haven’t spent much time on yet is the possibility of creating agents in the future.

An agent is a goal-directed entity, one able to take an instruction like “Make me a reservation at an Italian restaurant” and decompose the goal into discrete steps, performing each one until the task is completed.

With clever enough prompt engineering, you can sort of get agent-y behavior out of ChatGPT, but the truth is, the work of building advanced AI agents has only just begun. Tools like AutoGPT and LangChain have made a lot of progress, but we’re still a ways away from having agents able to reliably do complex tasks.

It’s not hard to see how different retail will be when that day arrives, however. Eventually, you may be outsourcing a lot of your shopping to AI assistants, who will make sure the office has all the pens it needs, you’ve got new science fiction to read, and you’re wearing the latest fashion. Your assistant might generate new patterns for t-shirts and have them custom-printed; if LLMs get good enough, they’ll be able to generate whole books and movies tuned to your specific tastes.

Using AI Assistants to Run A Safer, Leaner Operation

Now that we’ve covered the ways AI assistants will impact the things customers can see, let’s talk about how they’ll change the things customers don’t see.

There are lots of moving parts in running a retail establishment. If you’ve got ~1,000 items on display in the front, there are probably several thousand more items in a warehouse somewhere, and all of that has to be tracked. What’s more, there’s a constant process of replenishing your supply, staying on top of new trends, etc.

All of this will also be transformed by AI, and in the following sections, we’ll talk about a few ways in which this could happen.

Fraud Detection and Prevention

Fraud, unfortunately, is a huge part of modern life. There’s an entire industry of people buying and selling personal information for nefarious purposes, and it’s the responsibility of anyone handling that information to put safeguards in place.

That includes a large number of retail establishments, which might keep data related to a customer’s purchases, their preferences, and (of course) their actual account and credit card numbers.

This isn’t the place to get into a protracted discussion of cybersecurity, but much of fraud detection relies on AI, so it’s fair game. Fraud detection techniques range from the fairly basic (flagging transactions that are much larger than usual or happen in an unusual geographic area) to the incredibly complex (training powerful reinforcement learning agents that constantly monitor network traffic).
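The “fairly basic” end of that spectrum can be captured in a few lines. Here is a sketch that flags transactions far outside a customer’s usual spending pattern using a simple z-score; real systems layer many more signals on top of this.

```python
import statistics

def flag_unusual(history: list[float], new_amount: float, threshold: float = 3.0) -> bool:
    """Flag a transaction if it's more than `threshold` standard deviations
    above the customer's historical mean (a very crude anomaly check)."""
    if len(history) < 2:
        return False  # not enough history to judge
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return new_amount != mean
    return (new_amount - mean) / stdev > threshold

history = [42.10, 38.50, 55.00, 47.25, 40.80]
print(flag_unusual(history, 49.99))   # False: in line with past purchases
print(flag_unusual(history, 900.00))  # True: far above the usual range
```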

As AI becomes more advanced, so will fraud detection. It’ll become progressively more difficult for criminals to steal data, and the world will be safer as a result. Of course, some of these techniques are also ones that can be used by the bad guys to defraud people, but that’s why so much effort is going into putting guardrails on new AI models.

Streamlining Inventory

Inventory management is an obvious place for optimization. Correctly forecasting what you’ll need and thereby reducing waste can have a huge impact on your bottom line, which is why there are complex branches of mathematics aimed at modeling these domains.

And – as you may have guessed – AI can help. With machine learning, extremely accurate forecasts can be made of future inventory requirements, and once better AI agents have been built, they may even be able to automate the process of ordering replacement materials.
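As a minimal illustration of the forecasting piece, here is a sketch that fits a straight-line trend to past weekly sales with scikit-learn and projects the next few weeks. Real demand forecasting accounts for seasonality, promotions, and much more; the numbers here are invented.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Weekly units sold for one product (toy data).
weeks = np.arange(1, 9).reshape(-1, 1)
units = np.array([120, 125, 131, 128, 140, 146, 150, 158])

model = LinearRegression().fit(weeks, units)

future_weeks = np.arange(9, 12).reshape(-1, 1)
for week, forecast in zip(future_weeks.ravel(), model.predict(future_weeks)):
    print(f"Week {week}: expect ~{forecast:.0f} units")
```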

Forward-looking retail managers will need to keep an eye on this space to fully utilize its potential.

AI Assistants and the Future of Retail

AI is changing a great many things. It’s already making contact center agents more effective and is being utilized by a wide variety of professionals, ranging from copywriters to computer programmers.

But the space is daunting, and there’s so much to learn about implementing, monitoring, and finetuning AI assistants that it’s hard to know where to start. One way to easily dip your toe in these deep waters is with the Quiq Conversational CX platform.

Our technology makes it easy to create customer-facing AI bots and similar tooling, which will allow you to see how AI can figure into your retail enterprise without hiring engineers and worrying about the technical details.

Schedule a demo with us today to get started!

Request A Demo

How Scoped AI Ensures Safety in Customer Service

AI chat applications powered by Large Language Models (LLMs) have helped us reimagine what is possible in a new generation of AI computing.

Along with this excitement, there is also a fair share of concern and fear about the potential risks. Recent media coverage, such as this article from the New York Times, highlights how the safety measures of ChatGPT can be circumvented to produce harmful information.

To better understand the security risks of LLMs in customer service, it’s important we add some context and differentiate between “Broad AI” versus “Scoped AI”. In this article, we’ll discuss some of the tactics used to safely deploy scoped AI assistants in a customer service context.

Broad AI vs. Scoped AI: Understanding the Distinction

Scoped AI is designed to excel in a well-defined domain, guided and limited by a software layer that maintains its behavior within pre-set boundaries. This is in contrast to broad AI, which is designed to perform a wide range of tasks across virtually all domains.

Scoped AI and Broad AI answer questions fundamentally differently. With Scoped AI, the LLM is not used to determine the answer; it is used to compose a response from the resources given to it. Conversely, answers to questions in Broad AI are determined by the LLM itself and cannot be verified.

Broad AI simply takes a user message and generates a response from the LLM; there is no control layer outside of the LLM itself. Scoped AI is a software layer that applies many steps to control the interaction and enforce safety measures applicable to your company.

In the following sections, we’ll dig into a more detailed explanation of the steps.

Ensuring the Safety of Scoped AI in Customer Service

1. Inbound Message Filtering

Your AI should perform a semantic similarity search to recognize in-scope vs out-of-scope messages from a customer. Malicious characters and known prompt injections should be identified and rejected with a static response. Inbound message filtering is an important step in limiting the surface area to the messages expected from your customers.
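In practice, “semantic similarity search” here means embedding the incoming message and comparing it against embeddings of examples you consider in scope. The sketch below assumes a hypothetical `embed` function that maps text to a vector; any embedding model could fill that role, and the threshold is illustrative.

```python
import numpy as np

IN_SCOPE_EXAMPLES = [
    "Where is my order?",
    "How do I change my reservation?",
    "Can you help me reset my password?",
]

def is_in_scope(message: str, embed, threshold: float = 0.75) -> bool:
    """Accept the message only if it's semantically close to at least one
    example of the traffic this assistant is meant to handle."""
    m = embed(message)
    for example in IN_SCOPE_EXAMPLES:
        e = embed(example)
        similarity = float(np.dot(m, e) / (np.linalg.norm(m) * np.linalg.norm(e)))
        if similarity >= threshold:
            return True
    return False  # route to a static fallback or a human instead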

2. Classifying Scope

LLMs possess strong Natural Language Understanding and Reasoning skills (NLU & NLR). An AI assistant should perform a number of classifications. Common classifications include the topic, user type, sentiment, and sensitivity of the message. These classifications should be specific to your company and the jobs of your AI assistant. A data model and rules engine should be used to apply your safety controls.

3. Resource Integration

Once an inbound message is determined to be in-scope, company-approved resources should be retrieved for the LLM to consult. Common resources include knowledge articles, brand facts, product catalogs, buying guides, user-specific data, or defined conversational flows and steps.

Your AI assistant should support non-LLM-based interactions to securely authenticate the end user or access sensitive resources. Authenticating users and validating data are important safety measures in many conversational flows.

4. Verifying Responses

With a response in hand, the AI should verify the answer is in scope and on brand. Fact-checking and corroboration techniques should be used to ensure the information is derived from the resource material. An outbound message should never be delivered to a customer if it cannot be verified by the context your AI has on hand.
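One simple way to approximate that corroboration step is to check that each sentence of the draft reply is sufficiently similar to some passage in the retrieved resources, and refuse to send it otherwise. This is a sketch only; `embed` is again a hypothetical embedding helper, and production systems use more sophisticated fact-checking.

```python
import numpy as np

def is_grounded(draft_reply: str, source_passages: list[str], embed,
                threshold: float = 0.7) -> bool:
    """Return True only if every sentence of the draft can be matched to
    at least one approved source passage above the similarity threshold."""
    passage_vecs = [embed(p) for p in source_passages]
    for sentence in [s.strip() for s in draft_reply.split(".") if s.strip()]:
        s_vec = embed(sentence)
        best = max(
            float(np.dot(s_vec, p) / (np.linalg.norm(s_vec) * np.linalg.norm(p)))
            for p in passage_vecs
        )
        if best < threshold:
            return False  # unverified claim: don't send, escalate instead
    return True
```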

5. Outbound Message Filtering

Outbound message filtering tactics include conducting prompt leakage analysis, running semantic similarity checks, consulting keyword blacklists, and ensuring all links and contact information are in scope for your company.
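The keyword and link checks at this stage are straightforward to picture. Here is a sketch; the blocked terms and allowed domains are placeholders you would replace with your company’s own lists.

```python
import re

BLOCKED_TERMS = {"guarantee", "refund everything"}     # placeholder blacklist
ALLOWED_DOMAINS = {"example.com", "help.example.com"}   # placeholder allowlist

def passes_outbound_filter(reply: str) -> bool:
    lowered = reply.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return False
    # Every link in the reply must point at a domain you control.
    for domain in re.findall(r"https?://([^/\s]+)", reply):
        if domain.lower() not in ALLOWED_DOMAINS:
            return False
    return True

print(passes_outbound_filter("You can update this at https://help.example.com/account."))  # True
print(passes_outbound_filter("Try this tool: https://random-site.io/fix"))                 # False
```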

6. Safety Monitoring and Analysis

Deploying AI safely also requires that you have mechanisms to capture and retrospect on the completed conversations. Collecting user feedback, tracking resource usage, reviewing state changes, and clustering conversations should be available to help you identify and reinforce the safety measures of your AI.

In addition, performing full conversation classifications will also allow you to identify emerging topics, confirm resolution rates, produce safety reports, and understand the knowledge gaps of your AI.

Other Resources

At Quiq, we actively monitor and endorse the OWASP Top 10 for Large Language Model Applications. This guide is provided to help promote secure and reliable AI practices when working with LLMs. We recommend companies exploring LLMs and evaluating AI safety consult this list to help navigate their projects.

Final Thoughts

By safely leveraging LLM technology through a Scoped AI software layer, CX leaders can:

1. Elevate Customer Experience
2. Boost Operational Efficiency
3. Enhance Decision Making
4. Ensure Consistency and Compliance

Reach out to sales@quiq.com to learn how Quiq is helping companies improve customer satisfaction and drive efficiency at the same time.

What is an AI Assistant for Retail?

Over the past few months, we’ve had a lot to say about artificial intelligence, its new frontiers, and the ways in which it is changing the customer service industry.

A natural extension of this analysis is looking at the use of AI in retail. That is our mission today. We’ll look at how techniques like natural language processing and computer vision will impact retail, along with some of the benefits and challenges of this approach.

Let’s get going!

How is AI Used in Retail?

AI is poised to change retail, as it is changing many other industries. In the sections that follow, we’ll talk through three primary AI technologies that are driving these changes, namely natural language processing, computer vision, and machine learning more broadly.

Natural Language Processing

Natural language processing (NLP) refers to a branch of machine learning that attempts to work with spoken or written language algorithmically. Together with computer vision, it is one of the best-researched and most successful attempts to advance AI since the field was founded some seven decades ago.

Of course, these days the main NLP applications everyone has heard of are large language models like ChatGPT. This is not the only way AI assistants will change retail, but it is a big one, so that’s where we’ll start.

An obvious place to use LLMs in retail is with chatbots. There’s a lot of customer interaction that involves very specific questions that need to be handled by a human customer service agent, but a lot of it is fairly banal, consisting of things like “How do I return this item” or “Can you help me unlock my account.” For these sorts of issues, today’s chatbots are already powerful enough to help in most situations.

A related use case for AI in retail is asking questions about specific items. A customer might want to know what fabric an article of clothing is made out of or how it should be cleaned, for example. An out-of-the-box model like ChatGPT won’t be able to help much, but if you’ve used a service like Quiq’s conversational CX platform, it’s possible to finetune an LLM on your specific documentation. Such a model will be able to help customers find the answers they need.

These use cases are all centered around text-based interactions, but algorithms are getting better and better at both speech recognition and speech synthesis. You’ve no doubt had the distinct (dis)pleasure of interacting with an automated system that sounded very artificial and lacked the flexibility to actually help you very much; but someday soon, you may not be able to tell from a short conversation whether you’re talking to a human or a machine.

This may cause a certain amount of concern over technological unemployment. If chatbots and similar AI assistants are doing all this, what will be left for flesh-and-blood human workers? Frankly, it’s too early to say, but the evidence so far suggests that not only is AI not making us obsolete, it’s actually making workers more productive and less prone to burnout.

Computer Vision

Computer vision is the other major triumph of machine learning. CV algorithms have been created that can recognize faces, recognize thousands of different types of objects, and even help with steering autonomous vehicles.

How does any of this help with retail?

We already hinted at one use case in the previous paragraph, i.e. automatically identifying different items. This has major implications for inventory management, but when paired with technologies like virtual reality and augmented reality, it could completely transform the ways in which people shop.

Many platforms already offer the ability to see furniture and similar items in a customer’s actual living space, and there are efforts underway to build tools that automatically size customers so they know exactly which clothes to try on.

CV is also making it easier to gather and analyze different metrics crucial to a retail enterprise’s success. Algorithms can watch customer foot traffic to identify potential hotspots, meaning that these businesses can figure out which items to offer more of and which to cut altogether.

Machine Learning

As we stated earlier, both natural language processing and computer vision are types of machine learning. We gave them their own sections because they’re so big and important, but they’re not the only ways in which machine learning will impact retail.

Another way is with increasingly personalized recommendations. If you’ve ever taken the advice of Netflix or Spotify as to what entertainment you should consume next then you’ve already made contact with a recommendation engine. But with more data and smarter algorithms, personalization will become much more, well, personalized.

In concrete terms, this means it will become easier and easier to analyze a customer’s past buying history to offer them tailor-made solutions to their problems. Retail is all about consumer satisfaction, so this is poised to be a major development.

Machine learning has long been used for inventory management, demand forecasting, etc., and the role it plays in these efforts will only grow with time. Having more data will mean being able to make more fine-grained predictions. You’ll be able to start printing Taylor Swift t-shirts and setting up targeted ads as soon as people in your area begin buying tickets to her show next month, for example.

Where are AI Assistants Used in Retail?

So far, we’ve spoken in broad terms about the ways in which AI assistants will be used in retail. In these sections, we’ll get more specific and discuss some of the particular locations where these assistants can be deployed.

In Kiosks

Many retail establishments already have kiosks in place that let you swap change for dollars or skip the trip to the DMV. With AI, these will become far more adaptable and useful, able to help customers with a greater variety of transactions.

In Retail Apps

Mobile applications are an obvious place to use recommendations or LLM-based chatbots to help make a sale or get customers what they need.

In Smart Speakers

You’ve probably heard of Alexa, a smart speaker able to play music for you or automate certain household tasks. Well, it isn’t hard to imagine their use in retail, especially as they get better. They’ll be able to help customers choose clothing, handle returns, or do any of a number of related tasks.

In Smart Mirrors

For more or less the same reason, AI-powered smart mirrors could have a major impact on retail. As computer vision improves it’ll be better able to suggest clothing that looks good on different heights and builds, for example.

What are the Benefits of Using AI in Retail?

The main reason that AI is being used more frequently in retail is that there are so many advantages to this approach. In the next few sections, we’ll talk about some of the specific benefits retail establishments can expect to enjoy from their use of AI.

Better Customer Experience and Engagement

These days, there are tons of ways to get access to the goods and services you need. What tends to separate one retail establishment from another is customer experience and customer engagement. AI can help with both.

We’ve already mentioned how much more personalized AI can make the customer experience, but you might also consider the impact of round-the-clock availability that AI makes possible.

Customer service agents will need to eat and sleep sometimes, but AI never will, which means that it’ll always be available to help a customer solve their problems.

More Selling Opportunities

Cross-selling and upselling are both terms that are probably familiar to you, and they represent substantial opportunities for retail outfits to boost their revenue.

With personalized recommendations, sentiment analysis, and similar machine-learning techniques, it will become much faster and easier to identify additional items that a customer might be interested in.

If a customer has already bought Taylor Swift tickets and a t-shirt, for example, perhaps they’d also like a fetching hat that goes along with their outfit. And if you’ve installed the smart mirrors we talked about earlier, AI will even be able to help them find the right size.

Leaner, More Efficient Operations

Inventory management is a never-ending concern in retail. It’s also one place where algorithmic solutions have been used for a long time. We think this trend will only continue, with operations becoming leaner and more responsive to changing market conditions.

All of this ultimately hinges on the use of AI. Better algorithms and more comprehensive data will make it possible to predict what people will want and when, meaning you don’t have to sit on inventory you don’t need and are less likely to run out of anything that’s selling well.

What are the Challenges of Using AI in Retail?

That being said, there are many challenges to using Artificial Intelligence in retail. We’ll cover a few of these now so you can decide how much effort you want to put into using AI.

AI Can Still Be Difficult to Use

To be sure, firing up ChatGPT and asking it to recommend an outfit for a concert doesn’t take very long. But this is a far cry from implementing a full-bore AI solution into your website or mobile applications. Serious technical expertise is required to train, finetune, deploy, and monitor advanced AI, whether that’s an LLM, a computer-vision system, or anything else, and you’ll need to decide whether you think you’ll get enough return to justify the investment.

Expense

And speaking of investment, it remains pretty expensive to utilize AI at any non-trivial scale. If you decide you want to hire an in-house engineering team to build a bespoke model, you’ll have to have a substantial budget to pay for the training and the engineers’ salaries. These salaries are still something you’ll have to account for even if you choose to build on top of an existing solution, because finetuning a model is far from easy.

One solution is to utilize an offering like Quiq. We have already created the custom infrastructure required to utilize AI in a retail setting, meaning you wouldn’t need a serious engineering force to get going with AI.

Bias, Abuse, and Toxicity

A perennial concern with using AI is that a model will generate output that is insulting, harmful, or biased in some way. For obvious reasons this is bad for retail establishments, so you’ll want to make sure that you both carefully finetune this behavior out of your models and continually monitor them in case their behavior changes in the future. Quiq also eliminates this risk.

AI and the Future of Retail

Artificial intelligence has long been expected to change many aspects of our lives, and in the past few years, it has begun delivering on that promise. From ultra-precise recommendations to full-fledged chatbots that help resolve complex issues, retail stands to benefit greatly from this ongoing revolution.

If you want to get in on the action but don’t know where to start, set up a time to check out the Quiq platform. We make it easy to utilize both customer-facing and agent-facing solutions, so you can build an AI-positive business without worrying about the engineering.

Request A Demo

Top 7 AI Trends For 2024

The end of the year is generally a time that prompts reflection about the past. But as a forward-thinking organization, we’re going to use this period instead to think about the future.

Specifically, the future of artificial intelligence (AI). We’ve written a great deal over the past few months about all the ways in which AI is changing contact centers, customer service, and more. But the pioneers of this field do not stand still, and there will no doubt be even larger changes ahead.

This piece presents our research into the seven main AI advancements for 2024, and how we think they’ll matter for you.

Let’s dive in!

What are the 2024 Technology Trends in AI?

In the next seven sections, we’ll discuss what we believe are the major AI trends to look out for in 2024.

Bigger (and Better) Generative AI Models

Probably the simplest trend is that generative models will continue getting bigger. At billions of internal parameters, we already know that large language models are pretty big (it’s in the name, after all). But there’s no reason to believe that the research groups training such models won’t be able to continue scaling them up.

If you’re not familiar with the development of AI, it would be easy to dismiss this out of hand. We don’t get excited when Microsoft releases some new OS with more lines of code than we’ve ever seen before, so why should we care about bigger language models?

For reasons that remain poorly understood, bigger language models tend to mean better performance, in a way that doesn’t hold for traditional programming. Writing 10 times more Python doesn’t guarantee that an application will be better – it’s more likely to be the opposite, in fact – but training a model that’s 10 times bigger probably will get you better performance.

This is more profound than it might seem at first. If you’d shown me ChatGPT 15 years ago, I would’ve assumed that we’d made foundational progress in epistemology, natural language processing, and cognitive psychology. But, it turns out that you can just build gargantuan models and feed them inconceivable amounts of textual data, and out pops an artifact that’s able to translate between languages, answer questions, write excellent code, and do all the other things that have stunned the world since OpenAI released ChatGPT.

As things stand, we have no reason to think that this trend will stop next year. To be sure, we’ll eventually start running into the limits of the “just make it bigger” approach, but it seems to be working quite well so far.

This will impact the way people search for information, build software, run contact centers, handle customer service issues, and so much more.

More Kinds of Generative Models

The basic approach to building a generative model fits well with producing text, but it is not limited to that domain.

DALL-E, Midjourney, and Stable Diffusion are three well-known examples of image-generation models. Though these models sometimes still struggle with details like perspective, faces, and the number of fingers on a human hand, they’re nevertheless capable of jaw-dropping output.

Here’s an example created in ~5 minutes of tinkering with DALL-E 3:
[DALL-E 3 sample image: “biggest questions about AI”]

As these image-generation models improve, we expect they’ll come to be used everywhere images are used – which, as you probably know, is a lot of places. YouTube thumbnails, murals in office buildings, dynamically created images in games or music videos, illustrations in scientific papers or books, initial design drafts for cars, consumer products, etc., are all fair game.

Now, text and images are the two major generative AI use cases with which everyone is familiar. But what about music? What about novel protein structures? What about computer chips? We may soon have models that design the chips used to train their successors, with different models synthesizing the music that plays in the chip fabrication plant.

Open Source vs. Closed Source Models

Concerns around automation and AI-specific existential risk aren’t new, but one major front that’s opened in that debate is whether models should be closed source or open source.

“Closed source” refers to a paradigm in which a code base (or the weights of a generative model) is kept under lock and key, available only to the small team of engineers working on it. “Open source”, by contrast, is the opposing philosophy: the best way to create safe, high-quality software is to disseminate the code far and wide, giving legions of people the opportunity to find and fix flaws in its design.

There are many ways in which this interfaces with the broader debate around generative AI. If emerging AI technologies truly present an existential threat, as the “doomers” claim, then releasing model weights is spectacularly dangerous. If you’ve built a model that can output the correct steps for synthesizing weaponized smallpox, for example, open-sourcing it would mean that any terrorist anywhere in the world could download and use it for that purpose.

The “accelerationists”, on the other hand, retort by saying that the basic dynamics of open-source systems hold for AI as they do for every other kind of software. Yes, making AI widely available means that some people will use it to harm others, but it also means that you’ll have far more brains working to create safeguards, guardrails, and sentinel systems able to thwart the aims of the wicked.

It’s still far too early to tell whether AI researchers will converge on the open- or closed-source approach, but we predict that this will continue to be a hotly contested issue. Though it seems unlikely that OpenAI will soon release the weights for its best models, there will be competitor systems that are almost as good, which anyone can download, modify, and deploy. We also think there’ll be more leaks of weights, such as what happened with Meta’s LLaMA model in early 2023.

AI Regulation

For decades, debates around AI safety occurred in academic papers and obscure forums. But with the rise of LLMs, all that changed. It was immediately clear that they would be incredibly powerful, amoral tools, suitable for doing enormous good and enormous harm.

A consequence of this has been that regulators in the United States and abroad are taking notice of AI, and thinking about the kind of legal frameworks that should be established in response.

One manifestation of this trend was the parade of Congressional hearings that took place throughout 2023, with luminaries like Gary Marcus, Sam Altman, and others appearing before lawmakers to weigh in on this technology’s future and likely impact.

On October 30th, 2023, the Biden White House issued an executive order meant to set the stage for new policies concerning dual-use foundation models. It gives the executive branch around a year to conduct a sweeping series of reports, with the ultimate aim being to create guidelines for industry as it continues developing powerful AI models.

The gears of government turn slowly, and we expect it will be some time before anything concrete comes out of this effort. Even when it does, questions about its long-term efficacy remain. How will it help to stop dangerous research in the U.S., for example, if China charges ahead without restraint? And what are we to do if some renegade group creates a huge compute cluster in international waters, usable by anyone, anywhere in the world wanting to train a model bigger than GPT-4?

These and other questions will have to be answered by lawmakers and could impact the way AI unfolds for the next century.

The Rise of AI Agents

We’ve written elsewhere about the many ongoing attempts to build AI systems – agents – capable of pursuing long-range goals in complex environments. For all that it can do, ChatGPT is unable to take a high-level instruction like “run this e-commerce store for me” and get it done successfully.

But that may change soon. Systems like Auto-GPT, AssistGPT, and SuperAGI are all attempts to augment existing generative AI models to make them better able to accomplish larger goals. As things stand, agents have a notable tendency to get stuck in unproductive loops or to otherwise arrive at a state they can’t get out of on their own. But we may only be a few breakthroughs away from having much more robust agents, at which point they’ll begin radically changing the economy and the way we live our lives.
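To give a flavor of what these systems do under the hood, here is a heavily simplified sketch of the plan-act-observe loop such agents are built around. The call_llm function is a hypothetical stand-in for a real language-model call, and the loop-detection check is our own illustration, not how Auto-GPT or its peers actually work.

```python
# A minimal, hypothetical agent loop: the model proposes an action, the
# environment returns an observation, and the cycle repeats until the goal
# is met or the agent appears stuck. call_llm is a placeholder, not a real
# API from Auto-GPT, AssistGPT, or SuperAGI.

def call_llm(goal: str, history: list) -> str:
    """Stand-in for a language model proposing the next action."""
    return f"search: {goal}"  # a real model would reason over the history

def run_agent(goal: str, max_steps: int = 10) -> list:
    history = []
    for _ in range(max_steps):
        action = call_llm(goal, history)
        if action in history[-3:]:  # crude check for an unproductive loop
            history.append("halt: repeating the same action, escalating to a human")
            break
        history.append(action)
        # A real system would execute the action against tools or APIs here
        # and feed the observation back into the next iteration.
    return history

print(run_agent("find three suppliers of eco-friendly packaging"))
```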

New Approaches to AI

When people think about “AI”, they’re usually thinking of a machine learning or deep learning system. But these approaches, though they’ve been very successful, are but a small sample of the many ways in which intelligent machines could be built.

Neurosymbolic AI is another. It typically combines a neural network (such as the ones that power LLMs) with a symbolic reasoning system able to make arguments, weigh evidence, and do many of the other things we associate with thinking. Given the notable tendency of LLMs to hallucinate fabricated or incorrect information, neurosymbolic scaffolding could make them far more reliable and useful.
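
As a rough illustration of the idea, the sketch below pairs a stubbed language-model answer with a small symbolic check that verifies any arithmetic claims before the answer is trusted. The draft_answer function is a hypothetical placeholder, and real neurosymbolic systems are far more sophisticated, but the division of labor is the same: the neural side generates, the symbolic side verifies.

```python
# Hypothetical neurosymbolic scaffolding: a neural model drafts an answer,
# and a symbolic layer verifies its checkable claims before we trust it.
import re

def draft_answer(question: str) -> str:
    """Stand-in for an LLM call; a real system would query a model here."""
    return "The invoice total is 120 + 35 = 155 dollars."

def arithmetic_holds(text: str) -> bool:
    """Symbolically verify every 'a + b = c' claim found in the text."""
    claims = re.findall(r"(\d+)\s*\+\s*(\d+)\s*=\s*(\d+)", text)
    return all(int(a) + int(b) == int(c) for a, b, c in claims)

answer = draft_answer("What is the invoice total?")
print(answer if arithmetic_holds(answer) else "Failed symbolic check; escalating.")
```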

Causal AI is yet another. These systems are built to learn causal relationships in the world, such as the fact that dropping a glass on a hard surface will cause it to break. This, too, is a crucial piece of what’s missing from current AI systems.
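
Here is a toy version of that glass example as a structural causal model, where intervening on the cause (“dropped”) visibly shifts the effect (“broken”). It is purely illustrative; real causal AI systems aim to learn this kind of structure from data rather than having it written in by hand.

```python
# A toy structural causal model for the glass example: "dropped" causes
# "broken", and intervening on the cause changes the outcome. Probabilities
# are made up for illustration.
import random

def glass_breaks(dropped: bool) -> bool:
    """The glass breaks with high probability if dropped, rarely otherwise."""
    return random.random() < (0.9 if dropped else 0.02)

trials = 10_000
p_break_dropped = sum(glass_breaks(True) for _ in range(trials)) / trials
p_break_held = sum(glass_breaks(False) for _ in range(trials)) / trials
print(f"P(broken | do(dropped))     ~ {p_break_dropped:.2f}")
print(f"P(broken | do(not dropped)) ~ {p_break_held:.2f}")
```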

Quantum Computing and AI

Quantum computing represents the emergence of the next great computational substrate. Whereas today’s “classical” computers rely on lightning-fast transistor operations, quantum computers exploit quantum phenomena, such as entanglement and superposition, to tackle problems that would take even the grandest classical supercomputers millions of years.

Naturally, researchers started thinking about applying quantum computing to artificial intelligence very early on, but it remains to be seen how useful it will be. Quantum computers excel at certain kinds of tasks, especially combinatorics, optimization, and anything built on linear algebra. That last category undergirds huge amounts of AI work, so it stands to reason that quantum computers will speed up at least some of it.

AI and the Future

It would appear as though the Pandora’s box of AI has been opened for good. Large language models are already changing many fields, from copywriting and marketing to customer service and hospitality – and they’ll likely change many more in the years ahead.

This piece has discussed a number of the most important AI industry trends to look out for in 2024, and it should help anyone interfacing with these technologies prepare for what may come.