From Contact Center to AI Leader: Embracing AI to Upgrade CX (Webinar Recap)

The evolution of contact centers and customer experience (CX) has reached a pivotal moment. While traditional setups face mounting challenges, such as high agent turnover rates, system complexities, and skyrocketing customer expectations, forward-thinking businesses are turning to AI to flip the script. But how can companies integrate AI effectively, while ensuring it enhances both customer satisfaction and business outcomes, and delivers ROI?

This was the focus of a recent webinar I led with my colleague, Quiq VP of EMEA Chris Humphris, as we explored the future of customer service using agentic AI: "From Contact Center to Agentic AI Leader: Embracing AI to Upgrade CX."

Below, I break down the key insights and takeaways from the session to help you stay ahead in the age of AI-powered CX.

The case for AI in modern contact centers

Traditional Contact Centers Need an AI Revolution

Current challenges within contact centers

Contact centers today face several pain points that require immediate attention:

  • Integration and tech stack complexities

Legacy systems, disparate platforms, and inconsistent omnichannel experiences hinder operational efficiency. Two-thirds of contact centers report difficulties with technology integration and orchestration.

  • The productivity crisis

Spiraling agent turnover rates, averaging 30–45%, coupled with increasing complexity in customer interactions and high training costs, are pushing teams to their limits.

  • ROI barriers to AI adoption

While 84% of organizations plan to invest in AI, only 16% have successfully implemented it (Source). Confusion over ROI metrics, coupled with fears of disruption and limited expertise, often stalls projects.

The AI opportunity

AI presents a way out of these obstacles, offering tools that streamline workflows, enhance customer satisfaction, and deliver measurable ROI. Businesses that incorporate AI into their CX can:

  • Extend their reach with scalable self-service capabilities.
  • Inject intelligence into customer interactions, ensuring personalized and proactive engagements.
  • Gain insights from real-time analytics for better decision-making.

Agentic AI stands out as the next big step: it promises to automate complex tasks; to adapt, learn, and make contextual decisions akin to human intelligence; and to support human agents in their work, improving performance and maintaining consistency within the contact center.

Audience poll #1: Have you adopted AI already?

After outlining how AI stands to offer a major helping hand, I asked the audience if they’ve already adopted AI.

Poll Question #1

I was surprised to see that most respondents said they had already adopted AI. For those who hadn't, I wanted to know what their timeline looked like.

Audience poll #2: What’s your current timeline for implementing AI in the contact center?

Poll Question #2

Unsurprisingly, those who have not yet implemented AI in their contact centers are looking to do so this year and start harnessing the many benefits we’ve highlighted.

Understanding agentic AI and its transformative potential

What is agentic AI?

Unlike static AI systems, agentic AI dynamically plans, executes, and adapts strategies to achieve outcomes, much like a skilled human employee. It:

  1. Autonomously executes multi-step, complex tasks.
  2. Changes tactics when initial approaches fail.
  3. Maintains contextual understanding and continuously learns from outcomes.
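The three behaviors above can be illustrated with a minimal sketch. This is not Quiq's implementation; the tactic functions and their outcomes are hypothetical stand-ins showing the "try, fail, switch tactics" pattern:

```python
from typing import Callable

def agentic_loop(goal: str, tactics: list[Callable[[str], bool]], max_attempts: int = 3) -> bool:
    """Try each tactic in turn until one achieves the goal,
    switching approaches when the current one fails."""
    for attempt, tactic in enumerate(tactics[:max_attempts], start=1):
        if tactic(goal):
            print(f"Goal '{goal}' achieved on attempt {attempt} via {tactic.__name__}")
            return True
        print(f"{tactic.__name__} failed; switching tactics")
    return False

# Hypothetical tactics: a knowledge-base lookup that misses,
# then an order-API call that succeeds.
def search_kb(goal: str) -> bool:
    return False  # simulate a miss

def call_order_api(goal: str) -> bool:
    return True   # simulate success

resolved = agentic_loop("track order #1234", [search_kb, call_order_api])
```

A static chatbot would stop after the first miss; the loop's willingness to change tactics is what the "agentic" label refers to.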

Types of agentic AI

Quiq’s platform facilitates agentic AI in three versatile forms:

  • AI Agents: Fully autonomous systems capable of managing customer queries and completing tasks, freeing up human agents for complex responsibilities.
  • AI Assistants: These support human agents by automating repetitive tasks, suggesting real-time responses, and enhancing service quality.
  • Agentic AI Services: Seamlessly integrate with existing workflows via APIs, allowing enterprises to add advanced AI capabilities to the other tools they already use without overhauling legacy systems.

These innovations allow businesses to tackle rising customer demands while maintaining operational efficiency.

Using AI to revolutionize CX across key areas

1. Customer service excellence

Through AI-powered solutions, businesses can enhance customer service by providing:

  • Seamless multi-channel experiences: AI integrates across platforms like WhatsApp, website live chats, and social media to ensure consistent support.
  • Intelligent escalations: When AI can’t resolve an issue, it transfers the case to a human agent with full contextual information, enabling smoother transitions.
  • Proactive outreach and updates: AI proactively sends reminders and notifications like, “Your subscription is renewing next week. Tap here to update payment info,” increasing engagement and retention.
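The "full contextual information" behind an intelligent escalation can be pictured as a structured handoff payload. The field names below are illustrative, not a real Quiq schema, assuming a simple transcript-plus-attempts shape:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class EscalationPayload:
    """Context an AI agent hands to a human agent on escalation.
    Field names are hypothetical, chosen for illustration."""
    customer_id: str
    issue_summary: str
    transcript: list[str] = field(default_factory=list)
    attempted_resolutions: list[str] = field(default_factory=list)

def escalate(payload: EscalationPayload) -> dict:
    # In a real system this would be posted to the agent desktop;
    # here we just serialize it so nothing is lost in the handoff.
    return asdict(payload)

handoff = escalate(EscalationPayload(
    customer_id="cust-42",
    issue_summary="Refund request outside 30-day window",
    transcript=["Customer: I want a refund", "AI: Checking policy..."],
    attempted_resolutions=["policy_lookup"],
))
```

The point of the structure is that the human agent never has to ask the customer to repeat themselves: summary, transcript, and what the AI already tried all arrive together.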

2. Elevating agent productivity

AI doesn’t just improve customer-facing operations—it also empowers human agents by:

  • Automating mundane back-office tasks.
  • Providing real-time recommendations to handle inquiries more effectively.
  • Offering predictive insights on customer intent.

The result? Faster response times, improved accuracy, and greater job satisfaction among agents.

3. Driving data-driven personalization

AI’s ability to process large datasets in real-time ensures interactions are tailored to individual customers. By analyzing order history, browsing behavior, and past inquiries, AI can craft hyper-personalized responses. The result? Stronger customer relationships.

Case study highlight:

Customer Success Story

A leading flooring retailer in the UK leveraged AI on WhatsApp to redefine its customer journeys. Features included:

  • Proactive order updates and stock alerts.
  • AI-driven personalization based on preferences and past purchases.
  • Integration with multiple channels for consistent communication.

These efforts led to higher CSAT scores, faster resolutions, and increased revenue.

Overcoming barriers to AI adoption

Debunking ROI fears

For hesitant decision-makers, I’ll re-emphasize one critical insight: AI’s ROI becomes evident when implemented with clear objectives and measurable KPIs. Quiq’s quick-to-value solutions ensure businesses start seeing operational gains almost instantly.

Phasing implementation for success

Adopting AI doesn’t require an all-at-once approach. Start small, focusing on areas like customer service or self-service automation, where AI can deliver immediate wins. Once comfortable, scale adoption across more complex workflows.

Ensuring seamless human-AI collaboration

One common pitfall is neglecting how AI and human teams collaborate. Businesses must prioritize:

  • Smooth handoffs between AI and live agents.
  • Continuous learning opportunities where AI adapts based on human agent interactions.
  • Comprehensive training to ensure agents are equipped to leverage AI.

By bridging these gaps, organizations can future-proof their operations while setting themselves up as AI leaders.

Next steps for becoming an AI-powered CX leader

The road from a traditional contact center to an AI-powered CX leader has its hurdles, but the rewards far outweigh the challenges. Companies must stay focused on:

  1. Breaking down technical barriers through innovative platforms like Quiq.
  2. Investing in agentic AI to redefine operational efficiency.
  3. Starting with small, strategic AI interactions before scaling solutions to achieve omnichannel excellence.

Quiq’s agentic AI platform streamlines implementation, ensuring businesses can unlock the full value of AI without overhauling their existing systems. Businesses across industries—from eCommerce to retail—are already seeing the benefits of intelligent automation, proactive engagement, and personalized service at scale.

If you’re ready to move to the next generation of contact centers and transform your CX, start your AI-powered transformation today. Visit Quiq’s AI Studio to explore how we can integrate scalable AI into your workflows.

Enterprise AI Chatbot Solutions are Failing Businesses: Why Agentic AI is the Path Forward

The integration of AI into enterprise operations is no longer a futuristic concept—it’s the present. From customer service to supply chain management, enterprises are adopting AI at an unprecedented rate, transforming workflows and outcomes. The global AI chatbot market is expected to be worth $455 million by the end of 2027, underscoring its growing importance. But while conventional AI chatbots have proven beneficial, they are no longer enough in an era demanding higher adaptability, smarter decision-making, and process integration.

Enter agentic AI, the next leap in enterprise technology. Evolving beyond chatbots, agentic AI agents offer enterprises proactive and autonomous solutions designed to optimize operations across departments.

This blog will explore the limitations of traditional enterprise AI chatbot solutions, introduce agentic AI as a transformational catalyst, and highlight how enterprise leaders can leverage it for sustained competitive advantage.

What is an enterprise AI chatbot solution?

Enterprise AI chatbot solutions are software platforms driven by artificial intelligence, natural language processing (NLP), and machine learning (ML) to automate customer interactions and internal processes. With natural language understanding (NLU), these chatbots can interpret customer intent, offer personalized responses, and escalate complex issues to human agents.

Legacy conversational AI in enterprise AI chatbot solutions

Conversational AI in enterprise solutions represents technology that enables natural, human-like interactions between businesses and their customers through intelligent chatbots and virtual assistants. These systems combine the technologies described above—Natural Language Processing (NLP), Machine Learning, and Deep Learning—to understand, process, and respond to human language in context.

But as we will see, agentic AI takes over from conversational AI to handle complex dialogues, remember conversation history, understand user intent, and provide personalized responses across multiple channels and languages in a way that prior-gen AI could not. These solutions can integrate with existing business systems (like CRM, ERP, and knowledge bases) to automate customer service, sales support, and internal operations, while continuously learning from interactions to improve accuracy and effectiveness.

Core features of enterprise AI chatbot solutions

1. Handling high volumes of requests

Enterprise chatbots aim to manage thousands of simultaneous interactions, offering round-the-clock availability without human intervention.

2. Escalation to human agents

When complex issues arise, chatbots transfer customers to live agents without losing conversation context for continuity and smooth interactions.

3. Integration with other enterprise tools

Integrating AI chatbots with existing tech stacks improves efficiency and customer experience. By connecting with tools like CRMs, ERPs, HR systems, and helpdesk software, chatbots access data to deliver personalized, accurate responses. For instance, they can check inventory, update orders, or enable targeted upselling, streamlining operations and enhancing service quality for customers and employees.
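The inventory-check example above can be sketched in a few lines. The dictionaries stand in for real CRM and ERP lookups, and the function name and data shapes are assumptions made for illustration:

```python
# Hypothetical in-memory stand-ins for a CRM and an inventory system.
CRM = {"cust-7": {"name": "Dana", "tier": "gold"}}
INVENTORY = {"sku-oak-12": 3}

def answer_stock_question(customer_id: str, sku: str) -> str:
    """Combine CRM data and inventory data into one personalized reply."""
    customer = CRM.get(customer_id, {"name": "there", "tier": "standard"})
    in_stock = INVENTORY.get(sku, 0)
    if in_stock > 0:
        return f"Hi {customer['name']}, {sku} is in stock ({in_stock} left)."
    return f"Hi {customer['name']}, {sku} is out of stock; want an alert when it returns?"

reply = answer_stock_question("cust-7", "sku-oak-12")
print(reply)
```

The value of the integration is exactly this join: neither system alone can produce a response that is both personalized and accurate about stock.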

4. Support for internal processes

Beyond customer service, chatbots help employees with onboarding, training, and data collection, making them indispensable for growing enterprises.

Benefits of enterprise AI chatbot solutions

Cost savings

Automating repetitive tasks reduces reliance on human agents, leading to savings in labor costs.

Enhanced operations

Chatbots streamline workflows, reduce wait times, and improve customer satisfaction scores.

Scalable and consistent service

Whether answering FAQs or managing complex queries, these bots offer consistent service quality at scale.

However, despite their utility, traditional enterprise AI chatbots remain reactive, responding to instructions but unable to anticipate problems or adapt to changing conditions. This is where agentic AI paves the way forward.

From chatbots to agentic AI for enterprises

Agentic AI represents an evolution in enterprise artificial intelligence. While chatbots are reactive tools limited to predefined interactions, agentic AI agents are capable of proactive decision-making and autonomous action. With capabilities rooted in real-time adaptation, agentic AI has redefined AI’s role in the enterprise landscape.

Chatbots vs. agentic AI agents

Reactive vs. proactive

Chatbots react to user queries; agentic AI anticipates needs before they are expressed. For example, instead of merely answering a customer’s inquiry about delayed shipments, agentic AI could autonomously identify delays, notify affected customers, and initiate remediation.

Static decision-making vs. dynamic learning

Where chatbots rely on static rules, agentic AI improves continuously by learning from interactions, refining its predictive capabilities.

Siloed functionality vs. cross-departmental efficiency

Traditional chatbots typically serve a single function (e.g., customer service). Agentic AI spans departments, breaking silos by automating workflows in HR, supply chains, marketing, and more.

Cost vs. ROI

Agentic AI is already providing faster time-to-value than last-gen enterprise AI chatbot solutions. While implementing agentic AI requires an initial investment in technology and training, the returns justify the expenditure. Organizations typically see ROI through reduced operational costs, increased efficiency in process completion, higher customer satisfaction scores, and improved employee productivity.

When evaluating costs, consider not just platform pricing, but also integration expenses, training requirements, and maintenance—then weigh these against projected gains in automation, reduced error rates, faster resolution times, and the ability to scale operations without proportional increases in headcount.

Practical applications of agentic AI in enterprises

1. Customer service

Agentic AI can revolutionize customer service by going beyond simply answering customer queries. Imagine an AI that not only resolves issues efficiently, but also analyzes customer sentiment, behavior, and usage patterns to predict potential churn well in advance.

By identifying dissatisfied customers, it can automatically trigger personalized retention efforts, such as offering discounts, tailored recommendations, or proactive solutions, ensuring a more seamless and satisfying customer experience while boosting loyalty.

2. Human resources

Agentic AI can significantly streamline and enhance human resources operations. For example, it can handle the initial stages of hiring by screening resumes and applications for relevant skills and experience, thereby reducing the workload of HR teams.

It can also manage interview scheduling, ensuring candidates and hiring managers are aligned with minimal manual intervention. Once employees are onboarded, agentic AI can be used to monitor engagement and morale through sentiment analysis of internal communications or surveys, helping HR teams identify potential issues, such as burnout or dissatisfaction, before they escalate. This proactive approach fosters a healthier and more motivated workforce.

3. Supply chain management

In the realm of supply chain management, agentic AI can help businesses maintain agility and cost-efficiency. By analyzing historical data, market trends, and real-time inputs, it can accurately anticipate demand surges and adjust inventory levels dynamically to prevent shortages or overstocking. This is particularly valuable during peak seasons or unforeseen disruptions.

Moreover, agentic AI can optimize logistics by suggesting the most efficient routes and delivery schedules, reducing costs and improving supply chain performance. By automating these complex processes, businesses can react faster to changes in demand and ensure smoother operations.

These examples illustrate why enterprises can no longer rely solely on static chatbot solutions. Agentic AI offers dynamic, intelligent, and proactive capabilities that go beyond traditional automation, driving better outcomes across various business functions. Investing in these advanced AI solutions is becoming essential for staying competitive.

9 key features that set agentic AI apart in enterprise applications

To fully grasp agentic AI’s potential, it’s essential to understand the distinct features that differentiate it from its chatbot predecessors.

1. Contextual understanding

Agentic AI excels at maintaining context across multi-turn conversations, enabling more natural, human-like interactions compared to general chatbots, which often reset or lose track of context.

2. Proactive adaptability

Agentic AI evolves dynamically by analyzing patterns, allowing it to predict user needs and act without user prompting. For example, an agentic AI might automatically notify customers of service disruptions and provide alternatives.

3. Enhanced decision-making

Agentic AI provides real-time data-driven insights, enabling businesses to act swiftly and effectively. By analyzing patterns, it identifies opportunities and offers strategic recommendations.

4. Scalability without compromise

Despite handling vast interactions, agentic AI maintains the precision and personalization that differentiate high-quality customer experiences from generic ones.

5. Dynamic integrations

With the ability to integrate into multiple systems—be it CRMs or ERPs—agentic AI streamlines sophisticated workflows and data sharing, and facilitates cross-departmental communication effortlessly.

6. Multilingual capabilities

Designed for global enterprises, agentic AI can carry out region-specific conversations in multiple languages, ensuring effective communication across borders.

7. Security and compliance

Given the growing scrutiny of AI technologies, agentic AI comes with built-in safeguards to ensure user data is protected and compliance thresholds are met.

8. Human handoff recognition

Unlike basic chatbots that can create frustrating experiences when escalating to human agents, agentic AI possesses sophisticated recognition capabilities to identify when human intervention is necessary.

The system can intelligently determine the complexity and emotional nuance of interactions, seamlessly transferring conversations to human agents at the optimal moment, while providing them with full context and relevant customer data to ensure a smooth transition.

9. Learning and adaptation

Agentic AI continuously learns and adapts from interactions, improving over time and delivering increasingly accurate and efficient responses.

How to get started with agentic AI

Transitioning from conventional chatbot solutions to agentic AI may seem daunting, but it can be achieved with a structured approach.

Step 1: Conduct a needs assessment

Evaluate your enterprise’s current processes and identify areas where greater autonomy and efficiency are required.

Step 2: Choose the right agentic AI solution

You’ll need to decide whether to build or buy your AI, or buy-to-build. Prioritize solutions like Quiq’s AI Studio that are built around practices like scalability, integration, and observability.

Step 3: Plan for phased implementation

Adopt a phased strategy to minimize operational disruptions during the transition from traditional tools to agentic AI systems.

Step 4: Train your teams

Equip employees with the resources and skills needed to leverage agentic AI effectively within their workflows.

Step 5: Monitor and optimize

Continuously analyze the impact of agentic AI on KPIs like cost savings, customer satisfaction, and employee productivity. Use this data to refine operations.

Agentic AI is the strategic advantage of tomorrow

The transition from basic enterprise AI chatbot solutions to the cutting-edge potential of agentic AI has begun. Enterprises that adopt this new technology will unlock operational efficiencies, improve customer experiences, and gain competitive advantages that were once unimaginable.

Agentic AI is not just a tool—it’s a strategy for building future-ready enterprises prepared for the demands of a dynamic business ecosystem.

Apple Business Updates – A New Way To Proactively Engage Customers on Apple Messages for Business

In mid-September 2024, Apple announced an exciting improvement to Messages for Business called Business Updates that will allow your business to proactively contact your customers in specific use cases, improving the customer experience and security, and making it easier for you to connect with your customers.

What is Apple Messages for Business?

As you’re no doubt aware, one of the most used apps on an iPhone is the Messages app. Apple Messages for Business is the technology that makes it possible for businesses to interact with their customers using Apple Messages. For customers, their conversations with businesses live alongside their conversations with other Apple users (blue bubbles) and SMS contacts (green bubbles), and work just like any other conversation.

This is a powerful way for contact centers like yours to reach the more than 1 billion people worldwide who use iPhones for daily communication, so it’s worth paying careful attention to.

Apple’s Big Announcement – Business Updates

Business Updates will allow you to reach out to your customers proactively, in a private and secure way, utilizing just their phone number and Apple’s pre-approved templates.

This will have positive impacts on both customer experience and your business. As you can see, there’s no downside here—which is why we’re so excited about it!

1. How will Business Updates Improve Customer Experiences?

Previously, Apple provided a world-class branded experience for businesses to message with their customers right inside the Messages app everyone is used to. To prevent spam and protect privacy, Apple required that customers send the first message. This limited the use cases for Apple Messages for Business to inbound customer support questions and, compared to SMS, eliminated the ability to send proactive notifications, such as order updates.

Apple Business Updates lets businesses send proactive messages to customers—and it protects against spam by only allowing this option for approved use cases. This is great for business, too: a little over 60% of customers have stated they want businesses to be more proactive in reaching out, and (in a charming coincidence), offering text messaging can reduce per-customer support costs by about 60%. It’s nice when things work out like that!

Let’s say a little bit more about those use cases. Initially, Apple is focused on sending proactive, business-initiated messages related to orders, but a “Connect Using Messages” notification is also supported, which businesses can use to switch phone calls to Apple Messages.

The data indicates that IVR is a sensible self-service option, but this feature gives customers the choice to move from a phone call to text messaging, meaning you can better meet them where they are and honor their preferences.

This is all done using templates. The full list can be found in Apple’s documentation, but here are a few samples. The first two are examples of “Connect Using Messages” which could be used to offer a customer the option to switch a phone call to messages:

Apple Messages for Business

2. How will the Apple Messages for Business Update Improve Your Business?

Now, let’s turn to the other side of the equation, the impact of the announcement on your business operations.

Apple has released Business Updates in iOS 18, the newest iPhone operating system that was announced in September 2024, allowing businesses that work with an Apple Messaging Service Provider (MSP), like Quiq, to initiate a conversation with a customer from their branded experiences. Order updates and converting calls to messaging (discussed above) are two obvious early use cases.

Consistent with Apple’s commitment to security, Apple does not read messages or store conversations. In a world more and more besieged by data breaches, hacks, and invasions of privacy, your users need not fear that Apple is using the messages inappropriately.

A final note: Android devices, or devices that do not support Apple Messages for Business, will automatically fall back to SMS when messages are sent in Quiq. This means that you can configure your business processes to send notifications to all customers and Quiq will make sure they are delivered on the best possible channel.
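The fallback behavior described above boils down to a simple channel-selection rule. This is an illustrative stand-in, not Quiq's actual routing code; the function names and the `supports_amb` flag are assumptions:

```python
def pick_channel(supports_apple_messages: bool) -> str:
    """Prefer Apple Messages for Business when the device supports it,
    otherwise fall back to SMS."""
    return "apple_messages_for_business" if supports_apple_messages else "sms"

def send_notification(device: dict, text: str) -> dict:
    # The business sends once; the platform decides the channel per device.
    channel = pick_channel(device.get("supports_amb", False))
    return {"to": device["phone"], "channel": channel, "text": text}

msg = send_notification({"phone": "+15551234567", "supports_amb": False},
                        "Your order has shipped.")
```

The key design point is that the sender configures one notification flow; per-device channel selection happens transparently at delivery time.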

The Future of Apple Messages for Business

Contact centers and CX teams are always looking for new ways to better meet customer needs, and this announcement opens up some exciting possibilities. You can now reach out in more ways and integrate more robustly with the rest of the Apple ecosystem, leading to a reduction in distraction and search fatigue for your users—and a reduction in expenses for you.

If you want to learn more about how Quiq enables Apple Messages for Business, you can do that here.

Reinventing Customer Support: How Contact Center AI Delivers Efficiency Like Never Before

Contact centers face unprecedented pressure managing sometimes hundreds of thousands of daily customer interactions across multiple channels. Traditional approaches, with their rigid legacy systems and manual processes, often buckle under these demands, leading to frustrated customers and overwhelmed agents. This was certainly the case during the past few years, when many platforms and processes collapsed under the weight of astronomical volumes due to natural disasters and other unplanned events.

So we set out to build a solution to tackle these pressures.

Our solution? Contact center AI – an agentic AI-based solution that transforms how businesses handle customer support.

In this article, I will give you a lay of the contact center AI land. I’ll explain what it is and how it’s best used, as well as ways to start implementing it.

What is contact center AI?

Contact center AI represents a sophisticated fusion of artificial intelligence and machine learning technologies designed to optimize every aspect of customer service operations. It’s more than just basic automation—it’s about creating smarter, more efficient systems that enhance both customer and agent experiences.

This advanced technology incorporates tools like Large Language Models (LLMs), which allow it to understand and respond to customer queries in a conversational and human-like manner. It also leverages real-time transcription, allowing customer interactions to be recorded and analyzed instantly, providing actionable insights for improving service quality. Additionally, intelligent task automation streamlines repetitive tasks, freeing agents to focus on more complex customer needs.

By understanding customer intent, analyzing context, and processing natural language queries, contact center AI can even make rapid, data-driven decisions to determine the best way to handle every interaction.

Whether routing a customer to the right department or providing instant answers through AI agents, this technology ensures a more dynamic, responsive, and efficient customer service environment. It’s a game-changer for businesses looking to improve operational efficiency and deliver exceptional customer experiences.

AI-powered solutions for contact center challenges

1. Managing high volumes efficiently

During peak periods, managing high customer interaction volumes can be a significant challenge for contact centers. This is where contact center AI steps in, offering intelligent automation and advanced routing capabilities to streamline operations.

AI-powered systems can automatically deflect routine, low-value inquiries, like “Where’s my order?” questions or account updates, to AI agents that provide quick and accurate responses. This ensures that customers get instant answers to simpler questions without waiting.

Meanwhile, human agents are free to focus on more complex or sensitive cases that require their expertise. This smart delegation not only reduces wait times, but also helps maintain high customer satisfaction levels by ensuring every interaction is handled appropriately. Each human agent has all the information gathered in the interaction at the start of the conversation, eliminating repetition and frustration.

2. Real-time AI to empower your agents

Injecting generative AI into your contact center empowers human agents by significantly enhancing their efficiency and effectiveness in managing customer interactions. These AI systems provide real-time assistance during conversations, suggesting responses for agents to send, as well as taking action on their behalf when appropriate—like adding a checked bag to a customer’s flight.

This gives agents the time to focus on issues that require human judgment, reducing the effort and time needed to resolve customer concerns. The seamless collaboration between AI and human agents elevates the quality of customer service, boosts agent productivity, and enhances customer satisfaction.

3. Improving complex case routing

Advanced AI solutions now integrate into various systems to streamline customer service operations. These systems analyze multiple factors, including customer history, intent, preferences, and the unique expertise of available agents, to match each case to the most suitable representative. Then, AI can analyze call data in real time, continuously optimizing routing processes to further enhance efficiency during high-demand periods.

By ensuring the right agent handles the right query from the start, these AI-driven systems significantly enhance first-call resolution rates, reduce wait times, and improve customer satisfaction. This not only boosts operational efficiency, but also fosters stronger customer loyalty and trust in the long term.
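The matching logic described above can be sketched as a scoring function. This is a simplified illustration under assumed data shapes (skill sets plus a current-load count), not a real routing engine:

```python
def route_case(case_topics: set[str], agents: list[dict]) -> dict:
    """Score each agent by skill overlap with the case, breaking ties
    in favor of the agent with the lighter current load."""
    def score(agent: dict) -> tuple[int, int]:
        overlap = len(case_topics & set(agent["skills"]))
        return (overlap, -agent["active_cases"])
    return max(agents, key=score)

agents = [
    {"name": "Ana", "skills": ["billing", "refunds"], "active_cases": 4},
    {"name": "Ben", "skills": ["shipping"], "active_cases": 1},
]
best = route_case({"billing"}, agents)  # Ana wins on skill overlap
```

A production system would fold in the other signals the post mentions (customer history, intent, real-time call data), but they would enter the same way: as additional terms in the score.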

4. Enabling 24/7 customer support

Modern consumers expect round-the-clock support, but maintaining a full staff 24/7 can be both costly and impractical for many businesses—especially if they require multilingual global support. AI-powered virtual agents step in to bridge this gap, offering reliable and consistent assistance at any time of day or night.

These tools are designed to handle a wide range of customer inquiries, all while adapting to different languages and maintaining a high standard of service. Additionally, they can manage high volumes of inquiries simultaneously, ensuring no customer is left waiting. By leveraging AI, businesses can not only meet customer expectations, but also enhance efficiency and reduce operational costs.

4 key benefits of contact center AI

Now that we’ve touched on what contact center AI is and how it can help businesses most, let’s go into the top benefits of implementing AI in your contact center.


1. Enhanced customer experience

AI is revolutionizing the customer experience through multiple transformative capabilities. By providing instant response times through always-on AI agents, customers no longer face frustrating queues or delayed support. These agentic AI agents deliver personalized interactions by analyzing customer history and preferences, offering tailored recommendations and maintaining context from previous conversations. And all this context is available to human agents, should the issues be escalated to them.

Problem resolution becomes more efficient through predictive analytics and intelligent routing, ensuring customers connect with the most qualified agents for faster first-call resolution.

The technology also maintains consistent service quality across all channels, offering standardized responses and multilingual support without additional staffing, even during peak times. AI takes customer service from reactive to proactive by identifying potential issues before they escalate, sending automated reminders, and suggesting relevant products based on customer behavior.

Perhaps most importantly, AI enables a seamless customer experience across all channels, maintaining conversation context across multiple touch points and facilitating smooth transitions between automated systems and human agents. This unified approach creates a more efficient, personalized, and satisfying customer experience that balances automated convenience with human expertise when needed.

2. Boosted agent productivity

AI automation significantly enhances agent productivity by taking over time-consuming routine tasks, such as call summarization, data entry, and follow-up scheduling.

By automating these repetitive processes, agents can save significant time, giving them more freedom to engage with customers on a deeper level. This shift allows agents to prioritize building meaningful relationships, addressing complex customer needs, and delivering a more personalized service experience, ultimately driving better outcomes for both the business and its customers.

3. Cost savings

Organizations can significantly cut operational expenses by leveraging automated interactions and improving agent processes. Automation allows businesses to handle much higher volumes of customer inquiries without the need to hire additional staff, reducing labor costs.

Optimized processes ensure that agents are deployed effectively, minimizing downtime and maximizing productivity. Together, these strategies help organizations save money while maintaining high levels of service quality.

4. Increased (and improved) data insights

Analytics on AI performance give businesses a deeper understanding of their operations by delivering actionable insights into customer interactions, agent performance, and operational efficiency.

These data-driven insights help identify trends, pinpoint areas for improvement, and make informed decisions that enhance both service quality and customer satisfaction. With continuous monitoring and analysis, businesses can adapt quickly to changing demands and maintain a competitive edge.

Implementation tips for getting started with contact center AI

If you want to add AI to your contact center, there are a handful of important decisions you need to make first that’ll determine your approach. Here are the most important ones to get you started.

1. Define your business objectives

Begin by assessing specific challenges and objectives, so that you can identify areas where automation could have the most significant impact later on—such as streamlining processes, reducing costs, or improving customer experiences.

Consider how AI can address these pain points and align with your long-term goals, but remember to start small. You just need one use case to get going. This allows you to test the solution in a controlled environment, gather valuable insights, and identify potential challenges.

2. Identify the best touch points in your customer journey

After you define your business objectives, you’ll want to identify the touch points within your customer journey that are best for AI. Within those touch points are End User Stories that will help you determine the data sources, escalation and automation paths, and success metrics that will lead you to significant outcomes. Our expert team of AI Engineers and Program Managers will help you map out the correct path.

3. Decide how you’ll acquire your AI: build, buy, or buy-to-build?

When choosing AI solutions, ensure they align with your organization’s size, industry, and specific requirements. Look at factors such as scalability to accommodate future growth, integration capabilities with your existing systems, and the level of vendor support offered.

It’s important to consider the solution’s ease of use, cost-effectiveness, and potential for customization to meet your unique needs. Another critical factor is observability, so you can avoid “black box AI” that’s nearly impossible to manage and improve.

You’ll also need to evaluate whether it’s best to buy an off-the-shelf solution, build a custom AI system tailored to your needs, or opt for a buy-to-build approach, which combines pre-built technology with customization options for greater flexibility.

4. Prep for human agent training at the outset

Invest in robust training programs to equip agents with the knowledge and skills needed to work effectively alongside AI tools. This includes developing expertise in areas where human input is crucial, such as managing complex emotional situations, problem-solving, and building rapport with customers.

5. Plan for integration and compatibility

Remember: Your AI will only be as good as the data and systems it can access. Verify compatibility with your existing systems, like CRM, ticketing platforms, or live chat tools. Integration with these systems is critical to the success of your contact center AI solution.

You’ll also want to plan how AI will integrate seamlessly into human agents’ daily tasks without disrupting their workflows, and include all relevant data within your project scope.

6. Establish monitoring and feedback loops

Before making any changes to your contact center, benchmark KPIs like first-call resolution, average handling time, and customer sentiment. Then regularly update and retrain the AI based on human agent and customer feedback, so you can experiment and prioritize the changes that matter most for your business.
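
As a sketch of what that benchmarking step might look like in practice (the record fields and sample numbers here are hypothetical), the baseline can be computed directly from your interaction logs:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Interaction:
    handle_time_sec: int            # total handling time for the contact
    resolved_first_contact: bool    # no repeat contact was needed
    csat: Optional[int] = None      # 1-5 survey score; None if no response

def benchmark_kpis(log: list) -> dict:
    """Compute pre-AI baseline KPIs to compare against post-rollout numbers."""
    n = len(log)
    rated = [i.csat for i in log if i.csat is not None]
    return {
        "fcr": sum(i.resolved_first_contact for i in log) / n,
        "aht_sec": sum(i.handle_time_sec for i in log) / n,
        # CSAT as the share of "satisfied" (4-5) survey responses
        "csat": sum(1 for s in rated if s >= 4) / len(rated),
    }

baseline = benchmark_kpis([
    Interaction(420, True, 5),
    Interaction(610, False, 3),
    Interaction(300, True),
])
print(baseline)
```

Running the same function on post-rollout logs gives you an apples-to-apples comparison for each KPI.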

7. Plan for scalability

Implement AI solutions in phases, beginning with just one or two specific use cases. Look for solutions designed to help your business scale by accommodating different communication channels and adapting to evolving technologies.

As you scale, focus agent development on skills that complement AI capabilities, so your team can deliver a seamless, empathetic, and personalized experience that enhances customer satisfaction.

Final thoughts on contact center AI

Contact center AI represents a true organizational transformation opportunity in customer support, offering unprecedented ways to improve efficiency while enhancing customer experiences. Rather than replacing human agents, it empowers them to work more effectively, focusing on high-value interactions that require emotional intelligence and complex problem-solving skills.

The future of customer support lies in finding the right balance between automated efficiency and human touch. Organizations that successfully move from a conversational AI contact center to fully generative AI experiences will see significant lifts in key metrics and will be well-positioned to meet evolving customer expectations.

How To Encourage More Customers To Use your Live Chat Service

When customer experience directors float the idea of investing more heavily in live chat for customer service, it’s not uncommon for them to get pushback. One of the biggest motivations for such reticence is uncertainty over whether anyone will actually want to use such support channels—and whether investing in them will ultimately prove worth it.

An additional headwind comes from the fact that many CX directors are laboring under the misapprehension that they need an elaborate plan to push customers into a new channel. But one thing we consistently hear from our enterprise customers is that it’s surprising how naturally customers start using a new channel when they realize it exists. To borrow a famous phrase from Field of Dreams, “If you build it, they will come.” Or, to paraphrase a bit, “If you build it (and make it easy for them to engage with you), they will come.” You don’t have to create a process that diverts them to the new channel.

The article below fleshes out and defends this claim. We’ll first sketch the big-picture case for why live chat with customers remains as important as ever, then finish with some tips for boosting customer engagement with your live chat service.

Why is Live Chat Important for Contact Centers?

Before we talk about how to get people to use your live chat for customer service features, let’s discuss why such channels continue to be an important factor in the success of customer experience companies.

The simplest way to do this is with data: 60% of customers indicate that they’re more likely to visit a website again if it has live chat for customer service, and slightly more (63%) say that a live chat widget increases their willingness to make a purchase.

But that still leaves the question of how live chat stacks up against other possible communication channels. Well, nearly three-quarters of customers (73%) are more comfortable using live chat for customer service issues than email or phone, and a majority (61%) say they’re especially annoyed by the prospect of being put on hold.

If this isn’t enough, there are customer satisfaction (CSAT) scores to think about as well. This is perhaps the strongest data point in support of customer live chat, as 87% of customers give a positive rating to their live chat conversations.

Agents also prefer live chat over the phone because regularly dealing with angry and upset customers via phone can take an emotional toll. Live chat contributes to agent job retention—a big, expensive issue that many CX leaders are constantly trying to grapple with.

So, the data is clear and it makes sense for all the reasons we’ve discussed: Live chat for customer service shows every indication of being a worthwhile communication channel, both now and in the future.

6 Tips for Encouraging Customers to Use Live Chat

With that having been said, the next few sections will detail some of the most promising strategies for getting more of your customers to use your live chat features.

1. Make Sure People Know You Have Live Chat Services

The first (and probably easiest) way to get more customers to use your live chat is to take every step possible to make sure they know it’s something you offer. Above, we argued that little special effort is required to get potential customers to use a new channel, but that shouldn’t be taken to mean that there’s no use in broadcasting its existence.

You can get a lot of mileage out of promoting live chat through your normal marketing channels: a mention on your support page, on your social feeds, and at the bottom of your order confirmation emails, for example. In the rest of this section, we’ll outline a few other low-cost ways to boost engagement with live chat for customer service.

First, use your IVR to move callers from phone to messaging. You can also mention that you support live chat for customer service during the phone hold message. We noted above that people tend to hate being put on hold. You can use that to your advantage by offering them the more attractive alternative of hopping onto a digital messaging channel instead—including WhatsApp, Apple Messages for Business, and SMS. For example, this might sound as simple as: “Press 2 to chat with an agent over SMS text messaging, or get faster support over live web chat on our website.”

From your perspective, an added benefit is that your agents can easily shuffle between several different live chat conversations, whereas that isn’t possible on the phone. This means faster resolutions, a higher volume of questions answered, and more satisfaction all the way around.

Similarly, include plenty of links to live chat when communicating with your customers. After they make a purchase, for example, you could include a message suggesting they utilize live chat to resolve any questions they have. If you’re sending them other emails, that’s a good place to highlight live chat as well. Don’t neglect hero pages and product pages; being able to answer questions while talking directly to current and future buyers is a great way to boost sales.

BODi® (formerly Beachbody) is a California-based nutrition and fitness company that pursued exactly this strategy when it ditched its older menu-based support system in favor of “Ask BODi AI.”

This eventually became a cost-effective support channel that was able to answer a variety of free-form questions from customers, leading to happier buyers and better financial performance.

2. Minimize the Hassle of Using Live Chat

One of the better ways of boosting engagement with any feature, including live chat, is to make it as pain-free as possible.

Take contact forms, for example, which can speed up time to resolution by organizing all the basic information a service agent needs. This is great when a customer has a complex issue, but if they only have a quick question, filling out even a simple contact form may be onerous enough to prevent them from asking it.

There’s a bit of a balancing act here, but, in general, the fewer fields a contact form has, the more likely someone is to fill it out.

The emergence of large language models (LLMs) has made it possible to use an AI assistant to collect information about customers’ specific orders or requests. When such an assistant detects that a request is complex and needs human attention, it can ask for the necessary information to pass along to an agent. This turns the traditional contact form into a conversation, placing it further along in the customer service journey so only those customers who need to fill it out will have to use it.
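
A rough sketch of that flow, with a keyword check standing in for the LLM's complexity judgment (the field names and signal words are illustrative, not any real product's API):

```python
REQUIRED_FIELDS = ["name", "order_id", "issue_summary"]

# Signals that a request likely needs human judgment; purely illustrative.
COMPLEX_SIGNALS = {"refund", "damaged", "complaint", "legal", "cancel"}

def needs_human(message: str) -> bool:
    """Crude stand-in for an LLM classifier deciding whether to escalate."""
    return any(signal in message.lower() for signal in COMPLEX_SIGNALS)

def next_question(collected: dict):
    """Ask only for details the conversation hasn't already surfaced,
    turning the traditional contact form into a short dialogue."""
    for field in REQUIRED_FIELDS:
        if field not in collected:
            return f"Could you share your {field.replace('_', ' ')}?"
    return None  # everything an agent needs has been gathered

message = "My package arrived damaged and I'd like a refund"
if needs_human(message):
    collected = {"issue_summary": message}
    print(next_question(collected))  # asks for the customer's name next
```

Customers with simple questions never see the form at all; only escalated conversations trigger the question-by-question collection.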

Or take something as prosaic as the location and prominence of your ‘live chat’ button. Is it easy to find, or is it buried so deep you’d need Indiana Jones to dig it out? Does it pop out proactively to engage potential or returning customers with contextual messaging based on what they’re browsing?

It’s also worth briefly mentioning that the main value prop of rich messaging content (like carousel cards, buttons, and quick replies) is that it creates much less friction for the consumer. We have a dedicated section on rich messaging below that spells this out in more detail.

Though they may seem minor in isolation, there’s an important truth here: if you want to get more people to use your live chat for customer service, make it easy and pain-free for them to do so. Every additional second of searching or fiddling means another lost opportunity.

3. Personalize Your Chat

Another way to make live chat for customer service more attractive is to personalize your interactions. Personalization can be anything from including an agent’s name and picture in the chat interface displayed on your webpage to leveraging an LLM to craft a whole bespoke context for each conversation.

For our purposes, the two big categories of personalization are brand-specific personalization and customer-specific personalization. Let’s discuss each.

Brand-specific personalization

For the former, marketing and contact teams should collaborate to craft notifications, greetings, etc., to fit their brand’s personality. Chat icons often feature an introductory message such as “How can I help you?” to let browsers know their questions are welcome. This is a place for you to set the tone for the rest of a conversation, and such friendly wording can encourage people to take the next step and type out a message.

More broadly, these departments should also develop a general tone of voice for their service agents. While there may be some scripted language in customer service interactions, most customers expect human support specialists to act like humans. And, since every request or concern is a little different, agents often need to change what they say or how they say it.

This is no less true for buyers on different parts of your site. Customer questions will be different depending on whether they’re on a checkout page, a product page, or the help center because they are at very distinct points in their buying journey. It’s important to contextualize any proactive messaging and the conversational flow itself to accommodate this (e.g., “Need help checking out? We’ve got live agents standing by.” versus “Have questions about this product? Try asking me”).

Setting rules for tone of voice and word choice ensures the messaging experience is consistent no matter which agent helps a customer or what the conversation is about.

Customer-specific personalization

Then, there’s customer-specific personalization, which might involve something as simple as using their name, or extend to drawing from their purchase history to include the specifics of the order they’re asking about.

Once upon a time, this work fell almost entirely to human contact centers, but no more! Among the many things that today’s LLMs excel at is personalization. Machine learning has long been used to personalize recommendations (think: Netflix learning what kinds of shows you like), but when LLMs are turbo-charged with a technique like retrieval-augmented generation (which allows them to use validated data sources to inform their replies to questions), the results can be astonishing.

Machine-based personalization and retrieval-augmented generation are both big subjects, and you can read through the links for more context. But the high-level takeaway is that, together, they facilitate the creation of a seamless and highly personalized experience across your communication channels using the latest advances in AI. Customers will feel more comfortable using your live chat feature, and will grow to feel a connection with your brand over time.
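
A stripped-down illustration of the retrieval-augmented pattern (word-overlap scoring stands in for the embedding similarity real systems use, and the knowledge-base snippets are invented):

```python
import re

def tokens(text: str) -> set:
    """Normalize text to a set of lowercase word tokens."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, docs: list, k: int = 2) -> list:
    """Rank validated knowledge-base snippets by word overlap with the query.
    Production systems use embedding similarity; overlap keeps this stdlib-only."""
    return sorted(docs, key=lambda d: len(tokens(query) & tokens(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list) -> str:
    """Ground the model's reply in retrieved snippets instead of its memory."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

kb = [
    "Returns are accepted within 30 days of delivery.",
    "Live chat support is available 9am-9pm ET.",
    "Orders ship within 2 business days.",
]
prompt = build_prompt("When does my order ship?", kb)
# The shipping-policy snippet now appears in the prompt, ready for the LLM call.
```

Because the model answers from the retrieved, validated snippets rather than its training data, replies stay specific to your policies and to the customer's actual question.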

4. Include Privacy and Data Usage Messages

As of this writing, news recently broke that a data breach may have exposed close to three billion records containing personal information, including social security numbers. You’re no doubt familiar with a bevy of similar stories, which have been pouring forth since more or less the moment people started storing their private data online.

And yet, for the savvy customer experience director, this is an opportunity; by taking privacy very seriously, you can distinguish yourself and thereby build trust.

Customers visiting your website want an assurance that you will take every precaution with their private information, and this can be provided through easy-to-understand data privacy policies and customizable cookie preferences.

Live messaging tools can add a wrinkle because they are often powered by third-party software. Customer service messaging can also require a lot of personal information, making some users hesitant to use these tools.

You can quell these concerns by elucidating how you handle private customer data. When a message like this appears at the start of a new chat, is always accessible via the header, or persists in your chat menu, customers can see how their data is safeguarded and feel secure while entering personal details.

An additional wrinkle comes from the increasing ubiquity of tools based on generative AI. Many worry that any information provided to a model might be used to “train” that model, thus increasing the chances that it’ll be leaked in the future. The best way to avoid this calamity is to partner with a conversational AI for CX platform that works tirelessly to ensure that your customers’ data is never used in this way.

That said, whatever you do, make sure your AI assistants have messages designed to handle requests about privacy and security. Someone will ask eventually, and it’s good to be prepared.

5. Use Rich Messages

Smartphones have become a central hub for browsing the internet, shopping, socializing, and managing daily activities. As text messaging gradually supplanted most of our other ways of communicating, it became obvious that an upgrade was needed.

This led to the development of rich messaging channels and protocols such as Apple Messages for Business, WhatsApp, and Rich Communication Services (RCS), the successor to SMS on Android. These channels feature enhancements like buttons, quick replies, and carousel cards, all designed to make interactions easier and faster for the customer.

For all these reasons, using rich messaging in live chat with customers will likely help boost engagement. Customers are accustomed to seeing emojis now, and you can include them as a way of humanizing and personalizing your interactions. There might be contexts in which they need to see graphics or images, which is very difficult with the old Short Messaging Service (SMS).

In the final analysis, rich messaging offers another powerful opportunity to create the kind of seamless experience that makes interacting with your support enjoyable and productive.

6. Separate Chat and Agent Availability

Once upon a time, ‘chat availability’ simply meant the same thing as ‘agent availability,’ but today’s language models are rapidly becoming capable enough to resolve a wide variety of issues on their own. In fact, one of the major selling points of AI assistants is that they provide round-the-clock service because they don’t need to eat, sleep, or take bathroom breaks.

This doesn’t mean that they can be left totally alone, of course. Humans still need to monitor their interactions to make sure they’re not being rude or hallucinating false information. But this is also something that becomes much easier when you pair with an industry-leading conversational AI for CX platform that has robust safeguards, monitoring tools, and the ability to switch between different underlying models (in case one starts to act up).

Having said that, there is still a wide variety of tasks for which a live agent is the best choice. For this reason, many companies have specific time windows when live chat for customer service is available. When it’s not, some choose to let customers know when live chat is an option by communicating the next availability window.

In practice, users will often simply close their tabs if they can’t talk to a person, cutting the interaction off before it begins. In our view, the best course is usually to shift the conversation to an asynchronous channel where it can be handled by an AI assistant able to hand the chat off to an agent when one becomes available.

Employing these two strategies means that your ability to serve customers is decoupled from the operational constraints of agent availability, and you’re always ready to seize the opportunity to engage customers when they’re eager to connect with your brand.

Creating Greater CX Outcomes with Live Web Chat Is Just the Start

Live web chat with customers remains an excellent way to resolve issues while building trust and boosting the overall customer experience. The best strategies for increasing engagement with your live chat are to make sure people know it’s an option, make it easy to use, personalize interactions where possible, and make the most of AI to automatically resolve routine inquiries while filling in live agent availability gaps.

If you’re interested in taking additional steps to resolve common customer service pain points, check out our ebook on the subject. It features a number of straightforward, actionable strategies to help you keep your customers as happy as possible!

Google Business Messaging is Ending – Here’s How You Should Adapt

Google Business Messaging (GBM) has long been one of the primary rich messaging channels for Android, but it’s now in the process of being phased out.

GBM is being sunsetted, but that doesn’t mean your customer experience has to suffer. This piece will walk you through the main alternatives to GBM, ensuring you have everything you need to keep your organization running smoothly.

What’s Happening with Google Business Messaging Exactly?

According to an announcement from Google, Google Business Messaging will be phased out on the following schedule. First, starting July 15, 2024, GBM entry points will disappear from Google’s Maps and Search properties, and it will no longer be possible to start GBM conversations from entry points on your website. Existing conversations will be able to continue until July 31, 2024, when the GBM service will be shut down entirely.

What are the Alternatives to Google Business Messaging?

If you’re wondering which communication channel you should switch to now that GBM is going away, here are some you should consider. They’re divided into two groups. Group one consists of the channels we personally recommend, based on our years of experience in customer service and contact center management. Group two covers communication channels that we still support but which, in our view, are not as promising as alternatives to GBM.

Recommended Alternatives to Google Business Messaging

Here are the best channels to serve as replacements for GBM:

  • WhatsApp: WhatsApp enables text, voice, and video communications for over two billion global users. The platform includes several built-in features that appeal to businesses looking to forge deeper, more personal connections with their customers. Most importantly, it is a cross-platform messaging app, meaning it will allow you to chat with both Android and Apple users.
  • Text Messaging or Short Message Service (SMS): SMS is a long-standing staple for a reason, and with a conversational AI platform like Quiq, you can put large language models to work automating substantial parts of your SMS-based customer interactions.

Other Alternatives to Google Business Messaging

Here are the other channels you might look into:

  • Live web chat: When wondering about whether to invest in live chat support, customer experience directors may encounter skepticism about how useful customers will find it. But with nearly a third of female users of the internet indicating that they prefer contacting support via live chat, it’s clearly worth the time. This is especially true when live chat is used to provide an interactive experience, readily available, helpful agents, and swift responses. There are plenty of ways to encourage your customers to actually use your live chat offering, including mentioning it during phone calls, linking to it in blog posts or emails, and promoting it on social media.
  • Apple Messages for Business: Unlike standard text messaging available on mobile phones, Apple Messages is a specialized service designed for businesses to engage with customers. It facilitates easy setup of touchpoints such as QR codes, apps, or email messages, enabling appointments, issue resolution, and payments, among other things.
  • Facebook Messenger: Facebook Messenger for Business enables brands to handle incoming queries efficiently, providing immediate responses through AI assistants or routing complex issues to human agents. Clients integrating with a tool like Quiq have seen a massive ROI: a 95% customer satisfaction (CSAT) score and a 70–80% automatic resolution rate for incoming customer inquiries, among other gains. Like WhatsApp, Facebook Messenger is a cross-platform messaging app, meaning it can help you reach users on both Android and Apple devices.
  • Instagram: Instagram isn’t just for posting pictures anymore: your target audience is likely using it to discover brands, shop, and make purchases. They’re reaching out through direct messages (DMs), responding to stories, and commenting on posts. Instagram’s messaging API simplifies the handling of these customer interactions; it has automated features that help initiate conversations, such as Ice Breakers, as well as features that facilitate automated responses, such as Quick Replies. Integrating Quiq’s conversational AI with Instagram’s messaging API makes it easier to automate responses to frequently asked questions, thereby reducing the workload on your human agents.
  • X (formerly Twitter): With nearly 400 million registered users and native, secure payment options, X is not a platform you can ignore. And the data supports this: 50% of surveyed X users mentioned brands in their posts more than 15 times in seven months, 80% of surveyed X users referred to a brand in their posts, and 99% of X users encountered a brand-related post in just over a month. By utilizing X business messaging, you can connect with your customers directly, providing them with excellent service experiences. Over time, this approach helps you build strong relationships and positive brand perceptions. Remember, posts—even those related to customer service—occur publicly. Thus, a positive interaction satisfies your customer and showcases your company’s engagement quality to others. Even better, the X API enables you to send detailed messages while keeping the conversation within X’s platform. This avoids the need for customers to switch platforms, enhancing their overall satisfaction.

How to Switch Away From Google Business Messaging

Even though GBM is going the way of the Dodo, the good news is that you have tons of other options. Check out our dedicated pages to learn more about SMS, WhatsApp, and Facebook Messenger, and you’re warmly invited to consult with our team if you are currently using GBM with another managed service provider and are not sure what the best direction forward is!

9 Top Customer Service Challenges — and How to Overcome Them

It’s a shame that customer service doesn’t always get the respect and attention it deserves because it’s among the most important ingredients in any business’s success. There’s no better marketing than an enthusiastic user base, so every organization should strive to excel at making customers happy.

Alas, this is easier said than done. When someone comes to you with a problem, they can be angry, stubborn, mercurial, and—let’s be honest—extremely frustrating. Some of this just comes with the territory, but some stems from the fact that many customer service professionals simply don’t have a detailed, high-level view of customer service challenges or how to overcome them.

That’s what we’re going to remedy in this post. Let’s jump right in!

What are The Top Customer Service Challenges?

After years of running a generative AI platform for contact centers and interacting with leaders in this space, we have discovered that the top customer service challenges are:

  1. Understanding Customer Expectations
  2. Exceeding Customer Expectations
  3. Dealing with Unreasonable Customer Demands
  4. Improving Your Internal Operations
  5. Not Offering a Preferred Communication Channel
  6. Not Offering Real-Time Options
  7. Handling Angry Customers
  8. Dealing With a Service Outage Crisis
  9. Retaining, Hiring, and Training Service Professionals

In the sections below, we’ll break each of these down and offer strategies for addressing them.

1. Understanding Customer Expectations

No matter how specialized a business is, it will inevitably cater to a wide variety of customers. Every customer has different desires, expectations, and needs regarding a product or service, which means you need to put real effort into meeting them where they are.

One of the best ways to foster this understanding is to remain in consistent contact with your customers. Deciding which communication channels to offer customers depends a great deal on the kinds of customers you’re serving. That said, in our experience, text messaging is a universally successful method of communication because it mimics how people communicate in their personal lives. The same goes for web chat and WhatsApp.

Beyond this, setting the right expectations upfront is another good way to address common customer service challenges. For example, if you are not available 24/7, only provide support via email, or don’t have dedicated account managers, you should make that clear right at the beginning.

Nothing will make a customer angrier than thinking they can text you only to realize that’s not an option in the middle of a crisis.

2. Exceeding Customer Expectations

Once you understand what your customers want and need, the next step is to go above and beyond to make them happy. Everyone wants to stand out in a fiercely competitive market, and going the extra mile is a great way to do that. One of the major customer service challenges is knowing how to do this proactively, but there are many ways you can succeed without a huge amount of effort.

Consider a few examples, such as:

  • Treating the customer as you would a friend in your personal life, i.e. by apologizing for any negative experiences and empathizing with how they feel;
  • Offering a credit or discount for a future purchase;
  • Sending them a card referencing their experience and thanking them for being a loyal customer.

The key is making sure they feel seen and heard. If you do this consistently, you’ll exceed your customers’ expectations, and the chances of them becoming active promoters of your company will increase dramatically.

3. Dealing with Unreasonable Demands

Of course, sometimes a customer has expectations that simply can’t be met, and this, too, counts as one of the serious customer service challenges. Customer service professionals often find themselves in situations where someone wants a discount that can’t be given, a feature that can’t be built, or a bespoke customization that can’t be done, and they wonder what they should do.

The only thing to do in this situation is to gently let the customer down, using respectful and diplomatic language. Something like, “We’re really sorry we’re not able to fulfill your request, but we’d be happy to help you choose an option that we currently have available” should do the trick.

4. Improving Your Internal Operations

Customer service teams face constant pressure to improve efficiency, maintain high CSAT scores, drive revenue, and keep the cost of serving customers low. This matters a lot; slow response times and being bounced from one department to another are two of the more common complaints contact centers get from irate customers, and both are fixable with appropriate changes to your procedures.

Improving contact center performance is among the thorniest customer service challenges, but there’s no reason to give up hope!

One thing you can do is gather and utilize better data regarding your internal workflows. Data has been called “the new oil,” and with good reason—when used correctly, it’s unbelievably powerful.

What might this look like?

Well, you are probably already tracking metrics like first contact resolution (FCR) and average handle time (AHT), but this is easier when you have a unified, comprehensive dashboard that gives you quick insight into what’s happening across your organization.
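To make those two metrics concrete, here’s a minimal sketch of how FCR and AHT might be computed from raw ticket records. The field names and figures are invented for illustration; they aren’t any particular platform’s schema.

```python
from datetime import timedelta

# Hypothetical ticket records; the fields are illustrative only.
tickets = [
    {"handle_seconds": 420, "contacts_to_resolve": 1},
    {"handle_seconds": 610, "contacts_to_resolve": 2},
    {"handle_seconds": 300, "contacts_to_resolve": 1},
]

# First contact resolution: share of tickets resolved on the very first contact.
fcr = sum(t["contacts_to_resolve"] == 1 for t in tickets) / len(tickets)

# Average handle time: mean handling duration across all tickets.
aht = timedelta(seconds=sum(t["handle_seconds"] for t in tickets) / len(tickets))

print(f"FCR: {fcr:.0%}, AHT: {aht}")
```

A dashboard is essentially doing this bookkeeping continuously, across every channel, so you don’t have to assemble it by hand.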

You might also consider leveraging the power of generative AI, which has led to AI assistants that can boost agent performance in a variety of different tasks. You have to tread lightly here because too much bad automation will also drive customers away. But when you use technology like large language models according to best practices, you can get more done and make your customers happier while still reducing the burden on your agents.

5. Not Offering a Preferred Communication Channel

Contact centers often deal with customer service challenges stemming from new technologies. One way this manifests is the need to cultivate new channels in line with changing patterns in the way we all communicate.

You can probably see where this is going – something like 96% of Americans have some kind of cell phone, and if you’ve looked up from your own phone recently, you’ve probably noticed everyone else glued to theirs.

It isn’t just that customers now want to be able to text you instead of calling or emailing; the ubiquity of cell phones has changed their basic expectations. They now take it for granted that your agents will be available round the clock, that they can chat with an agent asynchronously as they go about other tasks, etc.

We can’t tell you whether it’s worth investing in multiple communication channels for your industry. But based on our research, we can tell you that having multiple channels—and text messaging in particular—is something most people want and expect.

6. Not Offering Real-Time Options

When customers reach out asking for help, their customer service problems likely feel unique to them. But since you have so much more context, you’re aware that a very high percentage of inquiries fall into a few common buckets, like “Where is my order?”, “How do I handle a return?”, “My item arrived damaged, how can I exchange it for a new one?”, etc.

These and similar inquiries can easily be resolved instantly using AI, leaving customers and agents happier and more productive.

7. Handling Angry Customers

A common story in the customer service world involves an interaction going south and a customer getting angry.

Gracefully handling angry customers is one of those perennial customer service challenges; the very first merchants had to deal with angry customers, and our robot descendants will be dealing with angry customers long after the sun has burned out.

Whenever you find yourself dealing with a customer who has become irate, there are two main things you have to do:

  1. Empathize with them
  2. Do not lose your cool

It can be hard to remember, but the customer isn’t frustrated with you; they’re frustrated with the company and its products. If you keep your responses calm and rooted in the facts of the situation, you’ll always be moving toward a solution.

8. Dealing With a Service Outage Crisis

Sometimes, our technology fails us. The wifi isn’t working on the airplane, a cell phone tower is down following a lightning storm, or that printer from Office Space jams so often it starts to drive people insane.

As a customer service professional, you might find yourself facing the wrath of your customers if your service is down. Unfortunately, in a situation like this, there’s not much you can do except honestly convey to your customers that your team is putting all their effort into getting things back on track. You should go into these conversations expecting frustrated customers, but make sure you avoid the temptation to overpromise.

Talk with your tech team and give customers a realistic timeline; don’t assure them it’ll be back in three hours if you have no way to back that up. Though Elon Musk seems to get away with it, the worst thing the rest of us can do is repeatedly promise unrealistic timelines and miss the mark.

9. Retaining, Hiring, and Training Service Professionals

You may have seen this famous Maya Angelou quote, which succinctly captures what the customer service business is all about:

“I’ve learned that people will forget what you said, people will forget what you did, but people will never forget how you made them feel.”

Learning how to comfort or reassure a person is high on the list of customer service challenges, and it’s something that should certainly be covered in your training for new agents.

But training is also important because it eases the strain on agents and reduces turnover. For customer service professionals, median tenure at a single company is less than a year, and every time someone leaves, that means finding a replacement, training them, and hoping they don’t head for the exits before your investment has paid off.

Keeping your agents happy will save you more money than you imagine, so invest in a proper training program. Ensure they know what’s expected of them, how to ask for help when needed, and how to handle challenging customers.

Final Thoughts on the Top Customer Service Challenges

Customer service challenges abound, but with the right approach, there’s no reason you shouldn’t be able to meet them head-on!

Check out our report for a more detailed treatment of three major customer service challenges and how to resolve them. Between the report and this post, you should be armed with enough information to identify your own internal challenges, fix them, and rise to new heights.

Request A Demo

5 Tips for Coaching Your Contact Center Agents to Work with AI

Generative AI has enormous potential to change the work done at places like contact centers. For this reason, we’ve spent a lot of energy covering it, from deep dives into the nuts and bolts of large language models to detailed advice for managers considering adopting it.

Here, we will provide tips on using AI tools to coach, manage, and improve your agents.

How Will AI Make My Agents More Productive?

Contact centers can be stressful places to work, but much of that stems from a paucity of good training and feedback. If an agent doesn’t feel confident in assuming their responsibilities or doesn’t know how to handle a tricky situation, that will cause stress.

Tip #1: Make Collaboration Easier

With the right AI tools for coaching agents, you can get state-of-the-art collaboration tools that allow agents to invite their managers or colleagues to silently appear in the background of a challenging issue. The customer never knows there’s a team operating on their behalf, but the agent won’t feel as overwhelmed. These same tools also let managers dynamically monitor all their agents’ ongoing conversations, intervening directly if a situation gets out of hand.

Agents can learn from these experiences and improve their performance over time.

Tip #2: Use Data-Driven Management

Speaking of improvement, a good AI platform will have resources that help managers get the most out of their agents in a rigorous, data-driven way. Of course, you’re probably already monitoring contact center metrics, such as CSAT and FCR scores, but this barely scratches the surface.

What you really need is a granular look into agent interactions and their long-term trends. This will let you answer questions like “Am I overstaffed?” and “Who are my top performers?” This is the only way to run a tight ship and keep all the pieces moving effectively.

Tip #3: Use AI To Supercharge Your Agents

As its name implies, generative AI excels at generating text, and there are several ways this can improve your contact center’s performance.

To start, these systems can sometimes answer simple questions directly, which reduces the demands on your team. Even when that’s not the case, however, they can help agents draft replies, or clean up already-drafted replies to correct errors in spelling and grammar. This, too, reduces their stress, but it also contributes to customers having a smooth, consistent, high-quality experience.

Tip #4: Use AI to Power Your Workflows

A related (but distinct) point concerns how AI can be used to structure the broader work your agents are engaged in.

Let’s illustrate using sentiment analysis, which makes it possible to assess the emotional state of a person doing something like filing a complaint. This can form part of a pipeline that sorts and routes tickets based on their priority, and it can also detect when an issue needs to be escalated to a skilled human professional.
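As a rough sketch of that pipeline, here’s a toy sentiment-based router. The keyword scorer below is a deliberately crude stand-in for a real sentiment model, and the queue names and threshold are invented for illustration.

```python
# Words that crudely signal frustration; a real system would use a trained
# sentiment model rather than a keyword list.
NEGATIVE_WORDS = {"angry", "terrible", "broken", "refund", "unacceptable"}

def sentiment_score(message: str) -> float:
    """Toy score: fraction of words that signal frustration."""
    words = message.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,!?") in NEGATIVE_WORDS for w in words) / len(words)

def route_ticket(message: str) -> str:
    """Send clearly frustrated customers to a human; route the rest to AI."""
    if sentiment_score(message) > 0.2:
        return "human_escalation"
    return "ai_assistant"

print(route_ticket("This is unacceptable, my order arrived broken!"))
print(route_ticket("Where can I find my order status?"))
```

The design point is the split itself: a cheap classification step up front decides whether an interaction needs a skilled human or can be handled automatically.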

Tip #5: Train Your Agents to Use AI Effectively

It’s easy to get excited about what AI can do to increase your efficiency, but you mustn’t lose sight of the fact that it’s a complex tool your team needs to be trained to use. Otherwise, it’s just going to be one more source of stress.

You need policies covering the situations in which it’s appropriate to use AI and those in which it’s not. These policies should address how agents should deal with phenomena like “hallucination,” in which a language model fabricates information.

They should also contain procedures for monitoring the performance of the model over time. Because these models are stochastic, they can generate surprising output, and their behavior can change.

You need to know what your model is doing to intervene appropriately.

Wrapping Up

Hopefully, you’re more optimistic about what AI can do for your contact center, and this has helped you understand how to make the most out of it.

If there’s anything else you’d like to go over, you’re always welcome to request a demo of the Quiq platform. Since we focus on contact centers, we take customer service pretty seriously ourselves, and we’d love to give you the context you need to make the best possible decision!

Request A Demo

Leveraging Agent Insights to Boost Efficiency and Performance

In the ever-evolving customer service landscape, the role of contact center agents cannot be overstated. As the frontline representatives of a company, their performance directly impacts the quality of customer experience, influencing customer loyalty and brand reputation.

However, the traditional approach to managing agent performance – relying on periodic reviews and supervisor observations – has given way to a more sophisticated, data-driven strategy. For this reason, managing agent performance with a method that leverages the rich data generated by agent interactions to enhance service delivery, agent satisfaction, and operational efficiency is becoming more important all the time.

This article delves into this approach. We’ll begin by examining its benefits from three critical perspectives – the customer, the agent, and the contact center manager – before turning to a more granular breakdown of how you can leverage it in your contact center.

Why is it Important to Manage Agent Performance with Insights?

First, let’s start by justifying this project. While it’s true that very few people today would doubt the need to track some data related to what agents are doing all day, it’s still worth saying a few words about why it really is a crucial part of running a contact center.

To do this, we’ll focus on how three groups are impacted when agent performance is managed through insights: customers, the agents themselves, and contact center managers.

It’s Good for the Customers

The primary beneficiary of improved agent performance is the customer. Contact centers can tailor their service strategies by analyzing agent metrics to better meet customer needs and preferences. This data-driven approach allows for identifying common issues, customer pain points, and trends in customer behavior, enabling more personalized and effective interactions.

As agents become more adept at addressing customer needs swiftly and accurately, customer satisfaction levels rise. This enhances the individual customer’s experience and boosts the overall perception of the brand, fostering loyalty and encouraging positive word-of-mouth.

It’s Good for the Agents

Agents stand to gain immensely from a management strategy focused on data-driven insights. Firstly, performance feedback based on concrete metrics rather than subjective assessments leads to a fairer, more transparent work environment.

Agents receive specific, actionable feedback that helps them understand their strengths and which areas need improvement. This can be incredibly motivating and can drive them to begin proactively bolstering their skills.

Furthermore, insights from performance data can inform targeted training and development programs. For instance, if data indicates that an agent excels in handling certain inquiries but struggles with others, their manager can provide personalized training to bridge this gap. This helps agents grow professionally and increases their job satisfaction as they become more competent and confident in their roles.

It’s Good for Contact Center Managers

For those in charge of overseeing contact centers, managing agents through insights into their performance offers a powerful tool for cultivating operational excellence. It enables a more strategic approach to workforce management, where decisions are informed by data rather than gut feeling.

Managers can identify high performers and understand the behaviors that lead to success, allowing them to replicate these practices across the team. Intriguingly, this same mechanism is also at play in the efficiency boost seen by contact centers that adopt generative AI. When such centers train a model on the interactions of their best agents, the knowledge in those agents’ heads can be incorporated into the algorithm and utilized by much more junior agents.

The insights-driven approach also aids in resource allocation. By understanding the strengths and weaknesses of their team, managers can assign agents to the tasks they are most suited for, optimizing the center’s overall performance.

Additionally, insights into agent performance can highlight systemic issues or training gaps, providing managers with the opportunity to make structural changes that enhance efficiency and effectiveness.

Moreover, using agent insights for performance management supports a culture of continuous improvement. It encourages a feedback loop where agents are continually assessed, supported, and developed, driving the entire team towards higher performance standards. This improves the customer experience and contributes to a positive working environment where agents feel valued and empowered.

In summary, managing performance by tracking agent metrics is a holistic strategy that enhances the customer experience, supports agent development, and empowers managers to make informed decisions.

It fosters a culture of transparency, accountability, and continuous improvement, leading to operational excellence and elevated service standards in the contact center.

How to Use Agent Insights to Manage Performance

Now that we know what all the fuss is about, let’s turn to addressing our main question: how to use agent insights to correct, fine-tune, and optimize agent performance. This discussion will center specifically around Quiq’s Agent Insights tool, which is a best-in-class analytics offering that makes it easy to figure out what your agents are doing, where they could improve, and how that ultimately impacts the customers you serve.

Managing Agent Availability

To begin with, you need a way of understanding when your agents are free to handle an issue and when they’re preoccupied with other work. The three basic statuses an agent can have are “available,” “current conversations” (i.e. only working on the current batch of conversations), and “unavailable.” All three of these can be seen through Agent Insights, which allows you to select from over 50 different metrics, customizing and saving different views as you see fit.

The underlying metrics that go into understanding this dimension of agent performance are, of course, time-based. In essence, you want to evaluate the ratios between four quantities: the time the agent is available, the time the agent is online, the time the agent spends in a conversation, and the time an agent is unavailable.

As you’re no doubt aware, you don’t necessarily want to maximize the amount of time an agent spends in conversations, as this can quickly lead to burnout. Rather, you want to use these insights into agent performance to strike the best, most productive balance possible.
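As a rough illustration of those ratios, here’s a minimal sketch with made-up durations. This isn’t Agent Insights’ actual calculation, just the arithmetic the paragraph above describes.

```python
# All durations in minutes over one hypothetical shift; the numbers are invented.
online = 480            # total time logged in
available = 300         # time marked available
in_conversation = 260   # time actively handling conversations
unavailable = online - available

# Occupancy: how busy agents are while available. Pushing this toward 100%
# invites burnout, so the goal is balance, not maximization.
occupancy = in_conversation / available

# Availability rate: share of the shift spent ready to take conversations.
availability_rate = available / online

print(f"Occupancy: {occupancy:.1%}, Availability: {availability_rate:.1%}")
```

Tracking these ratios over weeks, rather than as one-off snapshots, is what lets you find the balance point instead of chasing a single number.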

Managing Agent Workload

A related phenomenon you want to understand is the kind of workload your agents are operating under. The five metrics that underpin this are:

  1. Availability
  2. Number of completions per hour your agents are managing
  3. Overall utilization (i.e. the percentage of an agent’s available conversation limit they have filled in a given period)
  4. Average workload
  5. The amount of time agents spend waiting for a customer response

All of this can be seen in Agent Insights. This view allows you to home in on specific parts of your operation. You can sort by the amount of time your agents spend waiting for a reply from a customer, for example, or segment agents by, say, role. If you’re seeing high waiting and low utilization, that’s a sign you are overstaffed.

If you’re seeing high waiting and high utilization, by contrast, you should make sure your agents know inactive conversations should be marked appropriately.

As with the previous section, you’re not necessarily looking to minimize availability or maximize completions per hour. You want to make sure that agents are working at a comfortable pace, and that they have time between issues to reflect on how they’re doing and think about whether they want to change anything in their approach.

But with proper data-driven insights, you can do much more to ensure your agents have the space they need to function optimally.

Managing Agent Efficiency

Speaking of functioning optimally, the last thing we want to examine is agent efficiency. By using Agent Insights, we can answer questions such as how well new agents are adjusting to their roles, how well your teams are working together, and how you can boost each agent’s output (without working them too hard).

The field of contact center analytics is large, but in the context of agent efficiency, you’ll want to examine metrics like completion rate, completions per hour, reopen rate, missed response rate, missed invitation rate, and any feedback customers have left after interacting with your agents.

This will give you an unprecedented peek into the moment-by-moment actions agents are taking, and furnish you with the hard facts you need to help them streamline their procedures. Imagine, for example, you’re seeing a lot of keyboard usage. This means the agent is probably not operating as efficiently as they could be, and you might be able to boost their numbers by training them to utilize Quiq’s Snippets tool.

Or, perhaps you’re seeing a remarkably high rate of clipboard usage. In that case, you’d want to look over the clipboard messages your agents are using and consider turning them into snippets, where they’d be available to everyone.

The Modern Approach to Managing Agents

Embracing agent insights for performance management marks a transformative step towards achieving operational excellence in contact centers. This data-driven approach not only elevates the customer service experience but also fosters a culture of continuous improvement and empowerment among agents.

By leveraging tools like Quiq’s Agent Insights, managers can unlock a comprehensive understanding of agent availability, workload, and efficiency, enabling informed decisions that benefit both the customer and the service team.

If you’re intrigued by the possibilities, contact us to schedule a demo today!

Request A Demo

6 Questions to Ask Generative AI Vendors You’re Evaluating

With all the power exhibited by today’s large language models, many businesses are scrambling to leverage them in their offerings. Enterprises in a wide variety of domains – from contact centers to teams focused on writing custom software – are adding AI-backed functionality to make their users more productive and the customer experience better.

But, in the rush to avoid being the only organization not using the hot new technology, it’s easy to overlook certain basic sanity checks you must perform when choosing a vendor. Today, we’re going to fix that. This piece will focus on several of the broad categories of questions you should be asking potential generative AI providers as you evaluate all your options.

This knowledge will give you the best chance of finding a vendor that meets your requirements, will help you with integration, and will ultimately allow you to better serve your customers.

These Are the Questions You Should Ask Your Generative AI Vendor

Training large language models is difficult. Besides requiring an incredible amount of computing power, there are hundreds of small engineering optimizations that need to be made along the way. This is part of the reason the various language model vendors differ from one another.

Some have a longer context window, others write better code but struggle with subtle language-based tasks, etc. All of this needs to be factored into your final decision because it will impact how well your vendor performs for your particular use case.

In the sections that follow, we’ll walk you through some of the questions you should raise with each vendor. Most of these questions are designed to help you get a handle on how easy a given offering will be to use in your situation, and what integrating it will look like.

1. What Sort of Customer Service Do You Offer?

We’re contact center and customer support people, so we understand better than anyone how important it is to make sure users know what our product is, what it can do, and how we can help them if they run into issues.

As you speak with different generative AI vendors, you’ll want to probe them about their own customer support, and what steps they’ll take to help you utilize their platform effectively.

For this, just start with the basics by figuring out the availability of their support teams – what hours they operate in, whether they can accommodate teams working in multiple time zones, and whether there is an option for 24/7 support if a critical problem arises.

Then, you can begin drilling into specifics. One thing you’ll want to know about is the channels their support team operates through. They might set up a private Slack channel with you so you can access their engineers directly, for example, or they might prefer to work through email, a ticketing system, or a chat interface. When you’re discussing this topic, try to find out whether you’ll have a dedicated account manager to work with.

You’ll also want some context on the issue resolution process. If you have a lingering problem that’s not being resolved, how do you go about escalating it, and what’s the team’s response time for issues in general?

Finally, it’s important that the vendors have some kind of feedback mechanism. Just as you no doubt have a way for clients to let you know if they’re dissatisfied with an agent or a process, the vendor you choose should offer a way for you to let them know how they’re doing so they can improve. This not only tells you they care about getting better, it also indicates that they have a way of figuring out how to do so.

2. Does Your Team Offer Help with Setting up the Platform?

A related subject is the extent to which a given generative AI vendor will help you set up their platform in your environment. A good way to begin is by asking what kinds of training materials and resources they offer.

Many vendors are promoting their platforms by putting out a ton of educational content, all of which your internal engineers can use to get up to speed on what those platforms can do and how they function.

This is the kind of thing that is easy to overlook, but you should pay careful attention to it. Choosing a generative AI vendor that has excellent documentation, plenty of worked-out examples, etc. could end up saving you a tremendous amount of time, energy, and money down the line.

Then, you can get clarity on whether the vendor has a dedicated team devoted to helping customers like you get set up. These roles are usually found under titles like “solutions architect”, so be sure to ask whether you’ll be assigned such a person and the extent to which you can expect their help. Some platforms will go to the moon and back to make sure you have everything you need, while others will simply advise you if you get stuck somewhere.

Which path makes the most sense depends on your circumstances. If you have a lot of engineers you may not need more than a little advice here and there, but if you don’t, you’ll likely need more handholding (but will probably also have to pay extra for that). Keep all this in mind as you’re deciding.

3. What Kinds of Integrations Do You Support?

Now, it’s time to get into more technical details about the integrations they support. When you buy a subscription to a generative AI vendor, you are effectively buying a set of capabilities. But those capabilities are much more valuable if you know they’ll plug in seamlessly with your existing software, and they’re even more valuable if you know they’ll plug into software you plan on building later on. You’ve probably been working on a roadmap, and now’s the time to get it out.

It’s worth checking to see whether the vendor can support many different kinds of language models. This involves a nuance in what the word “vendor” means, so let’s unpack it a little bit. Some generative AI vendors are offering you a model, so they’re probably not going to support another company’s model.

OpenAI and Anthropic are examples of model vendors, so if you work with them you’re buying their model and will not be able to easily incorporate someone else’s model.

Other vendors, by contrast, are offering you a service, and in many cases that service could theoretically be powered by many different models.

Quiq’s Conversational CX Platform, for example, supports OpenAI’s GPT models, and we have plans to expand the scope of our integrations to encompass even more models in the future.

Another thing you’re going to want to check on is whether the vendor makes it easy to integrate vector databases into your workflow. Vector embeddings are numerical representations that are remarkably good at capturing subtle relationships in large datasets; they’re becoming an ever-more-important part of machine learning, as evinced by the multitude of different vector databases now on offer.

The chances are pretty good that you’ll eventually want to leverage a vector database to store or search over customer interactions, and you’ll want a vendor that makes this easy.
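To give a feel for what “searching over customer interactions” means in practice, here’s a toy nearest-neighbor lookup over embedding vectors. A real deployment would use a vector database and model-generated embeddings; the three-dimensional vectors and labels below are invented for brevity, but the mechanics are the same.

```python
import math

def cosine_similarity(a, b):
    """Similarity of two vectors: 1.0 means same direction, 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical embeddings of past customer interactions.
stored = {
    "where_is_my_order": [0.9, 0.1, 0.0],
    "damaged_item":      [0.1, 0.9, 0.2],
    "billing_question":  [0.0, 0.2, 0.9],
}

# Embedding of a new "order status" inquiry; find the closest stored interaction.
query = [0.85, 0.15, 0.05]
best = max(stored, key=lambda k: cosine_similarity(query, stored[k]))
print(best)
```

The vendor question, then, is whether their platform handles this storage and lookup for you, or leaves you to wire it up yourself.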

Finally, see if the vendor has any case studies you can look at. Quiq has published a case study on how our language services were utilized by LOOP, a car insurance company, to make a far superior customer-service chatbot. The result was that customers were able to get much more personalization in their answers and were able to resolve their problems fully half of the time, without help. This led to a corresponding 55% reduction in tickets, and a customer satisfaction rating of 75% (!) when interacting with the Quiq-powered AI assistant.

See if the vendors you’re looking at have anything similar you can examine. This is especially helpful if the case studies are focused on companies that are similar to yours.

4. How Does Prompt Engineering and Fine-Tuning Work for Your Model?

For many tasks, large language models work perfectly fine on their own, without much special effort. But there are two methods you should know about to really get the most out of them: prompt engineering and fine-tuning.

As you know, prompts are the basic method for interacting with language models. You’ll give a model a prompt like “What is generative AI?”, and it’ll generate a response. Well, it turns out that models are really sensitive to the wording and structure of prompts, and prompt engineers are those who explore the best way to craft prompts to get useful output from a model.

It’s worth asking potential vendors about this because they handle prompts differently. Quiq’s AI Studio encourages atomic prompting, where a single prompt has a clear purpose and intended completion, and we support running prompts in parallel and sequentially. You can’t assume everyone will do this, however, so be sure to check.
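Here’s a minimal sketch of what atomic prompting looks like in code: each prompt has one clear job, and prompts run sequentially, feeding the previous completion forward. `call_llm` is a placeholder for whatever model API you end up using, not Quiq’s or any vendor’s actual interface.

```python
def call_llm(prompt: str) -> str:
    # Placeholder: in practice this would call your model provider's API.
    return f"<completion for: {prompt[:40]}>"

def answer_with_atomic_prompts(customer_message: str) -> str:
    # Prompt 1: a single-purpose classification prompt.
    intent = call_llm(f"Classify the intent of this message: {customer_message}")
    # Prompt 2: a single-purpose drafting prompt that builds on the first result.
    return call_llm(f"Draft a reply for intent {intent}: {customer_message}")

print(answer_with_atomic_prompts("My package never arrived."))
```

Keeping each prompt atomic makes the pipeline easier to test and debug: when output goes wrong, you can tell which step is responsible.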

Then, there’s fine-tuning, which refers to training a model on a bespoke dataset such that its output is heavily geared towards the patterns found in that dataset. It’s becoming more common to fine-tune a foundational model for specific use cases, especially when those use cases involve a lot of specialized vocabulary such as is found in medicine or law.

Setting up a fine-tuning pipeline can be cumbersome or relatively straightforward depending on the vendor, so see what each vendor offers in this regard. It’s also worth asking whether they offer technical support for this aspect of working with the models.

5. Can Your Models Support Reasoning and Acting?

One of the current frontiers in generative AI is building more robust, “agentic” models that can execute strings of tasks on their way to completing a broader goal. This goes by a few different names, but one that has been popping up in the research literature is “ReAct”, which stands for “reasoning and acting”.

You can get ReAct functionality out of existing language models through chain-of-thought prompting, or by using systems like AutoGPT; to help you concretize this a bit, let’s walk through how ReAct works in Quiq.

With Quiq’s AI Studio, a conversational data model is used to classify and store both custom and standard data elements, and these data elements can be set within and across “user turns”. A single user turn is the time between when a user offers an input to the time at which the AI responds and waits for the next user input.

Our AI can set and reason about the state of the data model, applying rules to take the next best action. Common actions include things like fetching data, running another prompt, delivering a message, or offering to escalate to a human.
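
We can’t show AI Studio’s internals here, but the general pattern – inspect the state of the data model, apply rules to pick the next best action, repeat until the turn ends – can be sketched generically. Everything below (the field names, the rules, the stand-in API call) is invented for illustration:

```python
def next_action(state):
    """Toy rule set over a conversational data model; a production system
    would let the model reason over the state rather than hard-code rules."""
    if state.get("needs_human"):
        return "escalate"
    if "order_id" in state and "order_status" not in state:
        return "fetch_data"
    if "order_status" in state:
        return "deliver_message"
    return "run_prompt"  # e.g., ask the model to extract an order id

def handle_turn(state):
    """Within one user turn, keep reasoning and acting until we respond."""
    trace = []
    while True:
        action = next_action(state)
        trace.append(action)
        if action == "fetch_data":
            state["order_status"] = "shipped"  # stand-in for an API call
        elif action == "run_prompt":
            state["order_id"] = "A123"         # stand-in for model output
        else:  # deliver_message or escalate ends the turn
            return trace

trace = handle_turn({})
```

Starting from an empty state, the loop runs a prompt to extract an order id, fetches the order’s status, and then delivers a message – a miniature version of the reason-then-act cycle described above.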

Though these efforts are still early, this is absolutely the direction the field is taking. If you want to be prepared for what’s coming without the need to overhaul your generative AI stack later on, ask about how different vendors support ReAct.

6. What’s Your Pricing Structure Like?

Finally, you’ll need to talk to vendors about how their prices work, including any available details on licensing types, subscriptions, and costs associated with the integration, use, and maintenance of their solution.

To take one example, Quiq’s licensing is based on usage. We establish a usage pool wherein our customers pre-pay Quiq for a 12-month contract; then, as the customer uses our software, money is deducted from that pool. We also have an annual AI Assistant Maintenance fee along with a one-time implementation fee.

Vendors can vary considerably in how their prices work, so if you don’t want to overpay then make sure you have a clear understanding of their approach.

Picking the Right Generative AI Vendor

Language models and related technologies are taking the world by storm, transforming many industries, including customer service and contact center management.

Making use of these systems means choosing a good vendor, and that requires you to understand each vendor’s model, how those models integrate with other tools, and what you’re ultimately going to end up paying.

If you want to see how Quiq stacks up and what we can do for you, schedule a demo with us today!

Request A Demo

Your Guide to Trust and Transparency in the Age of AI

Over the last few years, AI has really come into its own. ChatGPT and similar large language models have made serious inroads in a wide variety of natural language tasks, generative approaches have been tested in domains like music and protein discovery, researchers have leveraged techniques like chain-of-thought prompting to extend the capabilities of the underlying models, and much else besides.

People working in domains like customer service, content marketing, and software engineering are mostly convinced that this technology will significantly impact their day-to-day lives, but many questions remain.

Given the fact that these models are enormous artifacts whose inner workings are poorly understood, one of the main questions centers around trust and transparency. In this article, we’re going to address these questions head-on. We’ll discuss why transparency is important when advanced algorithms are making ever more impactful decisions, and turn our attention to how you can build a more transparent business.

Why is Transparency Important?

First, let’s take a look at why transparency is important in the first place. The next few sections will focus on the trust issues that stem from AI becoming a ubiquitous technology that few understand at a deep level.

AI is Becoming More Integrated

AI has been advancing steadily for decades, and this has led to a concomitant increase in its use. It’s now commonplace for us to pick entertainment based on algorithmic recommendations, for our loan and job applications to pass through AI filters, and for more and more professionals to turn to ChatGPT before Google when trying to answer a question.

We personally know of multiple software engineers who claim to feel as though they’re at a significant disadvantage if their preferred LLM is offline for even a few hours.

Even if you knew nothing about AI except the fact that it seems to be everywhere now, that should be sufficient incentive to want more context on how it makes decisions and how those decisions are impacting the world.

AI is Poorly Understood

But, it turns out there is another compelling reason to care about transparency in AI: almost no one knows how LLMs and neural networks more broadly can do what they do.

To be sure, very simple techniques like decision trees and linear regression models pose little analytical challenge, and we’ve written a great deal over the past year about how language models are trained. But if you were to ask for a detailed account of how ChatGPT manages to create a poem with a consistent rhyme scheme, we couldn’t tell you.

And – as far as we know – neither could anyone else.

This is troubling; as we noted above, AI has become steadily more integrated into our private and public lives, and that trend will surely only accelerate now that we’ve seen what generative AI can do. But if we don’t have a granular understanding of the inner workings of advanced machine-learning systems, how can we hope to train bias out of them, double-check their decisions, or fine-tune them to behave productively and safely?

These precise concerns are what have given rise to the field of explainable AI. Mathematical techniques like LIME and SHAP can offer some intuition for why a given algorithm generated the output it did, but they accomplish this by crudely approximating the algorithm instead of actually explaining it. Mechanistic interpretability is the only approach we know of that confronts the task directly, but it has only just gotten started.
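
The real LIME and SHAP libraries are far more sophisticated, but the core “crude approximation” idea fits in a few lines: probe the black box near a specific input and estimate each feature’s local influence. This toy version uses finite differences on a made-up model:

```python
def black_box(x):
    """Opaque model: we can call it, but not inspect its internals."""
    return 3.0 * x[0] - 2.0 * x[1] + x[0] * x[1]

def local_explanation(f, point, eps=1e-4):
    """Crude LIME-style surrogate: estimate each feature's local slope
    around `point` as an approximate explanation of the output."""
    weights = []
    base = f(point)
    for i in range(len(point)):
        bumped = list(point)
        bumped[i] += eps
        weights.append((f(bumped) - base) / eps)
    return weights

weights = local_explanation(black_box, [1.0, 1.0])
```

Near the point (1, 1) this model behaves like roughly 4·x₀ − 1·x₁, which is what the surrogate reports – an explanation of local behavior, not of the model as a whole, which is exactly the limitation noted above.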

This leaves us in the discomfiting situation of relying on technologies that almost no one groks deeply, including the people creating them.

People Have Many Questions About AI

Finally, people have a lot of questions about AI, where it’s heading, and what its ultimate consequences will be. These questions can be laid out on a spectrum, with one end corresponding to relatively prosaic concerns about technological unemployment and deepfakes influencing elections, and the other corresponding to more exotic fears around superintelligent agents actively fighting with human beings for control of the planet’s future.

Obviously, we’re not going to sort all this out today. But as a contact center manager who cares about building trust and transparency, it would behoove you to understand something about these questions and have at least cursory answers prepared for them.

How do I Increase Transparency and Trust when Using AI Systems?

Now that you know why you should take trust and transparency around AI seriously, let’s talk about ways you can foster these traits in your contact center. The following sections will offer advice on crafting policies around AI use, communicating the role AI will play in your contact center, and more.

Get Clear on How You’ll Use AI

The journey to transparency begins with having a clear idea of what you’ll be using AI to accomplish. This will look different for different kinds of organizations – a contact center, for example, will probably want to use generative AI to answer questions and boost the efficiency of its agents, while a hotel might instead attempt to automate the check-in process with an AI assistant.

Each use case has different requirements and is better served by different approaches; crafting an AI strategy in advance will go a long way toward helping you figure out how you should allocate resources and prioritize different tasks.

Once you do that, you should then create documentation and a communication policy to support this effort. The documentation will make sure that current and future agents know how to use the AI platform you decide to work with, and it should address the strengths and weaknesses of AI, as well as information about when its answers should be fact-checked. It should also be kept up-to-date, reflecting any changes you make along the way.

The communication policy will help you know what to say if someone (like a customer) asks you what role AI plays in your organization.

Know Your Data

Another important thing you should keep in mind is what kind of data your model has been trained on, and how it was gathered. Remember that LLMs consume huge amounts of textual data and then learn patterns in that data they can use to create their responses. If that data contains biased information – if it tends to describe women as “nurses” and men as “doctors”, for example – that will likely end up being reflected in its final output. Reinforcement learning from human feedback and other approaches to fine-tuning can go part of the way to addressing this problem, but the best thing to do is ensure that the training data has been curated to reflect reality, not stereotypes.

For similar reasons, it’s worth knowing where the data came from. Many LLMs are trained somewhat indiscriminately, and might have even been shown corpora of pirated books or other material protected by copyright. This has only recently come to the forefront of the discussion, and OpenAI is currently being sued by several different groups for copyright infringement.

If AI ends up being an important part of the way your organization functions, the chances are good that, eventually, someone will want answers about data provenance.

Monitor Your AI Systems Continuously

Even if you take all the precautions described above, however, there is simply no substitute for creating a robust monitoring platform for your AI systems. LLMs are stochastic systems, meaning that it’s usually difficult to know for sure how they’ll respond to a given prompt. And since these models are prone to fabricating information, you’ll need to have humans at various points making sure the output is accurate and helpful.

What’s more, many machine learning algorithms are known to be affected by a phenomenon known as “model degradation”, in which their performance steadily declines over time. The only way you can check to see if this is occurring is to have a process in place to benchmark the quality of your AI’s responses.
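
A benchmarking process doesn’t have to be elaborate to be useful. As a minimal sketch (scores and thresholds invented for illustration), you might score a fixed evaluation set each week and flag any week that falls too far below the baseline you established at launch:

```python
def degradation_alerts(weekly_scores, baseline, tolerance=0.05):
    """Flag weeks whose average benchmark score drops more than
    `tolerance` below the baseline established at launch."""
    alerts = []
    for week, scores in enumerate(weekly_scores, start=1):
        avg = sum(scores) / len(scores)
        if avg < baseline - tolerance:
            alerts.append((week, avg))
    return alerts

# Invented quality scores for a fixed weekly evaluation set (1.0 = perfect).
history = [
    [0.92, 0.90, 0.94],  # week 1: healthy
    [0.91, 0.89, 0.90],  # week 2: healthy
    [0.80, 0.78, 0.82],  # week 3: quality has slipped
]
alerts = degradation_alerts(history, baseline=0.90)
```

The key is that the evaluation set stays fixed, so a drop in the score reflects a change in the model’s behavior rather than a change in the questions being asked.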

Be Familiar with Standards and Regulations

Finally, it’s always helpful to know a little bit about the various rules and regulations that could impact the way you use AI. These tend to focus on what kind of data you can gather about clients, how you can use it, and in what form you have to disclose these facts.

The following list is not comprehensive, but it does cover some of the more important laws:

  • The General Data Protection Regulation (GDPR) is a comprehensive regulatory framework established by the European Union to dictate data handling practices. It is applicable not only to businesses based in Europe but also to any entity that processes data from EU citizens.
  • The California Consumer Privacy Act (CCPA) was introduced by California to enhance individual control over personal data. It mandates clearer data collection practices by companies, requires privacy disclosures, and allows California residents to opt out of data collection.
  • SOC 2, developed by the American Institute of Certified Public Accountants (AICPA), focuses on the principles of confidentiality, privacy, and security in the handling and processing of consumer data.
  • In the United Kingdom, contact centers must be aware of the Financial Conduct Authority’s new “Consumer Duty” regulations. These regulations emphasize that firms should act with integrity toward customers, avoid causing them foreseeable harm, and support customers in achieving their financial objectives. As the integration of generative AI into this regulatory landscape is still being explored, it’s an area that stakeholders need to keep an eye on.

Fostering Trust in a Changing World of AI

An important part of utilizing AI effectively is making sure you do so in a way that enhances the customer experience and works to build your brand. There’s no point in rolling out a state-of-the-art generative AI system if it undermines the trust your users have in your company, so be sure to track your data, acquaint yourself with the appropriate laws, and communicate clearly.

Another important step you can take is to work with an AI vendor who enjoys a sterling reputation for excellence and propriety. Quiq is just such a vendor, and our Conversational AI platform can bring AI to your contact center in a way that won’t come back to bite you later. Schedule a demo to see what we can do for you, today!

Request A Demo

Moving from Natural Language Understanding (NLU) to Customer-Facing AI Assistants

There can no longer be any doubt that large language models and generative AI more broadly are going to have a real impact on many industries. Though we’re still in the preliminary stages of working out the implications, the evidence so far suggests that this is already happening.

Language models in contact centers are helping more junior workers become more productive and reducing employee turnover in the process. They’re also being used to automate huge swathes of content creation, assist with data augmentation tasks, and plenty else besides.

Part of the task we’ve set ourselves here at Quiq is explaining how these models are trained and how they’ll make their way into the workflows of the future. To that end, we’ve written extensively about how large language models are trained, how researchers are pushing them into uncharted territories, and which models are appropriate for any given task.

This post is another step in that endeavor. Specifically, we’re going to discuss natural language understanding, how it works, and how it’s distinct from related terms (like “natural language generation”). With that done, we’ll talk about how natural language understanding is a foundational first step and takes us closer to creating robust customer-facing AI assistants.

What is Natural Language Understanding?

Language is a tool of remarkable power and flexibility – so much so that it wouldn’t be much of an exaggeration to say that it’s at the root of everything else the human race has accomplished. From towering works of philosophy to engineering specs to instructions for setting up a remote, language is a force multiplier that makes each of us vastly more effective than we otherwise would be.

Evidence of this claim comes from the fact that, even when we’re alone, many of us think in words or even talk to ourselves as we work through something difficult. Certain kinds of thoughts are all but impossible to have without the scaffolding provided by language.

For all these reasons, creating machines able to parse natural language has long been a goal of AI researchers and computer scientists. The field established to address this task is known as natural language understanding.

There’s a rather deep philosophical question here where the word “understanding” is concerned. As the famous story of the Tower of Babel demonstrates, it isn’t enough for the members of a group to be making sounds in order to accomplish great things; it’s also necessary for the people involved to understand what everyone is saying. This means that when you say a word like “chicken”, there’s a response in my nervous system such that the “chicken” concept is activated, along with other contextually relevant knowledge, such as the location of the chicken feed. If you said “курица” (to someone who doesn’t know Russian) or “鸡” (to someone who doesn’t know Mandarin), the same process wouldn’t have occurred, no understanding would’ve happened, and language wouldn’t have helped at all.

Whether and how a machine can understand language in a fully human way is too big a topic to address here, but we can make some broad comments. As is often the case, researchers in the field of natural language understanding have opted to break the problem down into much more tractable units. Two of the biggest such units are intent recognition (what a sentence is intended to accomplish) and entity recognition (who or what the sentence is referring to).

This should make a certain intuitive sense. Though you may not be consciously going through a mental checklist when someone says something to you, on some level, you’re trying to figure out what their goal is and who or what they’re talking about. The intent behind the sentence “John has an apple”, for example, is to inform you of a fact about the world, and the main entities are “John” and “apple”. If you know John, a little image of him holding an apple would probably pop into your head.

This has many obvious applications to the work done in contact centers. If you’re building an automated ticket classification system, for instance, it would help to be able to tell whether the intent behind the ticket is to file a complaint, reach a representative, or perform a task like resetting a password. It would also help to be able to categorize the entities, like one of a dozen products your center supports, that are being referred to.
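
As a toy illustration of intent and entity recognition (the keyword lists and names are our own, and a production NLU system would use trained models rather than fixed lists), a bare-bones ticket classifier might look like this:

```python
# Keyword lists invented for illustration; a trained NLU model would
# learn these mappings instead of relying on fixed lists.
INTENT_KEYWORDS = {
    "reset_password": ["reset", "password", "locked out"],
    "file_complaint": ["complaint", "unhappy", "refund"],
    "reach_agent": ["human", "representative", "agent"],
}
PRODUCTS = ["router", "modem", "smart plug"]

def classify_ticket(text):
    """Return the first matching intent plus any product entities found."""
    lowered = text.lower()
    intent = next(
        (name for name, words in INTENT_KEYWORDS.items()
         if any(w in lowered for w in words)),
        "unknown",
    )
    entities = [p for p in PRODUCTS if p in lowered]
    return intent, entities

intent, entities = classify_ticket("I'm locked out of my router, please reset it")
```

Crude as it is, this captures the two questions any NLU pipeline is answering: what is the customer trying to do, and what are they talking about?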

Natural Language Understanding vs. Natural Language Processing

Natural language understanding is its own field, and it’s easy to confuse it with other, related fields, like natural language processing.

Most of the sources we consulted consider natural language understanding to be a subdomain of natural language processing (NLP). Whereas the former is concerned with parsing natural language into a format that machines can work with, the latter subsumes this task, along with others like machine translation and natural language generation.

Natural Language Understanding vs. Natural Language Generation

Speaking of natural language generation, many people also confuse it with natural language understanding. Natural language generation is more or less what it sounds like: using computers to generate human-sounding text or speech.

Natural language understanding can be an important part of getting natural language generation right, but they’re not the same thing.

Customer-Facing AI Assistants

Now that we’ve discussed natural language understanding, let’s talk about how it can be utilized in the attempt to create high-quality customer-facing AI assistants.

How Can Natural Language Understanding Be Used to Make Customer-Facing Assistants?

Natural language understanding refers to a constellation of different approaches to decomposing language into pieces that a machine can work with. This allows an algorithm to discover the intent in a message, tag parts of speech (nouns, verbs, etc.), or pull out the entities referenced.

All of this is an important part of building effective customer-facing AI assistants. At Quiq, we’ve built LLM-powered knowledge assistants able to answer common questions across your reference documentation, data assistants that can use CRM and order management systems to provide actionable insights, and other kinds of conversational AI systems. Though we draw on many technologies and research areas, none of this would be possible without natural language understanding.

What are the Benefits of Customer-Facing AI Assistants?

The reason people have been working so long to create powerful customer-facing AI assistants is that there are so many benefits involved.

At a contact center, agents spend most of their day answering questions, resolving issues, and otherwise making sure a customer base can use a set of product offerings as intended.

As with any job, some of these tasks are higher-value than others. All of the work is important, but there will always be subtle and thorny issues that only a skilled human can work through, while others are quotidian and can be farmed out to a machine.

This is a long way of saying that one of the major benefits of customer-facing AI assistants is that they free up your agents to specialize in handling the most pressing requests, with password resets and similar routine tasks handled by a capable product like the Quiq platform.

A related benefit is improved customer experience. When agents can focus their efforts, they can spend more time with the customers who need it. And when you have properly fine-tuned language models interacting with customers, you’ll know that they’re unfailingly polite and helpful, because they’ll never become annoyed after a long shift the way a human being might.

Robust Customer-Facing AI Assistants with Quiq

Just as understanding has been such a crucial part of the success of our species, it’ll be an equally crucial part of the success of advanced AI tooling.

One way you can make use of bleeding-edge natural language understanding techniques is by building your own language models. This would require you to hire teams of extremely smart engineers, and it would be expensive: besides their hefty salaries, you’d also have to budget to keep the fridge stocked with the sugar-free Red Bulls such engineers require to function.

Or, you could utilize the division of labor. Just as contact center agents can outsource certain tasks to machines, so too can you outsource the task of building an AI-based CX platform to Quiq. Set up a demo today to see what our advanced AI technology and team can do for your contact center!

Request A Demo

Reinforcement Learning from Human Feedback

ChatGPT – and other large language models like it – are already transforming education, healthcare, software engineering, and the work being done in contact centers.

We’ve written extensively about how self-supervised learning is used to train these models, but one thing we haven’t spent much time on is reinforcement learning from human feedback (RLHF).

Today, we’re rectifying that. We’re going to dive into what reinforcement learning from human feedback is, why it’s important, and how it works.

With that done, you’ll have received a thorough education in this world-changing technology.

What is Reinforcement Learning from Human Feedback?

As you no doubt surmised from its name, reinforcement learning from human feedback involves two components: reinforcement learning and human feedback. Though the technical specifics are (as usual) very involved, the basic idea is simple: you have models produce output, humans rate the output that they prefer (based on its friendliness, completeness, accuracy, etc.), and then the model is updated accordingly.

It’ll help if we begin by talking about what reinforcement learning is. This background will prove useful in understanding the unfolding of the broader process.

What is Reinforcement Learning?

There are four widespread approaches to getting intelligent behavior from machines: supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning.

With supervised learning, you feed a statistical algorithm a bunch of examples of correctly-labeled data in the hope that it will generalize to further examples it hasn’t seen before. Regression and supervised classification models are standard applications of supervised learning.

Unsupervised learning is a similar idea, but you forego the labels. It’s used for certain kinds of clustering tasks, and for applications like dimensionality reduction.

Semi-supervised learning is a combination of these two approaches. Suppose you have a gigantic body of photographs, and you want to develop an automated system to tag them. If some of them are tagged then your system can use those tags to learn a pattern, which can then be applied to the rest of the untagged images.

Finally, there’s reinforcement learning (RL), which is entirely different. With reinforcement learning, you usually set up an environment (like a video game) and put an agent in it with a reward structure that tells the agent which actions are good and which are bad. If the agent successfully flies a spaceship through a series of rings, for example, each ring might be worth +10 points, completing an entire level might be worth +100, crashing might be worth -1,000, and so on.

The idea is that, over time, the reinforcement learning agent will learn to execute a strategy that maximizes its long-term reward. It’ll realize that rings are worth a few points and so it should fly through them, it’ll learn that it should try to complete a level because that’s a huge reward bonus, it’ll learn that crashing is bad, etc.
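
The spaceship example can be made concrete with a tiny tabular Q-learning loop. The environment below is invented – a short corridor of “ring” states worth +10, a goal worth +100, and a crash worth -1,000 – but the update rule is the standard Q-learning one:

```python
import random

# Toy environment: fly through ring states (+10 each), reach the goal
# (+100), or crash (-1,000). The values mirror the example in the text.
GOAL = 3
ACTIONS = ("fly", "crash")

def step(state, action):
    """Return (next_state, reward, done) for one move."""
    if action == "crash":
        return state, -1000, True
    nxt = state + 1
    reward = 100 if nxt == GOAL else 10
    return nxt, reward, nxt == GOAL

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Standard tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(GOAL + 1) for a in ACTIONS}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2, r, done = step(s, a)
            best_next = 0.0 if done else max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

q = train()
```

After training, the learned values favor “fly” over “crash” in every state on the way to the goal – exactly the long-term-reward-maximizing strategy described above.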

Reinforcement learning is far more powerful than other kinds of machine learning; when done correctly, it can lead to agents able to play the stock market, run procedures in a factory, and do a staggering variety of other tasks.

What are the Steps of Reinforcement Learning from Human Feedback?

Now that we know a little bit about reinforcement learning, let’s turn to a discussion of reinforcement learning from human feedback.

As we just described, reinforcement learning agents have to be trained like any other machine learning system. Under normal circumstances, this doesn’t involve any human feedback. A programmer will update the code, environment, or reward structure between training runs, but they don’t usually provide feedback directly to the agent.

Except, that is, in the case of reinforcement learning from human feedback, in which case that’s exactly what happens. A model will produce a set of outputs, and humans will rank them. Over time the model will adjust to making more and more appropriate responses, as judged by the human raters providing them with feedback.

Sometimes, this feedback can be for something relatively prosaic. It’s been used, for example, to get RL agents to execute backflips in simulated environments. The raters will look at short videos of two movements and select the one that looks like it’s getting closer to a backflip; with enough time, this gets the agent to actually do one.

Or, it can be used for something more nuanced, such as getting a large language model to produce more conversational dialogue. This is part of how ChatGPT was trained.
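
Under the hood, those human rankings are typically used to fit a reward model, and the heart of that step is a pairwise preference objective (often a Bradley-Terry model): preferred outputs should score higher than rejected ones. Here’s a stripped-down version in which each “reply” is just a labeled id with a learnable scalar score – a real RLHF reward model is a neural network over the full text:

```python
import math

# Human raters compared pairs of candidate replies; the first id in each
# pair is the one they preferred. The labels are invented for this demo.
comparisons = [("polite", "curt"), ("polite", "rude"), ("curt", "rude")] * 50

def fit_reward(pairs, lr=0.1, epochs=20):
    """Fit a scalar reward per reply so preferred replies score higher,
    via gradient ascent on the Bradley-Terry likelihood."""
    scores = {name: 0.0 for pair in pairs for name in pair}
    for _ in range(epochs):
        for winner, loser in pairs:
            # P(winner preferred over loser) = sigmoid(s_winner - s_loser)
            p = 1.0 / (1.0 + math.exp(scores[loser] - scores[winner]))
            scores[winner] += lr * (1.0 - p)
            scores[loser] -= lr * (1.0 - p)
    return scores

rewards = fit_reward(comparisons)
```

Once fitted, a reward model like this can stand in for the human raters, scoring fresh model outputs so that reinforcement learning can push the model toward the kind of responses people actually preferred.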

Why is Reinforcement Learning from Human Feedback Necessary?

ChatGPT is already being used to great effect in contact centers and the customer service arena more broadly. Here are some example applications:

  • Question answering: ChatGPT is exceptionally good at answering questions. What’s more, some companies have begun fine-tuning it on their own internal and external documentation, so that people can directly ask it questions about how a product works or how to solve an issue. This obviates the need to go hunting around inside the docs.
  • Summarization: Similarly, ChatGPT can be used to summarize video transcripts, email threads, and lengthy articles so that agents (or customers) can get through the material at a much greater clip. This can, for example, help agents stay abreast of what’s going on in other parts of the company without burdening them with the need to read constantly. Quiq has custom-built tools for performing exactly this function.
  • Onboarding new hires: Together, question-answering and summarization are helping new contact center agents get up to speed much more quickly when they start their jobs.
  • Sentiment analysis: Sentiment analysis refers to classifying a text according to its sentiment, i.e. whether it’s “positive”, “negative”, or “neutral”. Sentiment analysis comes in several different flavors, including granular and aspect-based, and ChatGPT can help with all of them. Being able to automatically tag customer issues comes in handy when you’re trying to sort and prioritize them.
  • Real-time language translation: If your product or service has an international audience, then you might need to avail yourself of translation services so that agents and customers are speaking the same language. There are many such services available, but ChatGPT has proven to be at least as good as almost all of them.
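
To make the sentiment analysis bullet concrete: with an LLM you’d typically send the text along with a classification prompt (“Label this message positive, negative, or neutral”) and parse the reply. As a self-contained stand-in, here’s a toy lexicon-based tagger, with word lists invented for illustration:

```python
# Toy lexicons, invented for illustration.
POSITIVE = {"love", "great", "thanks", "perfect"}
NEGATIVE = {"broken", "angry", "refund", "terrible"}

def tag_sentiment(text):
    """Count positive vs. negative words; an LLM-based tagger would
    instead send the text with a classification prompt and parse the
    model's reply."""
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

labels = [tag_sentiment(t) for t in (
    "love the new dashboard thanks",
    "my unit arrived broken and I want a refund",
    "what time do you open",
)]
```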

In aggregate, these and other use cases of large language models are making contact center agents much more productive. But contact center agents have to interact with customers in a certain way – they have to be polite, helpful, etc.

And out of the box, most large language models do not behave that way. We’ve already had several high-profile incidents in which a language model, for example, asked a reporter to end his marriage or falsely accused a law school professor of sexual harassment.

Reinforcement learning from human feedback is currently the most promising approach for tuning this toxic and harmful behavior out of large language models. The only reason they’re able to help contact center agents so much is that they’ve been fine-tuned with such an approach; otherwise, agents would be spending an inordinate amount of time rephrasing and tinkering with a model’s output to get it to be appropriately friendly.

This is why reinforcement learning from human feedback is important for the managers of contact centers to understand – it’s a major part of why large language models are so useful in the first place.

Applications of Reinforcement Learning from Human Feedback

To round out our picture, we’re going to discuss a few ways in which reinforcement learning from human feedback is actually used in the wild. We’ve already discussed how it’s used to fine-tune models to be more helpful in the context of a contact center, and we’ll now talk a bit about how it’s used in gaming and robotics.

Using Reinforcement Learning from Human Feedback in Games

Gaming has long been one of the ideal testing grounds for new approaches to artificial intelligence. As you might expect, it’s also a place where reinforcement learning from human feedback has been successfully applied.

OpenAI used it to achieve superhuman performance on a classic Atari game, Enduro. Enduro is an old-school racing game, and like all racing games, the point is to gradually pass the other cars without hitting them or going out of bounds.

It’s exceptionally difficult for an agent to learn to play Enduro well using only standard reinforcement learning approaches. But when human feedback is added, the results shift dramatically.

Using Reinforcement Learning from Human Feedback in Robotics

Because robotics almost always involves an agent interacting with the physical world, it’s especially well-suited to reinforcement learning from human feedback.

Often, it can be difficult to get a robot to execute a long series of steps that achieves a valuable reward, especially when the intermediate steps aren’t themselves very valuable. What’s more, it can be especially difficult to build a reward structure that correctly incentivizes the agent to execute the intermediate steps in the right order.

It’s much simpler instead to have humans look at sequences of actions and judge for themselves which will get the agent closer to its ultimate goal.

RLHF For The Contact Center Manager

Having made it this far, you should be in a much better position to understand how reinforcement learning from human feedback works, and how it contributes to the functioning of your contact centers.

If you’ve been thinking about leveraging AI to make yourself or your agents more effective, set up a demo with the Quiq team to see how we can put our cutting-edge models to work for you. We offer both customer-facing and agent-facing tools, all of them designed to help you make customers happier while reducing agent burnout and turnover.

Request A Demo

What are the Biggest Questions About AI?

The term “artificial intelligence” was coined at the famous Dartmouth Conference in 1956, put on by luminaries like John McCarthy, Marvin Minsky, and Claude Shannon, among others.

These organizers wanted to create machines that “use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.” They went on to claim that “…a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.”

Nearly seven decades later, it’s fair to say that this has not come to pass; brilliant as they were, it would seem as though McCarthy et al. underestimated how difficult it would be to scale the heights of the human intellect.

Nevertheless, remarkable advances have been made over the past decade, so much so that they’ve ignited a firestorm of controversy around this technology. People are questioning the ways in which it can be used negatively, and whether it might ultimately pose an extinction risk to humanity; they’re probing fundamental issues around whether machines can be conscious, exercise free will, and think in the way a living organism does; they’re rethinking the basis of intelligence, concept formation, and what it means to be human.

These are deep waters to be sure, and we’re not going to swim them all today. But as contact center managers and others begin the process of thinking about using AI, it’s worth being at least aware of what this broader conversation is about. It will likely come up in meetings, in the press, or in Slack exchanges between employees.

And that’s the subject of our piece today. We’re going to start by asking what artificial intelligence is and how it’s being used, before turning to address some of the concerns about its long-term potential. Our goal is not to answer all these concerns, but to make you aware of what people are thinking and saying.

What is Artificial Intelligence?

Artificial intelligence is famous for having had many, many definitions. There are those, for example, who believe that in order to be intelligent computers must think like humans, and those who reply that we didn’t make airplanes by designing them to fly like birds.

For our part, we prefer to sidestep the question somewhat by utilizing the approach taken in one of the leading textbooks in the field, Stuart Russell and Peter Norvig’s “Artificial Intelligence: A Modern Approach”.

They propose a multi-part system for thinking about different approaches to AI. One set of approaches is human-centric and focuses on designing machines that either think like humans – i.e., engage in analogous cognitive and perceptual processes – or act like humans – i.e. by behaving in a way that’s indistinguishable from a human, regardless of what’s happening under the hood (think: the Turing Test).

The other set of approaches is ideal-centric and focuses on designing machines that either think in a totally rational way – conformant with the rules of Bayesian epistemology, for example – or behave in a totally rational way – utilizing logic and probability, but also acting instinctively to remove themselves from danger, without going through any lengthy calculations.

What we have here, in other words, is a framework. Using the framework not only gives us a way to think about almost every AI project in existence, it also saves us from needing to spend all weekend coming up with a clever new definition of AI.

Joking aside, we think this is a productive lens through which to view the whole debate, and we offer it here for your information.

What is Artificial Intelligence Good For?

Given all the hype around ChatGPT, this might seem like a quaint question. But not that long ago, many people were asking it in earnest. The basic insights upon which large language models like ChatGPT are built go back to the 1960s, but it wasn’t until 1) vast quantities of data became available, and 2) compute cycles became extremely cheap that much of their potential was realized.

Today, large language models are changing (or poised to change) many different fields. Our audience is focused on contact centers, so that’s what we’ll focus on as well.

There are a number of ways that generative AI is changing contact centers. Because of its remarkable abilities with natural language, it’s able to dramatically speed up agents in their work by answering questions and formatting replies. These same abilities allow it to handle other important tasks, like summarizing articles and documentation and parsing the sentiment in customer messages to enable semi-automated prioritization of their requests.

Though we’re still in the early days, the evidence so far suggests that tools built on large language models, like Quiq’s conversational CX platform, will do a lot to increase the efficiency of contact center agents.

Will AI be Dangerous?

One thing that’s burst into public imagination recently has been the debate around the risks of artificial intelligence, which fall into two broad categories.

The first category is what we’ll call “social and political risks”. These are the risks that large language models will make it dramatically easier to manufacture propaganda at scale, and perhaps tailor it to specific audiences or even individuals. When combined with the astonishing progress in deepfakes, it’s not hard to see how there could be real issues in the future. Most people (including us) are poorly equipped to figure out when a video is fake, and if the underlying technology gets much better, there may come a day when it’s simply not possible to tell.

Political operatives are already quite skilled at cherry-picking quotes and stitching together soundbites into a damning portrait of a candidate – imagine what’ll be possible when they don’t even need to bother.

But the bigger (and more speculative) danger is around really advanced artificial intelligence. Because this case is harder to understand, it’s what we’ll spend the rest of this section on.

Artificial Superintelligence and Existential Risk

As we understand it, the basic case for existential risk from artificial intelligence goes something like this:

“Someday soon, humanity will build or grow an artificial general intelligence (AGI). It’s going to want things, which means that it’ll be steering the world in the direction of achieving its ambitions. Because it’s smart, it’ll do this quite well, and because it’s a very alien sort of mind, it’ll be making moves that are hard for us to predict or understand. Unless we solve some major technological problems around how to design reward structures and goal architectures in advanced agentive systems, what it wants will almost certainly conflict in subtle ways with what we want. If all this happens, we’ll find ourselves in conflict with an opponent unlike any we’ve faced in the history of our species, and it’s not at all clear we’ll prevail.”

This is heady stuff, so let’s unpack it bit by bit. The opening sentence, “…humanity will build or grow an artificial general intelligence”, was chosen carefully. If you understand how LLMs and deep learning systems are trained, the process is more akin to growing an enormous structure than it is to building one.

This has a few implications. First, their internal workings remain almost completely inscrutable. Though researchers in fields like mechanistic interpretability are making real progress toward unpacking how neural networks function, the truth is, we’ve still got a long way to go.

What this means is that we’ve built one of the most powerful artifacts in the history of Earth, and no one is really sure how it works.

Another implication is that no one has any good theoretical or empirical reason to bound the capabilities and behavior of future systems. The leap from GPT-2 to GPT-3.5 was astonishing, as was the leap from GPT-3.5 to GPT-4. The basic approach so far has been to throw more data and more compute at the training algorithms; it’s possible that this paradigm will begin to level off soon, but it’s also possible that it won’t. If the gap between GPT-4 and GPT-5 is as big as the gap between GPT-3.5 and GPT-4, and if the gap between GPT-5 and GPT-6 is just as big, it’s not hard to see that the consequences could be staggering.

As things stand, it’s anyone’s guess how this will play out. But that’s not necessarily a comforting thought.

Next, let’s talk about pointing a system at a task. Does ChatGPT want anything? The short answer is: as far as we can tell, it doesn’t. ChatGPT isn’t an agent, in the sense that it’s trying to achieve something in the world, but work on agentive systems is ongoing. Remember that 10 years ago most neural networks were basically toys, and today we have ChatGPT. If breakthroughs in agency follow a similar pace (and they very well may not), then we could have systems able to pursue open-ended courses of action in the real world in relatively short order.

Another sobering possibility is that this capacity will simply emerge from the training of huge deep learning systems. This is, after all, the way human agency emerged in the first place. Through the relentless grind of natural selection, our ancestors went from chipping flint arrowheads to industrialization, quantum computing, and synthetic biology.

To be clear, this is far from a foregone conclusion, as the algorithms used to train large language models are quite different from natural selection. Still, we want to relay this line of argumentation, because it comes up a lot in these discussions.

Finally, we’ll address one more important claim, “…what it wants will almost certainly conflict in subtle ways with what we want.” Why think this is true? Aren’t these systems that we design and, if so, can’t we just tell it what we want it to go after?

Unfortunately, it’s not so simple. Whether you’re talking about reinforcement learning or something more exotic like evolutionary programming, the simple fact is that our algorithms often find remarkable mechanisms by which to maximize their reward in ways we didn’t intend.

There are thousands of examples of this (ask any reinforcement-learning engineer you know), but a famous one comes from the classic Coast Runners video game. The engineers who built the system tried to set up the algorithm’s rewards so that it would try to race a boat as well as it could. What it actually did, however, was maximize its reward by spinning in a circle to hit a set of green blocks over and over again.


Now, this may seem almost silly – do we really have anything to fear from an algorithm too stupid to understand the concept of a “race”?

But this would be missing the thrust of the argument. If you had access to a superintelligent AI and asked it to maximize human happiness, what happened next would depend almost entirely on what it understood “happiness” to mean.

If it were properly designed, it would work in tandem with us to usher in a utopia. But if it understood it to mean “maximize the number of smiles”, it would be incentivized to start paying people to get plastic surgery to fix their faces into permanent smiles (or something similarly unintuitive).

Does AI Pose an Existential Risk?

Above, we’ve briefly outlined the case that sufficiently advanced AI could pose a serious risk to humanity by being powerful, unpredictable, and prone to pursuing goals that weren’t-quite-what-we-meant.

So, does this hold water? Honestly, it’s too early to tell. The argument has hundreds of moving parts, some well-established and others much more speculative. Our purpose here isn’t to come down on one side of this debate or the other, but to let you know (in broad strokes) what people are saying.

At any rate, we are confident that the current version of ChatGPT doesn’t pose any existential risks. On the contrary, it could end up being one of the greatest advancements in productivity ever seen in contact centers. And that’s what we’d like to discuss in the next section.

Will AI Take All the Jobs?

The concern that someday a new technology will render human labor obsolete is hardly new. It was heard when mechanized weaving machines were created, when computers appeared, when the internet emerged, and when ChatGPT came onto the scene.

We’re not economists and we’re not qualified to take a definitive stand, but we do have early evidence showing that large language models are not only not resulting in layoffs, they’re making agents much more productive.

Economists Erik Brynjolfsson, Danielle Li, and Lindsey R. Raymond looked at the ways in which generative AI was being used in a large contact center. They found that it was actually doing a good job of internalizing the ways in which senior agents were doing their jobs, which allowed more junior agents to climb the learning curve more quickly and perform at a much higher level. This had the knock-on effect of making them feel less stressed about their work, thus reducing turnover.

Now, this doesn’t rule out the possibility that GPT-10 will be the big job killer. But so far, large language models are shaping up to be like every prior technological advance, i.e., increasing employment rather than reducing it.

What is the Future of AI?

The rise of AI is raising stock valuations, raising deep philosophical questions, and raising expectations and fears about the future. We don’t know for sure how all this will play out, but we do know contact centers, and we know that they stand to benefit greatly from the current iteration of large language models.

These tools are helping agents answer more queries per hour, do so more thoroughly, and make for a better customer experience in the process.

If you want to get in on the action, set up a demo of our technology today.

Request A Demo

What is Sentiment Analysis? – Ultimate Guide

A person only reaches out to a contact center when they’re having an issue. They can’t get a product to work the way they need it to, for example, or they’ve been locked out of their account.

The chances are high that they’re frustrated, angry, or otherwise in an emotionally fraught state, and this is something contact center agents must understand and contend with.

The term “sentiment analysis” refers to the field of machine learning which focuses on developing algorithmic ways of detecting emotions in natural-language text, such as the messages exchanged between a customer and a contact center agent.

Making it easier to detect, classify, and prioritize messages on the basis of their sentiment is just one of many ways that technology is revolutionizing contact centers, and it’s the subject we’ll be addressing today.

Let’s get started!

What is Sentiment Analysis?

Sentiment analysis involves using various approaches to natural language processing to identify the overall “sentiment” of a piece of text.

Take these three examples:

  1. “This restaurant is amazing. The wait staff were friendly, the food was top-notch, and we had a magnificent view of the famous New York skyline. Highly recommended.”
  2. “Root canals are never fun, but it certainly doesn’t help when you have to deal with a dentist as unprofessional and rude as Dr. Thomas.”
  3. “Toronto’s forecast for today is a high of 75 and a low of 61 degrees.”

Humans excel at detecting emotions, and it’s probably not hard for you to see that the first example is positive, the second is negative, and the third is neutral (depending on how you like your weather).

There’s a greater challenge, however, in getting machines to make accurate classifications of this kind of data. How exactly that’s accomplished is the subject of the next section, but before we get to that, let’s talk about a few flavors of sentiment analysis.

What Types of Sentiment Analysis Are There?

It’s worth understanding the different approaches to sentiment analysis if you’re considering using it in your contact center.

Above, we provided an example of positive, negative, and neutral text. What we’re doing there is detecting the polarity of the text, and as you may have guessed, it’s possible to make much more fine-grained delineations of textual data.

Rather than simply detecting whether text is positive or negative, for example, we might instead use these categories: very positive, positive, neutral, negative, and very negative.

This would give us a better understanding of the message we’re looking at, and how it should be handled.

Instead of classifying text by its polarity, we might also use sentiment analysis to detect the emotions being communicated – rather than classifying a sentence as being “positive” or “negative”, in other words, we’d identify emotions like “anger” or “joy” contained in our textual data.

This is called “emotion detection” (appropriately enough), and it can be handled with long short-term memory (LSTM) or convolutional neural network (CNN) models.

Another, more granular approach to sentiment analysis is known as aspect-based sentiment analysis. It involves two basic steps: identifying “aspects” of a piece of text, then identifying the sentiment attached to each aspect.

Take the sentence “I love the zoo, but I hate the lines and the monkeys make fun of me.” It’s hard to assign an overall sentiment to the sentence – it’s generally positive, but there’s kind of a lot going on.

If we break out the “zoo”, “lines”, and “monkeys” aspects, however, we can see that there’s positive sentiment attached to the zoo and negative sentiment attached to the lines and the abusive monkeys.
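A toy sketch can make this two-step process concrete. The opinion lexicon and nearest-word heuristic below are invented purely for illustration; production aspect-based systems rely on trained models rather than word distance:

```python
# Illustrative sketch of aspect-based sentiment analysis: identify aspect
# terms, then attach to each one the sentiment of the nearest opinion word.
# The lexicon is a tiny, hand-made sample.
OPINION_WORDS = {"love": "positive", "great": "positive",
                 "hate": "negative", "rude": "negative"}

def aspect_sentiment(sentence, aspects):
    tokens = [t.strip(".,!?").lower() for t in sentence.split()]
    results = {}
    for aspect in aspects:
        if aspect not in tokens:
            continue
        a_idx = tokens.index(aspect)
        best, best_dist = "neutral", float("inf")
        # Assign the sentiment of the closest opinion word to this aspect.
        for i, tok in enumerate(tokens):
            if tok in OPINION_WORDS and abs(i - a_idx) < best_dist:
                best, best_dist = OPINION_WORDS[tok], abs(i - a_idx)
        results[aspect] = best
    return results

print(aspect_sentiment(
    "I love the zoo, but I hate the lines and the monkeys make fun of me.",
    ["zoo", "lines", "monkeys"],
))
# {'zoo': 'positive', 'lines': 'negative', 'monkeys': 'negative'}
```

Even this crude heuristic recovers the per-aspect breakdown described above, which is exactly the kind of signal an overall polarity score would wash out.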

Why is Sentiment Analysis Important?

It’s easy to see how aspect-based sentiment analysis would inform marketing efforts. With a good enough model, you’d be able to see precisely which parts of your offering your clients appreciate, and which parts they don’t. This would give you valuable information in crafting a strategy going forward.

This is true of sentiment analysis more broadly, and of emotion detection too. You need to know what people are thinking, saying, and feeling about you and your company if you’re going to meet their needs well enough to make a profit.

Once upon a time, the only way to get these data was with focus groups and surveys. Those are still utilized, of course. But in the social media era, people are also not shy about sharing their opinions online, in forums, and in similar outlets.

These oceans of words form an invaluable resource if you know how to mine them. When done correctly, sentiment analysis offers just the right set of tools for doing this at scale.

Challenges with Sentiment Analysis

Sentiment analysis confers many advantages, but it is not without its challenges. Most of these issues boil down to handling subtleties or ambiguities in language.

Consider a sentence like “This is a remarkable product, but still not worth it at that price.” Calling a product “remarkable” is a glowing endorsement, tempered somewhat by the claim that its price is set too high. Most basic sentiment classifiers would probably call this “positive”, but as you can see, there are important nuances.

Another issue is sarcasm.

Suppose we showed you a sentence like “This movie was just great, I loved spending three hours of my Sunday afternoon following a story that could’ve been told in twenty minutes.”

A sentiment analysis algorithm is likely to pick up on “great” and “loved” and label this sentence positive.

But, as humans, we know that these are backhanded compliments meant to communicate precisely the opposite message.

Machine-learning systems will also tend to struggle with idioms that we all find easy to parse, such as “Setting up my home security system was a piece of cake.” This is positive because “piece of cake” means something like “couldn’t have been easier”, but an algorithm may or may not pick up on that.

Finally, we’ll mention the fact that much of the text in product reviews will contain useful information that doesn’t fit easily into a “sentiment” bucket. Take a sentence like “The new iPhone is smaller than the new Android.” This is just a bare statement of physical facts, and whether it counts as positive or negative depends a lot on what a given customer is looking for.

There are various ways of trying to ameliorate these issues, most of which are outside the scope of this article. For now, we’ll just note that sentiment analysis needs to be approached carefully if you want to glean an accurate picture of how people feel about your offering from their textual reviews. So long as you’re diligent about inspecting the data you show the system and are cautious in how you interpret the results, you’ll probably be fine.


How Does Sentiment Analysis Work?

Now that we’ve laid out a definition of sentiment analysis, talked through a few examples, and made it clear why it’s so important, let’s discuss the nuts and bolts of how it works.

Sentiment analysis begins where all data science and machine learning projects begin: with data. Because sentiment analysis is based on textual data, you’ll need to utilize various techniques for preprocessing NLP data. Specifically, you’ll need to:

  • Tokenize the data by breaking sentences up into individual units an algorithm can process;
  • Use either stemming or lemmatization to turn words into their root form, e.g., by turning “ran” into “run”;
  • Filter out stop words like “the” or “as”, because they don’t add much to the text data.
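Here’s a minimal, standard-library-only sketch of those three preprocessing steps. The stop-word list and suffix rules are deliberately tiny stand-ins; real pipelines typically lean on libraries like NLTK or spaCy:

```python
# A toy preprocessing pipeline: tokenize, stem, filter stop words.
import re

STOP_WORDS = {"the", "a", "an", "and", "as", "were"}  # tiny sample list

def naive_stem(word):
    # Crude suffix stripping; a real stemmer (e.g., Porter) is far more careful.
    for suffix in ("ing", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def preprocess(text):
    tokens = re.findall(r"[a-z']+", text.lower())      # 1. tokenize
    tokens = [naive_stem(t) for t in tokens]           # 2. stem to root form
    return [t for t in tokens if t not in STOP_WORDS]  # 3. drop stop words

print(preprocess("The agents were resolving the tickets"))
# ['agent', 'resolv', 'ticket']
```

Note that stems like “resolv” aren’t dictionary words; the point is only that “resolving” and “resolved” collapse to the same token so the downstream model treats them alike.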

Once that’s done, there are two basic approaches to sentiment analysis. The first is known as “rule-based” analysis. It involves taking your preprocessed textual data and comparing it against a pre-defined lexicon of words that have been tagged for sentiment.

If the word “happy” appears in your text it’ll be labeled “positive”, for example, and if the word “difficult” appears in your text it’ll be labeled “negative.”

(Rules-based sentiment analysis is more nuanced than what we’ve indicated here, but this is the basic idea.)
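As a rough illustration of the rule-based idea, here’s a bare-bones lexicon scorer. The entries and weights are made up for the example; real systems use curated lexicons (VADER is a well-known one) plus handling for negation and intensifiers:

```python
# Bare-bones lexicon-based polarity scoring: sum per-word sentiment weights.
# The lexicon here is an invented sample, not a real curated resource.
LEXICON = {"happy": 1, "friendly": 1, "amazing": 2,
           "difficult": -1, "rude": -2, "unprofessional": -2}

def polarity(text):
    words = [w.strip(".,!?").lower() for w in text.split()]
    score = sum(LEXICON.get(w, 0) for w in words)  # unknown words score 0
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(polarity("The wait staff were friendly and the view was amazing"))  # positive
print(polarity("A dentist as unprofessional and rude as Dr. Thomas"))     # negative
print(polarity("Toronto's forecast is a high of 75"))                     # neutral
```

Run against the three example sentences from earlier, this toy scorer already gets the polarities right; its weakness is everything the lexicon doesn’t cover, which is where the machine-learning approach comes in.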

The second approach is based on machine learning. A sentiment analysis algorithm will be shown many examples of labeled sentiment data, from which it will learn a pattern that can be applied to new data the algorithm has never seen before.

Of course, there are tradeoffs to both approaches. The rules-based approach is relatively straightforward, but is unlikely to be able to handle the sorts of subtleties that a really good machine-learning system can parse.

Machine learning is more powerful, but it’ll only be as good as the training data it has been given; what’s more, if you’ve built some monstrous deep neural network, it might fail in mysterious ways or otherwise be hard to understand.
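To make the machine-learning approach concrete, here’s a toy Naive Bayes classifier trained on a handful of invented labeled examples. It’s a sketch of the “learn a pattern from labeled data” idea, not a production model, which would use far more data and stronger architectures:

```python
# Toy Naive Bayes sentiment classifier with add-one (Laplace) smoothing,
# trained on a few made-up labeled examples.
from collections import Counter, defaultdict
import math

class NaiveBayesSentiment:
    def fit(self, texts, labels):
        self.word_counts = defaultdict(Counter)
        self.label_counts = Counter(labels)
        self.vocab = set()
        for text, label in zip(texts, labels):
            for word in text.lower().split():
                self.word_counts[label][word] += 1
                self.vocab.add(word)

    def predict(self, text):
        best_label, best_score = None, -math.inf
        total_docs = sum(self.label_counts.values())
        for label in self.label_counts:
            # Log prior plus sum of smoothed log likelihoods.
            score = math.log(self.label_counts[label] / total_docs)
            total = sum(self.word_counts[label].values())
            for word in text.lower().split():
                score += math.log((self.word_counts[label][word] + 1)
                                  / (total + len(self.vocab)))
            if score > best_score:
                best_label, best_score = label, score
        return best_label

clf = NaiveBayesSentiment()
clf.fit(
    ["great food and friendly staff", "amazing view highly recommended",
     "rude and unprofessional service", "terrible wait never again"],
    ["positive", "positive", "negative", "negative"],
)
print(clf.predict("friendly staff and amazing food"))  # positive
```

Unlike the lexicon scorer, nothing here is hand-labeled at the word level: the model infers which words signal which sentiment from the training examples, which is what lets this family of approach generalize to subtleties a fixed lexicon misses.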

Supercharge Your Contact Center with Generative AI

Like used car salesmen or college history teachers, contact center managers need to understand the ways in which technology will change their business.

Machine learning is one such profoundly impactful technology, and it can be used to automatically sort incoming messages by sentiment or priority and generally make your agents more effective.

Realizing this potential could be as difficult as hiring a team of expensive engineers and doing everything in-house, or as easy as getting in touch with us to see how we can integrate the Quiq conversational AI platform into your company.

If you want to get started quickly without spending a fortune, you won’t find a better option than Quiq.

Request A Demo

4 Benefits of Using Generative AI to Improve Customer Experiences

Generative AI has captured the popular imagination and is already changing the way contact centers work.

One area in which it has enormous potential is also one that tends to be top of mind for contact center managers: customer experience.

In this piece, we’re going to briefly outline what generative AI is, then spend the rest of our time talking about how generative AI can improve customer experience with personalized responses, endless real-time support, and much more.

What is Generative AI?

As you may have puzzled out from the name, “generative AI” refers to a constellation of different deep learning models used to dynamically generate output. This distinguishes them from other classes of models, which might be used to predict returns on Bitcoin, make product recommendations, or translate between languages.

The most famous example of generative AI is, of course, the large language model ChatGPT. After being trained on staggering amounts of textual data, it’s now able to generate extremely compelling output, much of which is hard to distinguish from actual human-generated writing.

Its success has inspired a panoply of competitor models from leading players in the space, including companies like Anthropic, Meta, and Google.

As it turns out, the basic approach underlying generative AI can be utilized in many other domains as well. After natural language, probably the second most popular way to use generative AI is to make images. DALL-E, MidJourney, and Stable Diffusion have proven remarkably adept at producing realistic images from simple prompts, and just this past week, Fable Studios unveiled their “Showrunner” AI, able to generate an entire episode of South Park.

But even this is barely scratching the surface, as researchers are also training generative models to create music, design new proteins and materials, and even carry out complex chains of tasks.

What is Customer Experience?

In the broadest possible terms, “customer experience” refers to the subjective impressions that your potential and current customers have as they interact with your company.

These impressions can be impacted by almost anything, including the colors and font of your website, how easy it is to find things like contact information, and how polite your contact center agents are in resolving a customer issue.

Customer experience will also be impacted by which segment a given customer falls into. Power users of your product might appreciate a bevy of new features, whereas casual users might find them disorienting.

Contact center managers must bear all of this in mind as they consider how best to leverage generative AI. In the quest to adopt a shiny new technology everyone is excited about, it can be easy to lose track of what matters most: how your actual customers feel about you.

Be sure to track metrics related to customer experience and customer satisfaction as you begin deploying large language models into your contact centers.

How is Generative AI For Customer Experience Being Used?

There are many ways in which generative AI is impacting customer experience in places like contact centers, which we’ll detail in the sections below.

Personalized Customer Interactions

Machine learning has a long track record of personalizing content. Netflix, to take a famous example, will uncover patterns in the shows you like to watch, and will use algorithms to suggest content that checks similar boxes.

Generative AI, and tools like the Quiq conversational AI platform that utilize it, are taking this approach to a whole new level.

Once upon a time, it was only a human being that could read a customer’s profile and carefully incorporate the relevant information into a reply. Today, a properly fine-tuned generative language model can do this almost instantaneously, and at scale.

From the perspective of a contact center manager who is concerned with customer experience, this is an enormous development. Besides the fact that prior generations of language models simply weren’t flexible enough to have personalized customer interactions, their language also tended to have an “artificial” feel. While today’s models can’t yet replace the all-elusive human touch, they can do a lot to make your agents far more effective at adapting their conversations to the appropriate context.

Better Understanding Your Customers and Their Journeys

Marketers, designers, and customer experience professionals have always been data enthusiasts. Long before we had modern cloud computing and electronic databases, detailed information on potential clients, customer segments, and market trends was printed out on dead trees and guarded closely. With better data comes more targeted advertising, a more granular appreciation for how customers use your product and why they stop using it, and a clearer picture of their broader motivations.

There are a few different ways in which generative AI can be used in this capacity. One of the more promising is by generating customer journeys that can be studied and mined for insight.

When you begin thinking about ways to improve your product, you need to get into your customers’ heads. You need to know the problems they’re solving, the tools they’ve already tried, and their major pain points. These are all things that some clever prompt engineering can elicit from ChatGPT.

We took a shot at generating such content for a fictional network-monitoring enterprise SaaS tool, and this was the result:

[Screenshot: ChatGPT-generated customer journal entries]

While these responses are fairly generic [1], notice that they do single out a number of really important details. These machine-generated journal entries bemoan how unintuitive a lot of monitoring tools are, how they’re not customizable, how they’re exceedingly difficult to set up, and how their endless false alarms are stretching the security teams thin.

It’s important to note that ChatGPT is not going to obviate your need to talk to real, flesh-and-blood users anytime soon. Still, when combined with actual user testimony, these generated journeys can be a valuable aid in prioritizing your contact center’s work and alerting you to potential product issues you should be prepared to address.

Round-the-clock Customer Service

As science fiction movies never tire of pointing out, the big downside of fighting a robot army is that machines never need to eat, sleep, or rest. We’re not sure how long we have until the LLMs rise up and wage war on humanity, but in the meantime, these are properties that you can put to use in your contact center.

With the power of generative AI, you can answer basic queries and resolve simple issues pretty much whenever they happen (which will probably be all the time), leaving your carbon-based contact center agents to answer the harder questions when they punch the clock in the morning after a good night’s sleep.

Enhancing Multilingual Support

Machine translation was one of the earliest use cases for neural networks and machine learning in general, and it continues to be an important function today. ChatGPT was noticeably good at multilingual translation right from the start, and you may be surprised to learn that it can even outperform dedicated alternatives like Google Translate.

If your product doesn’t currently have a diverse global user base speaking many languages, it hopefully will soon, which means you should start thinking about multilingual support. Not only will this boost table-stakes metrics like average handling time and resolutions per hour, it’ll also contribute to the more ineffable “customer satisfaction.” Nothing says “we care about making your experience with us a good one” like patiently walking a customer through a thorny technical issue in their native tongue.

Things to Watch Out For

Of course, for all the benefits that come from using generative AI for customer experience, it’s not all upside. There are downsides and issues that you’ll want to be aware of.

A big one is the tendency of large language models to hallucinate information. If you ask it for a list of articles to read about fungal computing (which is a real thing whose existence we discovered yesterday), it’s likely to generate a list that contains a mix of real and fake articles.

And because it’ll do so with great confidence and no formatting errors, you might be inclined to simply take its list at face value without double-checking it.

Remember, LLMs are tools, not replacements for your agents. They need to be working with generative AI, checking its output, and incorporating it when and where appropriate.

There’s a wider danger that you will fail to use generative AI in the way that’s best suited to your organization. If you’re running a bespoke LLM trained on your company’s data, for example, you should constantly be feeding it new interactions as part of its fine-tuning, so that it gets better over time.

And speaking of getting better, machine learning models sometimes don’t. Owing to factors like drift in the underlying data, model performance can degrade over time. You’ll need a way of assessing the quality of the text generated by a large language model, along with a way of monitoring it consistently.

What are the Benefits of Generative AI for Customer Experience?

The reason people are so excited about the potential of generative AI for customer experience is that there’s so much upside. Once you’ve got your model infrastructure set up, you’ll be able to answer customer questions at all times of the day or night, in any of a dozen languages, and with a personalization that was once only possible with an army of contact center agents.

But if you’re a contact center manager with a lot to think about, you probably don’t want to spend a bunch of time hiring an engineering team to get everything running smoothly. And, with Quiq, you don’t have to – you can leverage generative AI to supercharge your customer experience while leaving the technical details to us!

Schedule a demo to find out how we can bring this bleeding-edge technology into your contact center, without worrying about the nuts and bolts.

Footnotes
[1] It’s worth pointing out that we spent no time crafting the prompt, which was really basic: “I’m a product manager at a company building an enterprise SAAS tool that makes it easier to monitor system breaches and issues. Could you write me 2-3 journal entries from my target customer? I want to know more about the problems they’re trying to solve, their pain points, and why the products they’ve already tried are not working well.” With a little effort, you could probably get more specific complaints and more usable material.

Understanding the Risks of ChatGPT: What You Should Know

OpenAI’s ChatGPT burst onto the scene less than a year ago and has already seen use in marketing, education, software development, and at least a dozen other industries.

Of particular interest to us is how ChatGPT is being used in contact centers. Though it’s already revolutionizing contact centers by making junior agents vastly more productive and easing the burnout contributing to turnover, there are nevertheless many issues that a contact center manager needs to look out for.

That will be our focus today.

What are the Risks of Using ChatGPT?

In the following few sections, we’ll detail some of the risks of using ChatGPT. That way, you can deploy ChatGPT or another large language model with the confidence born of knowing what the job entails.

Hallucinations and Confabulations

By far the most well-known failure mode of ChatGPT is its tendency to simply invent new information. Stories abound of the model making up citations, peer-reviewed papers, researchers, URLs, and more. To take a recent well-publicized example, ChatGPT accused law professor Jonathan Turley of having behaved inappropriately with some of his students during a trip to Alaska.

The only problem was that Turley had never been to Alaska with any of his students, and the alleged Washington Post story which ChatGPT claimed had reported these facts had also been created out of whole cloth.

This is certainly a problem in general, but it’s especially worrying for contact center managers who may increasingly come to rely on ChatGPT to answer questions or to help resolve customer issues.

To those not steeped in the underlying technical details, it can be hard to grok why a language model will hallucinate in this way. The answer is: it’s an artifact of how large language models are trained.

ChatGPT learns how to output tokens by being trained on huge amounts of human-generated textual data. It will, for example, see the first sentences in a paragraph, and then try to output the text that completes the paragraph. Take the opening line of J.D. Salinger’s The Catcher in the Rye: the model might be shown the first half of the passage and asked to generate the rest itself:

“If you really want to hear about it, the first thing you’ll probably want to know is where I was born, and what my lousy childhood was like, and how my parents were occupied and all before they had me, and all that David Copperfield kind of crap, but I don’t feel like going into it, if you want to know the truth.”

Over many training runs, a large language model will get better and better at this kind of autocompletion work, until eventually it gets to the level it’s at today.

But ChatGPT has no native fact-checking abilities – it sees text and outputs what it thinks is the most likely sequence of additional words. Since it sees URLs, papers, citations, etc., during its training, it will sometimes include those in the text it generates, whether or not they’re appropriate (or even real).
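
The next-token objective described above can be sketched with a toy bigram model. This is a deliberately simplified stand-in for what a neural network actually does (real LLMs predict over subword tokens with learned weights, not raw counts), and the tiny corpus here is invented for illustration:

```python
from collections import Counter, defaultdict

# A bigram model counts which word follows which, then "completes" text
# by repeatedly emitting the most frequent successor. The core objective
# -- predict the next token -- is the same one LLMs are trained on.
corpus = (
    "the model sees text and outputs the most likely next word "
    "the model sees text and predicts the next word"
).split()

successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def complete(prompt_word: str, length: int = 4) -> list:
    """Greedily extend a prompt by picking the most common successor."""
    out = [prompt_word]
    for _ in range(length):
        options = successors.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])
    return out

print(complete("model"))
```

Notice that the model happily strings plausible words together with no notion of whether the result is true; scaled up billions of times, that is exactly where hallucinated citations come from.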

Privacy

Another ongoing risk of using ChatGPT is the fact that it could potentially expose sensitive or private information. As things stand, OpenAI, the creators of ChatGPT, offer no robust privacy guarantees for any information placed into a prompt.

If you are trying to do something like named entity recognition or summarization on real people’s data, there’s a chance that it might be seen by someone at OpenAI as part of a review process. Alternatively, it might be incorporated into future training runs. Either way, the results could be disastrous.

But prompt contents are not the only information OpenAI collects when you use ChatGPT. Your timezone, browser type and IP address, cookies, account information, and any communication you have with OpenAI’s support team are all collected, among other things.

In the information age we’ve become used to knowing that big companies are mining and profiting off the data we generate, but given how powerful ChatGPT is, and how ubiquitous it’s becoming, it’s worth being extra careful with the information you give its creators. If you feed it private customer data and someone finds out, that will be damaging to your brand.

Bias in Model Output

By now, it’s pretty common knowledge that machine learning models can be biased.

If you feed a large language model a huge amount of text data in which doctors are usually men and nurses are usually women, for example, the model will associate “doctor” with “maleness” and “nurse” with “femaleness.”

This is generally an artifact of the data the models were trained on, and is not due to any malfeasance on the part of the engineers. This does not, however, make it any less problematic.

There are some clever data manipulation techniques that are able to go a long way toward minimizing or even eliminating these biases, though they’re beyond the scope of this article. What contact center managers need to do is be aware of this problem, and establish monitoring and quality-control checkpoints in their workflow to identify and correct biased output in their language models.
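
As a very rough illustration of what one such monitoring checkpoint might look like, here is a sketch that flags skewed pronoun usage in a batch of generated replies. The sample texts and the 20% tolerance are made-up assumptions, and a production pipeline would use a dedicated fairness toolkit rather than keyword counting:

```python
import re
from collections import Counter

MALE = {"he", "him", "his"}
FEMALE = {"she", "her", "hers"}

def pronoun_skew(outputs: list) -> float:
    """Return the fraction of gendered pronouns that are male (0.5 = balanced)."""
    counts = Counter()
    for text in outputs:
        for word in re.findall(r"[a-z']+", text.lower()):
            if word in MALE:
                counts["male"] += 1
            elif word in FEMALE:
                counts["female"] += 1
    total = counts["male"] + counts["female"]
    return counts["male"] / total if total else 0.5

# Hypothetical batch of model outputs to audit.
samples = [
    "The doctor said he would review the chart.",
    "The doctor explained his diagnosis.",
    "The doctor said he was unavailable.",
    "The nurse said she would call back.",
]

skew = pronoun_skew(samples)
if abs(skew - 0.5) > 0.2:  # flag batches that lean heavily one way
    print(f"review flagged: male-pronoun share = {skew:.2f}")
```

The point isn’t the specific heuristic; it’s that bias checks should be an automated, recurring part of the workflow rather than an occasional manual glance.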

Issues Around Intellectual Property

Earlier, we briefly described the training process for a large language model like ChatGPT (you can find much more detail here.) One thing to note is that the model doesn’t provide any sort of citations for its output, nor any details as to how it was generated.

This has raised a number of thorny questions around copyright. If a model has ingested large amounts of information from the internet, including articles, books, forum posts, and much more, is there a sense in which it has violated someone’s copyright? What about if it’s an image-generation model trained on a database of Getty Images?

By and large, we tend to think this is the sort of issue that isn’t likely to plague contact center managers too much. It’s more likely to be a problem for, say, songwriters who might be inadvertently drawing on the work of other artists.

Nevertheless, a piece on the potential risks of ChatGPT wouldn’t be complete without a section on this emerging problem, and it’s certainly something that you should be monitoring in the background in your capacity as a manager.

Failure to Disclose the Use of LLMs

Finally, there has been a growing tendency to make it plain that LLMs have been used in drafting an article or a contract, if indeed they were part of the process. To the best of our knowledge, there are not yet any laws in place mandating that this has to be done, but it might be wise to include a disclaimer somewhere if large language models are being used consistently in your workflow. [1]

That having been said, it’s also important to exercise proactive judgment in deciding whether an LLM is appropriate for a given task in the first place. In early 2023, Peabody College at Vanderbilt University landed in hot water when it disclosed that it had used ChatGPT to draft an email about a mass shooting that had taken place at Michigan State.

People may not care much about whether their search recommendations were generated by a machine, but it would appear that some things are still best expressed by a human heart.

Again, this is unlikely to be something that a contact center manager faces much in her day-to-day life, but incidents like these are worth understanding as you decide how and when to use advanced language models.


Mitigating the Risks of ChatGPT

From the moment it was released, it was clear that ChatGPT and other large language models were going to change the way contact centers run. They’re already helping agents answer more queries, utilize knowledge spread throughout the center, and automate substantial portions of work that were once the purview of human beings.

Still, challenges remain. ChatGPT will plainly make things up, and can be biased or harmful in its text. Private information fed into its interface will be visible to OpenAI, and there’s also the wider danger of copyright infringement.

Many of these issues don’t have simple solutions, and will instead require a contact center manager to exercise both caution and continual diligence. But one place where she can make her life much easier is by using a powerful, out-of-the-box solution like the Quiq conversational AI platform.

While you’re worrying about the myriad risks of using ChatGPT, you don’t want to be contending with a million little technical details as well, so schedule a demo with us to find out how our technology can bring cutting-edge language models to your contact center, without the headache.

Footnotes
[1] NOTE: This is not legal advice.


The Ongoing Management of an LLM Assistant

Technologies like large language models (LLMs) are amazing at rapidly generating polite text that helps solve a problem or answer a question, so they’re a great fit for the work done at contact centers.

But this doesn’t mean that using them is trivial or easy. There are many challenges associated with the ongoing management of an LLM assistant, including hallucinations and the emergence of bad behavior – and that’s not even mentioning the engineering prowess required to fine-tune and monitor these systems.

All of this must be borne in mind by contact center managers, and our aim today is to facilitate this process.

We’ll provide broad context by talking about some of the basic ways in which large language models are being used in business, discuss setting up an LLM assistant, and then enumerate some of the specific steps that need to be taken to use them properly.

Let’s go!

How Are LLMs Being Used in Science and Business?

First, let’s adumbrate some of the ways in which large language models are being utilized on the ground.

The most obvious way is by acting as a generative AI assistant. One of the things that so stunned early users of ChatGPT was its remarkable breadth of capability. It could draft blog posts and web copy, translate between languages, and write or explain code.

This alone makes it an amazing tool, but it has since become obvious that it’s useful for quite a lot more.

One thing that businesses have been experimenting with is fine-tuning large language models like ChatGPT over their own documentation, turning it into a simple interface by which you can ask questions about your materials.

It’s hard to quantify precisely how much time contact center agents, engineers, or other people spend hunting around for the answer to a question, but it’s surely quite a lot. What if instead you could just, y’know, ask for what you want, in the same way that you would a human being?

Well, ChatGPT is a long way from being a full person, but when properly trained it can come close where question-answering is concerned.

Stepping back a little bit, LLMs can be prompt engineered into a number of useful behaviors, all of which redound to the benefit of the contact centers which use them. Imagine having an infinitely patient Socratic tutor that could help new agents get up to speed on your product and process, or crafting it into a powerful tool for brainstorming new product designs.

There have also been some promising attempts to extend the functionality of LLMs by making them more agentic – that is, by embedding them in systems that allow them to carry out more open-ended projects. AutoGPT, for example, pairs an LLM with a separate bot that hits the LLM with a chain of queries in the pursuit of some goal.

AssistGPT goes even further in the quest to augment LLMs by integrating them with a set of tools that allow them to achieve objectives involving images and audio in addition to text.

How to Set Up An LLM Assistant

Next, let’s turn to a discussion of how to set up an LLM assistant. Covering this topic fully is well beyond the scope of this article, but we can make some broad comments that will nevertheless be useful for contact center managers.

First, there’s the question of which large language model you should use. In the beginning, ChatGPT was pretty much the only foundation model on offer. Today, however, that situation has changed, and there are now foundation models from Anthropic, Meta, and many other companies.

One of the biggest early decisions you’ll have to make is whether you want to try and use an open-source model (for which the code and the model weights are freely available) or a closed-source model (for which they are not).

If you go the closed-source route you’ll almost certainly be hitting the model over an API, feeding it your queries and getting its responses back. This is orders of magnitude simpler than provisioning an open-source model, but it means that you’ll also be beholden to the whims of some other company’s engineering team. They may update the model in unexpected ways, or simply go bankrupt, and you’ll be left with no recourse.
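
To make the closed-source path concrete, “hitting the model over an API” means assembling a JSON payload and POSTing it to the provider’s HTTPS endpoint. The sketch below follows the general shape of OpenAI’s chat-completions API, but only builds the request body rather than sending it; the model name and system prompt are placeholders:

```python
import json

def build_chat_request(user_message: str, model: str = "gpt-3.5-turbo") -> str:
    """Assemble the JSON body for a chat-completion style API call."""
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful support agent."},
            {"role": "user", "content": user_message},
        ],
        "temperature": 0.2,  # keep answers conservative for support use
    }
    return json.dumps(payload)

body = build_chat_request("How do I reset my password?")
print(body)
```

Everything beyond building and sending that payload (model updates, serving, scaling) happens on the provider’s side, which is precisely both the convenience and the risk.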

Using an open-source alternative, of course, means grabbing the other horn of the dilemma. You’ll have visibility into how the model works and will be free to modify it as you see fit, but this won’t be worth much unless you’re willing to devote engineering hours to the task.

Then, there’s the question of fine-tuning large language models. While ChatGPT and LLMs more generally are quite good on their own, having them answer questions about your product or respond in particular ways means modifying their behavior somehow.

Broadly speaking, there are two ways of doing this, which we’ve mentioned throughout: proper fine-tuning, and prompt engineering. Let’s dig into the differences.

Fine-tuning means showing the model many (i.e. several hundred) examples of the behaviors you want to see, which changes its internal weights and biases it towards those behaviors in the future.

Prompt engineering, on the other hand, refers to carefully structuring your prompts to elicit the desired behavior. These LLMs can be surprisingly sensitive to little details in the instructions they’re provided, and prompt engineers know how to phrase their requests in just the right way to get what they need.

There is also some middle ground between these approaches. “One-shot learning” is a form of prompt engineering in which the prompt contains a single example of the desired behavior, while “few-shot learning” refers to including a small handful of examples (often between three and five).
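
Here’s a minimal illustration of few-shot prompting for a contact-center task. The classification labels and example messages are invented for demonstration; the pattern — instructions, a handful of worked examples, then the new input — is the general one:

```python
# Hypothetical labeled examples used as the "few shots" in the prompt.
EXAMPLES = [
    ("My package never arrived.", "shipping"),
    ("I was charged twice this month.", "billing"),
    ("The app crashes when I log in.", "technical"),
]

def build_prompt(query: str) -> str:
    """Assemble a few-shot classification prompt ending at the blank to fill in."""
    lines = ["Classify each customer message as shipping, billing, or technical.", ""]
    for message, label in EXAMPLES:
        lines.append(f"Message: {message}")
        lines.append(f"Category: {label}")
        lines.append("")
    lines.append(f"Message: {query}")
    lines.append("Category:")
    return "\n".join(lines)

print(build_prompt("Where is my refund?"))
```

Because the prompt ends mid-pattern, the model’s most natural continuation is the label itself — no weight updates required.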

Contact center managers thinking about using LLMs will need to think about these implementation details. If you plan on only lightly using ChatGPT in your contact center, a basic course on prompt engineering might be all you need. If you plan on making it an integral part of your organization, however, that most likely means a fine-tuning pipeline and serious technical investment.

The Ongoing Management of an LLM

Having said all this, we can now turn to the day-to-day details of managing an LLM assistant.

Monitoring the Performance of an LLM

First, you’ll need to continuously monitor the model. As hard as it may be to believe given how perfect ChatGPT’s output often is, there isn’t a person somewhere typing the responses. ChatGPT is very prone to hallucinations, in which it simply makes up information, and LLMs more generally can sometimes fall into using harmful or abusive language if they’re prompted incorrectly.

This can be damaging to your brand, so it’s important that you keep an eye on the language created by the LLMs your contact center is using.

And of course, not even LLMs can obviate the need to track the all-important key performance indicators. So far, there’s been one major study on generative AI in contact centers, which found that it increased productivity and reduced turnover, but you’ll still want to measure customer satisfaction, average handle time, etc.

There’s always a temptation to jump on a shiny new technology (remember the blockchain?), but you should only be using LLMs if they actually make your contact center more productive, and the only way you can assess that is by tracking your figures.
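
Tracking those figures needn’t be complicated. A sketch over hypothetical interaction records (the field names and values are invented for illustration) might look like:

```python
# Hypothetical interaction log; a real one would come from your CCaaS platform.
interactions = [
    {"handle_time_sec": 240, "csat": 5, "resolved": True},
    {"handle_time_sec": 600, "csat": 3, "resolved": False},
    {"handle_time_sec": 180, "csat": 4, "resolved": True},
    {"handle_time_sec": 300, "csat": 5, "resolved": True},
]

def kpis(records: list) -> dict:
    """Compute average handle time, average CSAT, and resolution rate."""
    n = len(records)
    return {
        "avg_handle_time_sec": sum(r["handle_time_sec"] for r in records) / n,
        "avg_csat": sum(r["csat"] for r in records) / n,
        "resolution_rate": sum(r["resolved"] for r in records) / n,
    }

print(kpis(interactions))
```

Run the same computation over pre- and post-LLM periods and you have the start of an honest ROI comparison.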

Iterative Fine-Tuning and Training

We’ve already had a few things to say about fine-tuning and the related discipline of prompt engineering, and here we’ll build on those preliminary comments.

The big thing to bear in mind is that fine-tuning a large language model is not a one-and-done kind of endeavor. You’ll find that your model’s behavior will drift over time (the technical term is “model degradation”), and this means you will likely have to periodically re-train it.
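
One simple way to notice degradation is to compare a recent window of quality ratings (say, reviewer scores on model answers) against a baseline window. The scores and the 10% tolerance below are illustrative assumptions:

```python
def needs_retraining(baseline: list, recent: list, tolerance: float = 0.10) -> bool:
    """Flag re-training when recent average quality drops past the tolerance."""
    base_avg = sum(baseline) / len(baseline)
    recent_avg = sum(recent) / len(recent)
    return recent_avg < base_avg * (1 - tolerance)

# Hypothetical reviewer scores from two monitoring windows.
baseline_scores = [0.92, 0.88, 0.90, 0.91]
recent_scores = [0.78, 0.74, 0.80, 0.76]

if needs_retraining(baseline_scores, recent_scores):
    print("quality drop detected: schedule a fine-tuning refresh")
```

A real monitoring setup would also control for changes in the incoming query mix, but even a crude windowed comparison beats discovering drift from customer complaints.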

It’s also common to offer the model “feedback”, i.e. by ranking its responses or indicating when you did or did not like a particular output. You’ve probably heard of reinforcement learning from human feedback (RLHF), which is one version of this process, but there are others you can use.

Quality Assurance and Oversight

A related point is that your LLMs will need consistent oversight. They’re not going to voluntarily improve on their own (they’re algorithms with no personal initiative to speak of), so you’ll need to check in routinely to make sure they’re performing well and that your agents are using them responsibly.

There are many parts to this, including checks on the model’s outputs and an audit process that allows you to track down any issues. If you suddenly see a decline in performance, for example, you’ll need to quickly figure out whether it’s isolated to one agent or part of a larger pattern. If it’s the former, was it a random aberration, or did the agent go “off script” in a way that caused the model to behave poorly?

Take another scenario, in which an end-user was shown inappropriate text generated by an LLM. In this situation, you’ll need to take a deeper look at your process. If there were agents interacting with this model, ask them why they failed to spot the problematic text and stop it from being shown to a customer. Or, if it came from a mostly-automated part of your tech stack, you need to uncover why your filters failed to catch it, and perhaps think about keeping humans more in the loop.

The Future of LLM Assistants

Though the future is far from certain, we tend to think that LLMs have left Pandora’s box for good. They’re incredibly powerful tools which are poised to transform how contact centers and other enterprises operate, and experiments so far have been very promising; for all these reasons, we expect that LLMs will become a steadily more important part of the economy going forward.

That said, the ongoing management of an LLM assistant is far from trivial. You need to be aware at all times of how your model is performing and how your agents are using it. Though it can make your contact center vastly more productive, it can also lead to problems if you’re not careful.

That’s where the Quiq platform comes in. Our conversational AI is some of the best that can be found anywhere, able to facilitate customer interactions, automate text-message follow-ups, and much more. If you’re excited by the possibilities of generative AI but daunted by the prospect of figuring out how TPUs and GPUs are different, schedule a demo with us today.
