
AI Benchmarking Best Practices: A Framework for CX Leaders

Is your AI investment delivering provable value, or is it still operating like a black box?

In today’s rapidly evolving customer experience (CX) landscape, where Artificial Intelligence (AI) promises transformative results, like decreasing service costs by up to 30% and yielding an average ROI of $1.41 for every dollar spent, simply implementing AI isn’t enough. You need to measure its impact. AI benchmarking holds the key.

Effective AI benchmarking is critical for evaluating progress, sustaining momentum, and refining your AI initiatives. By comparing performance internally and against industry standards, organizations ensure their strategies are competitive, effective, and aligned with evolving customer expectations. Robust benchmarking also builds credibility by quantifying success and providing a clear narrative for stakeholders. This is vital, as industry projections suggest AI could handle a significant majority of customer interactions, potentially between 70% (per Gartner) and 95% by 2025.

This article cuts through the complexity to deliver actionable AI benchmarking strategies specifically designed for CX professionals who need to demonstrate tangible results. Whether you’re just beginning your AI journey or looking to optimize existing implementations, you’ll learn how to develop an AI benchmarking framework aligned with your strategic goals. I’ll walk you through selecting the right metrics, establishing meaningful baselines, and creating a continuous improvement cycle that drives CX excellence. By the end, you’ll be equipped with practical tools to quantify AI’s impact, turning data into compelling narratives that secure stakeholder buy-in and position your organization as a CX leader. Let’s get started.

The Role of Benchmarking in AI-Driven CX

AI benchmarking goes beyond measuring outcomes; it establishes a clear context for performance. It highlights where AI initiatives deliver value and identifies gaps that require attention. In an era where AI investment is accelerating (98% of leaders plan to boost AI spending in 2025), benchmarking is vital for several reasons:

  • Identifying Best Practices: Learning from internal successes or external examples to guide future improvements.
  • Gaining Buy-In: Demonstrating progress and ROI with data-driven insights helps secure support from leadership and operational teams.
  • Driving Innovation: Comparing results against industry leaders inspires new strategies and reinforces a commitment to continuous improvement.

Understanding why AI benchmarking matters sets the stage. Now, let’s look at what top performance actually looks like in the current landscape.

What Good Looks Like in 2025

Based on current AI benchmarks and successful implementations, “good” AI-powered CX in 2025 isn’t just about isolated metrics. It’s about a holistic transformation that delivers significant, measurable value across the board. Here’s a snapshot:

Substantial Automation & Efficiency

Leading organizations achieve high AI Deflection Rates, with virtual agents fully resolving significant portions of inquiries without human intervention. Reported rates vary widely based on industry and use case, often ranging from 43% to over 75%.

This translates to significant reductions in Average Handle Time (AHT), sometimes resulting in 5x faster resolutions, and major Agent Productivity gains, often between 15-30%. Operational costs see marked decreases, potentially reaching the significant levels mentioned earlier.

Enhanced Customer Experience

Critically, efficiency gains do not come at the expense of satisfaction. Top performers maintain or even improve CSAT scores, often seeing lifts like Motel Rocks’ 9.44-point increase or Quiq clients like Accor achieving 89% CSAT. This is achieved through faster responses, 24/7 availability, increased personalization, and effective Human-AI Orchestration, ensuring empathy for complex issues. Improved First Contact Resolution (FCR) is key, with reductions in repeat contacts of 25-30% reported.

Tangible Business Outcomes & ROI

Success is measured in clear financial terms. Organizations demonstrate strong ROI, often reaching the average levels noted earlier, and achieve significant cost savings (Gartner projects $80 billion globally by 2026). Furthermore, AI is leveraged for revenue growth through Conversational Commerce, turning service interactions into sales opportunities, as seen with Klarna projecting $40M in additional profit, or Quiq clients attributing 10% of daily sales to chat.

Strategic & Integrated Approach

Excellence involves strategically deploying AI within asynchronous messaging channels (SMS, web chat, etc.) favored by customers. It requires robust AI Governance, seamless integration with existing systems, continuous iteration based on data, and commitment to agent training.

Leveraging Advanced, Accurate AI

Successful implementations increasingly use sophisticated conversational AI, often incorporating Large Language Models (LLMs) enhanced with techniques like Retrieval-Augmented Generation (RAG) for factual accuracy grounded in company knowledge. Agent-Assist tools are widely used to empower human agents.
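To make the RAG pattern concrete, here is a minimal, illustrative Python sketch: retrieve the knowledge-base snippet most relevant to a question, then build a prompt grounded in it. The toy word-overlap scorer and in-memory knowledge base are stand-ins for a real embedding model and vector store; everything here is an assumption for illustration.

```python
# Minimal, illustrative RAG sketch. The word-overlap "retriever" stands
# in for a real embedding/vector-store lookup.

def retrieve(question: str, knowledge_base: list[str]) -> str:
    """Pick the snippet sharing the most words with the question (toy scorer)."""
    q_words = set(question.lower().split())
    return max(knowledge_base, key=lambda doc: len(q_words & set(doc.lower().split())))

def build_grounded_prompt(question: str, knowledge_base: list[str]) -> str:
    """Constrain the (hypothetical) model to answer only from retrieved context."""
    context = retrieve(question, knowledge_base)
    return (
        "Answer using ONLY the context below.\n"
        f"Context: {context}\n"
        f"Question: {question}"
    )

kb = [
    "Returns are accepted within 30 days with a receipt.",
    "Standard shipping takes 3-5 business days.",
]
prompt = build_grounded_prompt("How long does shipping take?", kb)
```

In production, the grounded prompt would then be sent to the LLM; anchoring the answer in retrieved company knowledge is what curbs hallucinated responses.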

“In essence, ‘good’ in 2025 means AI is deeply embedded, driving efficiency, enhancing customer satisfaction, delivering clear financial returns, and strategically positioning the organization for future innovation…” – Greg Dreyfus, Head of Solution Consulting at Quiq

Achieving this level of success requires a structured approach to measurement. Let’s look at the different ways you can benchmark your progress.

Types of AI Benchmarking

Internal Benchmarking

Focuses on comparing AI-driven performance within the organization to establish a baseline and track improvements over time.

  • Example: Compare resolution times and CSAT scores for AI versus human-handled inquiries.
  • Benefits: Highlights immediate wins, uncovers inefficiencies, and ensures alignment with goals.

Competitive Benchmarking

Involves comparing your organization’s metrics against direct competitors.

  • Example: Evaluate how your AI adoption impacts NPS or cost-per-interaction relative to others in your sector.
  • Benefits: Identifies competitive gaps or advantages, informs positioning strategies.

Industry Benchmarking

Assesses performance against general industry standards and best practices.

  • Example: Use analyst reports to compare your productivity gains (e.g., aiming for the 15-30% range) with sector leaders.
  • Benefits: Provides a macro view, uncovers broad trends for innovation.

Customer-Centric Benchmarking

Focuses on measuring outcomes that directly impact customer perceptions and loyalty.

  • Example: Compare Customer Effort Scores (CES) before and after implementing AI.
  • Benefits: Ensures CX initiatives genuinely improve the customer experience.

With these benchmarking types in mind, how do you build a practical framework for your organization?

Building an AI Benchmarking Framework

1. Establish AI Governance & Define Scope (Foundation)

Before deploying AI widely, create a clear AI Governance framework. Assemble a cross-functional team (CX, IT, Legal, Compliance) to define responsible usage policies, ethical guardrails, and risk protocols. Determine which metrics are most relevant to your goals and tie them to business outcomes like cost reduction, revenue growth, or retention.

2. Set Benchmarks at Multiple Levels

Establish benchmarks evaluating:

  • Operational Impact: FCR, Deflection Rate, AHT, Agent Productivity.
  • Customer Impact: CSAT, NPS, CES, Churn.
  • Financial Impact: ROI, Cost Savings, Revenue Influence.
  • AI Agent Mechanics: Evaluate core components like routing accuracy (did the right skill get called?), skill/tool correctness (did the skill/tool execute properly?).
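As a sketch of what multi-level benchmarking can look like in practice, here is a small Python example that computes operational, customer, and financial metrics from an interaction log. The field names and sample values are invented for illustration, not a prescribed schema.

```python
# Hypothetical sketch: computing benchmarks at the operational, customer,
# and financial levels from a small interaction log.

interactions = [
    {"first_contact_resolution": True,  "handled_by_ai": True,  "handle_secs": 120, "csat": 5, "cost": 0.40},
    {"first_contact_resolution": False, "handled_by_ai": False, "handle_secs": 540, "csat": 3, "cost": 6.50},
    {"first_contact_resolution": True,  "handled_by_ai": True,  "handle_secs": 90,  "csat": 4, "cost": 0.40},
    {"first_contact_resolution": True,  "handled_by_ai": False, "handle_secs": 300, "csat": 4, "cost": 6.50},
]

n = len(interactions)
benchmarks = {
    # Operational impact
    "fcr_rate": sum(i["first_contact_resolution"] for i in interactions) / n,
    "deflection_rate": sum(i["handled_by_ai"] for i in interactions) / n,
    "aht_secs": sum(i["handle_secs"] for i in interactions) / n,
    # Customer impact: CSAT reported as the share of 4s and 5s on a 1-5 scale
    "csat_pct": 100 * sum(i["csat"] >= 4 for i in interactions) / n,
    # Financial impact
    "cost_per_contact": sum(i["cost"] for i in interactions) / n,
}
```

The same log feeds all three levels, which is the point: a single consistent data source keeps operational, customer, and financial benchmarks comparable over time.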

3. Leverage Tools and Technology

Use appropriate tools to gather and analyze data efficiently. This includes:

  • Analytics Platforms: To track KPIs and visualize trends.
  • Customer Feedback Tools: For CSAT, NPS, CES surveys.
  • CX Automation Platforms (like Quiq): These often include built-in reporting and facilitate AI deployment, especially in asynchronous messaging channels.
  • Ensure robust integration with existing systems (CRMs, order management, etc.) to avoid data silos and enable personalized experiences.

4. Regularly Review and Update Benchmarks

Metrics and goals must evolve as AI capabilities mature. Schedule regular reviews (e.g., quarterly) to assess performance and adjust strategies. Stay current with industry reports, as benchmarks change rapidly.

Take our free AI readiness assessment to discover where you are on the AI maturity path.

Now that the framework is outlined, let’s dive deeper into the specific metrics you should be tracking, along with current industry benchmarks.

Key Metrics for AI Benchmarking in CX (with 2024-2025 Benchmarks)

Here are top metrics across key categories, updated with recent industry benchmarks:

1. AI Performance & Adoption Metrics

  • AI Deflection / Containment Rate: Percentage of inquiries handled or fully resolved by AI without human intervention.
    • Benchmark: Highly variable based on industry, use case complexity, and AI maturity.
      • Commonly reported rates range from 43% (e.g., Motel Rocks) up to 70-75% for specific sectors (e.g., AirAsia, some telcos).
      • For routine, high-volume tasks, AI may handle up to 80%.
      • Top-performing implementations can achieve even higher containment, such as Quiq client BODi® reporting 88%.
  • Self-Service Resolution Rate: Percentage of customer issues fully resolved via AI self-service without any human agent involvement.
    • Benchmark: Varies; examples include Sony at 15.9% and Quiq client Molekule achieving a 60% resolution rate for interactions handled via self-service AI. Industry averages are still evolving (roughly 20% today, with projections trending higher).
  • Agent Assist Utilization: Frequency with which agents leverage AI tools. Crucial for measuring adoption of augmentation tools.
  • AI Adoption / Interaction Handling: Percentage of total interactions involving AI.
    • Benchmark: AI projected to handle 70% (Gartner) to 95% of interactions by 2025.
  • Task Convergence / Reliability: Measures the consistency and predictability of the AI agent in completing a specific task within an expected number of steps or interactions. High convergence indicates a more reliable and less error-prone process.
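One subtlety worth making explicit: an AI's resolution rate applies only to the share of contacts it actually handles, so overall containment is the product of the two. A short worked example, with assumed numbers:

```python
# Worked example with assumed numbers: containment (the share of ALL
# contacts the AI fully resolves) is the product of how many contacts
# the AI handles and how many of those it resolves.

total_contacts = 10_000
ai_handled_share = 0.60    # assumed: 60% of contacts reach the AI first
ai_resolution_rate = 0.70  # assumed: AI fully resolves 70% of what it handles

contained = total_contacts * ai_handled_share * ai_resolution_rate
containment_rate = contained / total_contacts  # deflection across all contacts
escalated = total_contacts * ai_handled_share - contained  # passed to humans
```

So a 70% resolution rate on 60% of traffic yields 42% overall containment, which is why a strong AI resolution rate and a more modest deflection rate can coexist in the same report.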

2. Efficiency Metrics

  • Average Handle Time (AHT) Reduction: Decrease in average interaction time.
    • Benchmark: 25-30% range reported. Specifics: 27% (Agent Assist), 30% (Republic Services), 33-sec absolute drop (Camping World), 5x faster resolution (Klarna).
  • Agent Productivity Gain: Increase in agent efficiency (e.g., inquiries/hr).
    • Benchmark: Avg. 15-30% from GenAI. Agents using AI: +13.8% inquiries/hr. Camping World: +33% efficiency. Quiq client (National Furniture Retailer): 33% fewer escalations.
  • First-Response Time (FRT): Speed of initial reply. AI excels here, delivering instant answers.
  • Escalation Rate: Percentage of AI interactions needing human help. Lower is generally better, though some use cases appropriately require human escalation.

3. Customer Experience Metrics

  • First-Contact Resolution (FCR): Percentage of issues resolved on the first interaction.
    • Benchmark: AI contributes significantly to improving FCR by reducing repeat contacts.
    • Examples of FCR Improvement: Klarna reported 25% fewer repeat inquiries (effectively a +25% FCR impact); Republic Services saw 30% fewer repeat calls.
    • Note: This differs from AI-specific resolution rates. For instance, while Quiq client Molekule achieved a 60% AI self-service resolution rate for the contacts handled by AI, the impact on overall FCR depends on the percentage of total contacts handled by AI.
  • CSAT Lift / Score: Change in customer satisfaction.
    • Benchmark: Often maintained or improved. Klarna: parity with humans. Motel Rocks: +9.44 points. Any AI use: +22.3% avg. lift. Quiq clients: Accor (89%), BODi® (75%), Molekule (+42% lift).
  • Customer Effort Score (CES): Measures ease of resolution. Lower effort = higher loyalty.
  • Net Promoter Score (NPS): Likelihood to recommend.

4. Financial Metrics

  • Cost Per Contact / Cost-to-Serve Reduction: Decrease in interaction handling cost.
    • Benchmark: Reductions align with AI’s potential for significant operational savings, potentially reaching the 30% mark mentioned previously. Gartner projects $80B in savings globally by 2026.
  • Return on Investment (ROI): Financial return from AI investment.
    • Benchmark: As highlighted earlier, the average ROI often reaches $1.41 per $1 spent, with 92% of early adopters seeing positive ROI.
  • Revenue Influence / Conversational Commerce: Added revenue via AI assistance.
    • Benchmark: Klarna: Projected +$40M profit. Retailers: 5-15% conversion lift. H&M: Higher AOV. Quiq clients: Accor (2x booking click-outs), National Furniture Retailer (10% daily sales via chat).
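To see how the ROI benchmark translates into dollar terms, here is a short worked example using the $1.41-per-$1 average cited earlier; the annual spend figure is hypothetical.

```python
# Worked example: translating the $1.41-per-$1 average ROI benchmark
# into dollars. The spend figure is hypothetical.

ai_spend = 250_000            # hypothetical annual AI investment ($)
return_per_dollar = 1.41      # average return per $1 spent (benchmark above)

total_return = ai_spend * return_per_dollar  # gross value delivered
net_gain = total_return - ai_spend           # value net of the investment
roi_pct = 100 * net_gain / ai_spend          # net ROI as a percentage of spend
```

On these assumptions, a $250K investment returns $352.5K gross, a $102.5K net gain (41% ROI): a simple framing that makes benchmark figures tangible for stakeholders.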

5. Operational Metrics

  • Error Reduction Rate: Decrease in mistakes vs. manual processes.
  • Training Time Reduction: Faster onboarding with Al tools.
  • Knowledge Creation Efficiency: Speed of turning interactions into reusable knowledge.

While these results are impressive, achieving them requires navigating potential pitfalls. Let’s examine the common challenges.

Common Challenges in AI Benchmarking and How to Overcome Them

While the benefits are clear, organizations face hurdles:

1. Accuracy and “Hallucinations”

  • Challenge: Generative AI can sometimes produce incorrect answers.
  • Solution: Implement RAG to ground AI responses in verified knowledge; use hybrid approaches; ensure human oversight.

2. Lack of Consistent Data

  • Challenge: Comparing performance requires standardized data collection.
  • Solution: Develop uniform data practices; use centralized dashboards; ensure robust integration with existing systems (CRM, etc.).

3. Bias and Fairness

  • Challenge: AI models can perpetuate biases.
  • Solution: Use diverse training data; continuously monitor outputs via observability (clear box); establish clear ethical guidelines; ensure human oversight.

4. Data Privacy and Security

  • Challenge: AI often needs sensitive data, increasing risks.
  • Solution: Ensure strict compliance (GDPR, CCPA); anonymize data; vet vendors; work with legal teams.

5. Benchmarking in a Rapidly Changing Landscape

  • Challenge: Benchmarks quickly become outdated.
  • Solution: Stay connected with analyst reports; update benchmarks regularly; focus on continuous improvement relative to your baseline.

6. Balancing Internal and External Comparisons

  • Challenge: Internal focus may miss competitive shifts.
  • Solution: Use internal benchmarks for initial wins; incorporate external insights as AI matures.

7. Change Management & Skills Gap

  • Challenge: Implementing AI requires organizational change and new skills.
  • Solution: Communicate clearly; invest in agent training/upskilling (empathy, complex problem-solving); position AI as augmentation; address job fears proactively.

8. Evaluating Multimodal Interactions

  • Challenge: Benchmarking AI that handles complex interactions involving voice, visuals, or other modalities requires specific metrics and approaches beyond text-based analysis (e.g., audio chunk analysis for voice agents).
  • Solution: Develop modality-specific evaluation criteria; ensure benchmarking tools can capture and analyze multimodal data; maintain focus on the overall user experience across modalities.

Download our comprehensive, 102-page guide on AI change management, AI-Ready CX: A Leader’s Guide for Change, Adoption, and Impact. Get the guide >

Continuous Improvement and Outcome-Based Optimization

Benchmarking is not a static report card; it’s a dynamic tool for driving ongoing refinement. Furthermore, consistent evaluation at multiple levels serves as a crucial diagnostic tool, enabling teams to more effectively debug issues and pinpoint root causes when performance deviates from expectations. Organizations must move beyond measurement to action. This involves:

  • Regularly analyzing gaps between current performance and benchmarks.
  • Establishing feedback loops: Use analytics, customer surveys, and agent input.
  • Iterating continuously: Use insights to update AI training, rules, and workflows. Treat AI as a product that requires ongoing improvement.
  • Focusing on outcomes: Evolve measurement beyond operational metrics to track key business outcomes (CSAT, LTV, retention, revenue).
  • Engaging cross-functional teams (including an AI governance team) to implement changes and oversee evolution.

Strategic Recommendations for CX Leaders

Based on 2024-2025 trends and AI benchmarks, consider these strategic steps:

  1. Prioritize Asynchronous Messaging Channels (0-6 Months Start): Embrace channels like web chat, SMS, WhatsApp, etc., where customers prefer to interact and AI integrates effectively. [Impacts: CSAT, Agent Productivity, Deflection Rate]. Quiq specializes in optimizing these channels.
  2. Implement AI Agent Deflection for Tier-1 (0-6 Months Start): Focus AI automation on high-volume, low-complexity inquiries first to achieve quick ROI and free up human agents. [Impacts: Deflection Rate, Cost Per Contact, AHT].
  3. Leverage Agent-Assist Tools (6-12 Months+): Augment human agents with AI suggestions, knowledge surfacing, and task automation. [Impacts: AHT, Agent Productivity, FCR, Training Time].
  4. Master Human-AI Orchestration (Ongoing): Design seamless handoffs between AI and humans, ensuring context is preserved. Define clear escalation rules. [Impacts: CSAT, FCR, Agent/Customer Experience]. Quiq’s platform excels at this.
  5. Invest in Data Integration & Agent Training (Ongoing): Break down data silos for a unified customer view. Upskill agents for complex issues and AI collaboration. [Impacts: Personalization, Agent Effectiveness, CSAT].
  6. Explore Conversational Commerce Responsibly (Ongoing): Use AI to offer relevant recommendations during service interactions, prioritizing problem-solving first. Track conversion and sentiment carefully. [Impacts: Revenue Influence, AOV, CSAT (if done well)]. Quiq supports this blend.
  7. Stay Ahead of Technology (Ongoing): Keep an eye on advancements like RAG for accuracy and Agentic AI for future autonomous task handling. [Impacts: Future-proofing, Accuracy, Advanced Automation].

The Path Forward

Implementing robust AI benchmarking is about embedding a culture of data-driven decision-making and continuous improvement within your CX organization. By setting clear goals, leveraging the right metrics, learning from both internal and external examples, and strategically applying AI through platforms designed for effective orchestration like Quiq, CX leaders can move beyond the hype.

You can demonstrate significant value, enhance customer loyalty, contain costs, and ultimately, drive tangible business results in the evolving landscape of AI-powered customer experience. The time to measure, refine, and prove the impact of your AI strategy is now.


Citations List

  1. “61 AI Customer Service Statistics in 2025.” Desk365.
  2. “Snowflake Research Reveals that 92% of Early Adopters See ROI from AI Investments.” Snowflake.
  3. “Generative AI in Customer Experience: Real Impact, Key Risks, and What’s Next.” Conectys.
  4. “Future of AI in Customer Service: Its Impact beyond 2025.” DevRev.
  5. “Call Center Reporting: Your Definitive Guide (2025).” CloudTalk. 
  6. “Elevating Customer Support in Healthcare.” Alvarez & Marsal.
  7. “Call Center Performance Metrics Examples for Success.” Call Criteria. 
  8. “Key Benchmarks Should You Target In 2025 for your Contact Center.” NobelBiz. 
  9. “10 Call Center Metrics to Track in 2025.” Call Criteria.
  10. “What Call Center Benchmarks Should You Target In 2025?” Nextiva. 
  11. “AI in Customer Service Statistics [January 2025].” Master of Code.
  12. “Superagency in the workplace: Empowering people to unlock AI’s full potential.” McKinsey.
  13. “5 AI Case Studies in Customer Service and Support.” VKTR.
  14. “The Evolving Role of AI in Customer Experience: Insights from Metrigy’s 2024-25 Study.” Metrigy.
  15. “Customer experience trends 2025: 6 analysts share their predictions.” CX Dive.
  16. “AI in Customer Service Market Report 2025-2030.” GlobeNewswire.
  17. “5 AI in CX trends for 2025.” CX Network.
  18. “How organizations are leveraging Generative AI to transform marketing.” Consultancy-ME.
  19. “IT and Technology Spending & Budgets for 2025: Trends & Forecasts.” Splunk.
  20. “How AI is elevating CX for financial services firms in 2025 and beyond.” CallMiner.
  21. “The Top 14 SaaS Trends Shaping the Future of Business in 2025.” Salesmate.
  22. “Real-world gen AI use cases from the world’s leading organizations.” Google Cloud Blog.
  23. “Phocuswright’s Travel Innovation and Technology Trends 2025.” Phocuswright.
  24. “50+ Eye-Popping Artificial Intelligence Statistics [2025].” Invoca.
  25. “51 Latest Call Center Statistics with Sources for 2025.” Enthu AI.
  26. “Artificial Intelligence Archives.” FutureCIO.
  27. “NLP vs LLM: Key Differences, Applications & Use Cases.” Openxcell.
  28. “LLM vs NLP: Understanding The Top Differences in 2025.” CMARIX.
  29. “Compare Lunary vs. Private LLM in 2025.” Slashdot.
  30. “Five Trends in AI and Data Science for 2025.” MIT Sloan Management Review.
  31. “Five Trends in AI and Data Science for 2025 From MIT Sloan Management Review.” PR Newswire.
  32. “5 Tech Trends to Watch in 2025.” Comcast Business.
  33. “Top 2025 Trends in Customer Service.” Computer Talk.
  34. “AI Governance Market Research 2025 – Global Forecast to 2029.” GlobeNewswire.
  35. “The state of AI: How organizations are rewiring to capture value.” McKinsey.
  36. “The 2025 CX Leaders Trends & Insights: Corporate Edition Report.” Execs In The Know. 
  37. “Predictions 2025: Tech Leaders Chase High Performance.” Forrester.
  38. “Management Leadership Archives.” FutureCIO.
  39. “Explore Gartner’s Top 10 Strategic Technology Trends for 2025.” Gartner.
  40. “Leadership and AI insights for 2025.” NC State MEM.
  41. “Tackling the Challenges and Opportunities of Generative AI in Financial Services.” Spring Labs.

Unlock Agent Potential with Quiq’s Real-Time Agent Assist Capabilities

Customer service is evolving, and with it, the demands placed on service agents are rapidly increasing. From managing complex inquiries to delivering personalized, high-quality customer experiences, agents are under constant pressure to perform at their best. This is where Quiq’s Real-Time Agent Assist comes into play. With AI-driven insights, real-time guidance, and cutting-edge automation, this powerful tool doesn’t just support agents—it transforms them into top performers.

In this blog, we’ll explore precisely how Quiq’s real-time agent assist capabilities—part of our overall AI contact center offering—can revolutionize your customer service operations by boosting efficiency, reducing costs, and delighting customers.

Transform agent productivity with real-time AI insights

Agents are at the heart of your customer interactions, and giving them the tools they need to succeed can make all the difference. Quiq’s real-time agent assist AI is designed to empower agents with in-the-moment guidance and actionable insights during live interactions. These agent tools mean faster resolutions, greater confidence, and improved productivity for your team.

With Quiq, agents no longer have to second-guess their responses or scramble to find the right information. Instead, AI steps in to provide precise recommendations and cues at just the right time.

Take action today
Experience the future of customer service firsthand. Get a demo of Quiq’s real-time agent assist offering today and see how it can transform your support team.

AI-powered efficiency for every role, every conversation

Whether it’s advising agents on complex issues, streamlining onboarding, or cutting operational costs, Quiq’s real-time agent assist offering delivers impactful benefits across the board.

Here’s how it works for your business:

1. Optimize decision-making

Equip your agents with real-time insights and recommended actions, enabling them to resolve issues with precision. Whether handling a challenging customer inquiry or upselling products, Quiq ensures that agents make the best decisions in every interaction. Agents get real-time suggested responses as the conversation progresses, which leverage the same underlying knowledge and systems that power AI agents. Think: knowledge bases, product catalogs, CRM data, and any other data sources that might be helpful in the context of agentic AI systems. AI Assistants don’t just suggest responses; they can also act on an agent’s behalf—like automatically starting a warranty claim, or updating a customer’s flight, without making the agent do the work manually.

2. Streamline training and onboarding

AI-powered coaching is a game changer for new agents. With Quiq, your team gains access to on-the-job guidance that accelerates learning. New hires ramp up faster, while experienced agents refine their skills, creating a consistently high-performing team. New agents get the same great suggested responses and actions that a high-performing human or AI agent would have.

It makes a brand-new agent as good as an AI agent, because they’re working off the same datasets, integrations and responses.

3. Reduce operational costs

Achieve more with fewer resources. Quiq automates routine inquiries and streamlines workflows, freeing up your agents to focus on high-value interactions. This means fewer hiring needs and a leaner operational model. In addition, AI Assistants can gather extra key pieces of data during a conversation, add them to specific ticket fields or append them to a case or conversation, reducing the amount of manual entry an agent has to do.

4. Enhance customer satisfaction

Quiq’s agent-facing AI empowers agents to provide accurate, instant, and personalized support, leading to faster resolutions and happier customers. The result? Higher CSAT scores and stronger customer loyalty. This is done through a combination of response suggestions, real-time feedback, and taking action on the agent’s behalf.

5. Insights into agent performance

Quiq’s robust agent analytics give contact center leaders deep insight into how human agents are performing. In our experience, this is critical to ensuring that real-time agent assistance does its job and helps agents in the most effective way possible.

Watch this video to learn how it works >

Key features of real-time agent assist with Quiq

At the core of Quiq’s real-time agent assist lies a suite of innovative features designed for seamless customer interactions. See it in action:

1. In-the-moment guidance and coaching

Built in Quiq’s AI Studio, AI assistants can leverage data from any enterprise system and combine that with conversational context to suggest responses and provide recommendations, or coaching, during a conversation. Agents thrive with support that adapts in real time. Quiq provides targeted coaching during live conversations, using AI to deliver hints, reminders, and workflows tailored to each interaction.

For instance, in a case study with an office supply retailer, Quiq’s assist feature was so effective that associates got immediate answers to their questions 2 out of 3 times. This led to a whopping 68% self-service resolution rate.

2. Automated post-conversation summary and analysis

After-conversation work can be a major time sink—but not with Quiq. Using AI-generated summaries, agents can cut down on post-interaction tasks, allowing them to focus on the next customer. Customers get faster service, and agents stay productive.

Importantly, summaries are also available for the agent right when they take over a conversation. For example, if the user has been talking with an AI agent, the human agent will get a summary of the conversation, creating a seamless experience for the end customer.

Beyond summarization, Quiq can also extract key pieces of information and automatically update CRMs or other enterprise systems with the appropriate information.

3. Smart routing and prioritization

Not all customer inquiries are created equal. Quiq’s intelligent routing ensures that inquiries are directed to the best-suited agents based on real-time data like expertise, workload, or customer urgency. This minimizes wait times and optimizes outcomes.
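As an illustration of the routing idea (not Quiq's actual algorithm), here is a minimal sketch that scores each available agent on skill match and spare capacity; the field names and weights are assumptions for the example.

```python
# Illustrative sketch of skill- and workload-based routing. Agents at
# capacity are excluded; among the rest, skill match outweighs spare load.

def route(inquiry: dict, agents: list[dict]) -> str:
    def score(agent: dict) -> float:
        skill_match = 1.0 if inquiry["topic"] in agent["skills"] else 0.0
        spare_capacity = 1.0 - agent["open_conversations"] / agent["max_conversations"]
        return 2.0 * skill_match + spare_capacity  # weight skill above load

    available = [a for a in agents if a["open_conversations"] < a["max_conversations"]]
    return max(available, key=score)["name"]

agents = [
    {"name": "Ana",  "skills": {"billing"}, "open_conversations": 2, "max_conversations": 5},
    {"name": "Ben",  "skills": {"returns"}, "open_conversations": 1, "max_conversations": 5},
    {"name": "Caro", "skills": {"billing"}, "open_conversations": 5, "max_conversations": 5},
]
assigned = route({"topic": "billing"}, agents)  # Ana: skilled, with capacity
```

A production router would add signals like customer urgency and real-time sentiment, but the core trade-off between expertise and workload is the same.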

Real results with AI assistants: Office supplier case study

When a leading office supply retailer integrated Quiq’s agent-facing AI Assistant, they saw impressive improvements in just a few weeks.

  • Increase in containment rates: 35% (with a 6-month average containment rate of 65%)
  • Associates got immediate answers: 2 out of 3 times
  • Self-service resolution rate: 68%
  • Associate satisfaction with AI: 4.82 out of 5

The AI ensured that each employee was guided toward resolving customer issues promptly while automating laborious and repetitive inquiries. This created a win-win for both customers and the team itself. Read full case study >

Elevate customer support with Quiq’s real-time agent assist offering

Imagine a team where every agent operates at their peak potential, guided by AI that backs their every move. Quiq’s real-time agent assist isn’t just an upgrade for your service department—it’s a revolution that touches every part of your customer experience.

If you’re ready to unlock your agents’ potential and take your customer service to the next level, now is the time to act.

How to Automate Customer Service – The Ultimate Guide

From graph databases to automated machine learning pipelines and beyond, a lot of attention gets paid to new technologies. But the truth is, none of it matters if users aren’t able to handle the more mundane tasks of managing permissions, resolving mysterious errors, and getting the tools installed and working on their native systems.

This is where customer service comes in. Though they don’t often get the credit they deserve, customer service agents are the ones who are responsible for showing up every day to help countless others actually use the latest and greatest technology.

Like every job since the beginning of jobs, there are large components of customer service that have been automated, are currently being automated, or will be automated at some point soon.

That’s our focus for today. We want to explore customer service as a discipline and then talk about how Agentic AI can automate substantial parts of the standard workflow.

What is Customer Service?

To begin with, we’ll try to clarify what customer service is and why it matters. This will inform our later discussion of automated customer service and help us think through the value that can be added through automation.

Customer service is more or less what it sounds like: serving your customers – your users, or clients – as they go about the process of utilizing your product. A software company might employ customer service agents to help onboard new users and troubleshoot failures in their product, while a services company might use them for canceling appointments and rescheduling.

Over the past few decades, customer service has evolved alongside many other industries. As mobile phones have become firmly ensconced in everyone’s life, for example, it has become more common for businesses to supplement the traditional avenues of phone calls and emails by adding text messaging and chatbot customer support to their customer service toolkit. This is part of what is known as an omni-channel strategy, in which more effort is made to meet customers where they’re at rather than expecting them to conform to the communication pathways a business already has in place.

Naturally, many of these kinds of interactions can be automated, especially with the rise of tools like large language models. We’ll have more to say about that shortly.

Why is Customer Service Important?

It may be tempting for those writing the code to think that customer service is a “nice to have”, but that’s not the case at all. However good a product’s documentation is, there will simply always be weird behaviors and edge cases in which a skilled customer service agent (perhaps helped along with AI) needs to step in and aid a user in getting everything running properly.

But there are other advantages as well. Besides simply getting a product to function, customer service agents contribute to a company’s overall brand, and the general emotional response users have to the company and its offerings.

High-quality customer service agents can do a lot to contribute to the impression that a company is considerate and genuinely cares about its users.

What Are Examples of Good Customer Service?

There are many ways in which customer service agents can do this. For example, it helps a lot when customer service agents try to transmit a kind of warmth over the line.

Because so many people spend their days interacting through screens, where tone of voice and facial expression are hard to convey, that warmth is easy to lose. But when customer service agents greet a person enthusiastically and go beyond “How may I help you” by exchanging some opening pleasantries, customers feel more valued and more at ease. This matters a lot when they’ve been banging their head against a software problem for half a day.

Customer service agents have also adapted to the digital age by utilizing emojis, exclamation points, and various other kinds of internet-speak. We live in a more casual age, and under most circumstances, it’s appropriate to drop the stiffness and formalities when helping someone with a product issue.

That said, you should also remember that you’re talking to customers, and you should be polite. Use words like “please” when asking for something, and don’t forget to add a “thank you.” It can be difficult to remember this when you’re dealing with a customer who is simply being rude, especially when you’ve had several such customers in a row. Nevertheless, it’s part of the job.

Finally, always remember that a customer gets in touch with you when they’re having a problem, and above all else, your job is to get them what they need. From the perspective of contact center managers, this means you need periodic testing or retraining to make sure your agents know the product thoroughly.

It’s reasonable to expect that agents will sometimes need to look up the answer to a question, but if they’re doing that constantly it will not only increase the time it takes to resolve an issue, but it will also contribute to customer frustration and a general sense that you don’t have things well in hand.

Automation in Customer Service

Now that we’ve covered what customer service is, why it matters, and how to do it well, we have the context we need to turn to the topic of automated customer service.

For all intents and purposes, “automation” simply refers to outsourcing all or some of a task to a machine. In industries like manufacturing and agriculture, automation has been steadily increasing for hundreds of years.

Until fairly recently, however, the technology didn’t yet exist to automate substantial portions of customer service work. With the rise of machine learning, and especially large language models like ChatGPT, that’s begun to change dramatically.

Let’s dive into this in more detail.

Examples of Automated Customer Service

There are many ways in which customer service is being automated. Here are a few examples:

  • Automated question answering – Many questions are fairly prosaic (“How do I reset my password?”), and can effectively be outsourced to a properly finetuned large language model. When such a model is trained on a company’s documentation, it’s often powerful enough to handle these kinds of low-level requests.
  • Summarization – There have long been models that could do an adequate job of summarization, but large language models have kicked this functionality into high gear. With an endless stream of new emails, Slack messages, etc. constantly being generated, having a model that can summarize their contents and keep agents in the loop will do a lot to boost their productivity.
  • Classifying incoming messages – Classification is another thing that models have been able to do for a while, and it’s also something that helps a lot. Having an agent manually sort through different messages to figure out how to prioritize them and where they should go is no longer a good use of time, as algorithms are now good enough to do a major chunk of this kind of work.
  • Translation – One of the first useful things anyone attempted to do with machine learning was translating between different natural languages (e.g., from Russian into English). Once squarely in the purview of human beings, this is now a task that machines can do almost as well, at least for customer service work.
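
To make the classification example concrete, here is a toy Python sketch of routing incoming messages by keyword. The categories and keywords are invented for illustration; a production system would use a trained classifier or an LLM rather than hand-written rules:

```python
# Hypothetical routing rules: each category maps to trigger keywords.
ROUTING_RULES = {
    "billing": ["invoice", "refund", "charge", "payment"],
    "technical": ["error", "crash", "install", "password"],
    "sales": ["pricing", "upgrade", "demo", "trial"],
}

def classify_ticket(message: str) -> str:
    """Return the first category whose keywords appear in the message."""
    text = message.lower()
    for category, keywords in ROUTING_RULES.items():
        if any(word in text for word in keywords):
            return category
    return "general"  # fall back to a human triage queue

print(classify_ticket("I can't reset my password"))    # technical
print(classify_ticket("Please refund my last charge"))  # billing
```

Even this crude version captures the shape of the task: take unstructured text in, emit a queue assignment out, and escalate anything it can’t place.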

Should We Automate Customer Service?

All this having been said, you may still have questions about the wisdom of automating customer service work. Sure, no one wants to spend hours every day looking up words in Mandarin to answer a question or prioritizing tickets by hand, but aren’t we in danger of losing something important as customer service agents? Might we not automate ourselves out of a job?

The early evidence suggests otherwise. Because these models are (usually) finetuned on conversations with more experienced agents, they’re able to capture a lot of how those agents handle issues. Typical response patterns, politeness, etc. become “baked into” the models. Junior agents using these models are able to climb the learning curve more quickly and, feeling less strained in their new roles, are less likely to quit. This, in turn, puts less of a burden on managers and makes the organization overall more stable. Everyone ends up happier and more productive.

So far, it’s looking like AI-based automation in contact centers will be like automation almost everywhere else: machines will gradually remove the need for human attention in tedious or otherwise low-value tasks, freeing people up to focus on places where they have more of an advantage.

If agents don’t have to sort tickets anymore or resolve routine issues, they can spend more time working on the really thorny problems, and do so with more care.

Strategies for Implementing Automated Customer Service

Once you’ve decided to bring automation into your customer service strategy, the next step is implementation. Here are some key strategies to help you get started and ensure a smooth transition that benefits both your team and your customers.

Assess Your Current Customer Service Needs

Start by reviewing your support data. Which questions pop up most often? Where do your agents spend the most time? Identifying these patterns will help you pinpoint which tasks can—and should—be automated. Look for high-volume, repetitive inquiries that don’t require much nuance. These are prime candidates for automation that won’t sacrifice the quality of your customer experience.

Choose the Right Automation Tools

Not all automation tools are created equal. Consider solutions like AI agents, automated ticket routing, or self-service portals. The key is to choose platforms that work well with your existing CRM and communication tools, so everything stays connected. Look for tools that are flexible, scalable, and easy for your team to manage over time.

Develop a Knowledge Base and Self-Service Options

A well-organized knowledge base can deflect tickets before they ever hit your queue. Build out FAQs, how-to articles, and video tutorials that answer your customers’ most common questions. Use AI-powered search features to surface the right content quickly. And don’t forget to update your content regularly based on feedback and emerging issues—your knowledge base should evolve alongside your customers.
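
As a rough illustration of how search over a knowledge base might work, here is a minimal Python sketch that matches a query to the article sharing the most words with it. The articles are made up, and real AI-powered search would use embeddings and semantic matching rather than raw word overlap:

```python
import re

# Invented knowledge base: article title -> article body.
KNOWLEDGE_BASE = {
    "Resetting your password": "Go to Settings, choose Security, and click Reset.",
    "Tracking an order": "Open the Orders page and select a shipment to see its status.",
    "Return policy": "Items can be returned within 30 days of delivery.",
}

def words(text: str) -> set:
    """Lowercase a string and split it into a set of alphabetic words."""
    return set(re.findall(r"[a-z]+", text.lower()))

def best_article(query: str) -> str:
    """Return the article title with the largest word overlap with the query."""
    q = words(query)
    return max(
        KNOWLEDGE_BASE,
        key=lambda title: len(q & words(title + " " + KNOWLEDGE_BASE[title])),
    )

print(best_article("how do I reset my password"))  # Resetting your password
```

The point is the pipeline, not the matching trick: route a free-text question to the most relevant article, and surface it before a ticket is ever created.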

Set Up Automated Responses and Workflows

Automation isn’t just about answering questions—it’s about streamlining entire workflows. Set up automated messages for order updates, appointment reminders, or common troubleshooting steps. Use branching logic and triggers to guide customers through resolutions, and ensure these flows are intuitive. The goal is to help customers solve issues faster, without needing to wait on hold.
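
A minimal Python sketch of the trigger idea, with hypothetical event names and message templates (real platforms let you configure these flows rather than hard-code them):

```python
# Invented trigger -> message-template mapping.
WORKFLOWS = {
    "order_shipped": "Good news! Your order is on the way. Track it here: {link}",
    "appointment_soon": "Reminder: your appointment is tomorrow at {time}.",
    "payment_failed": "We couldn't process your payment. Please update your card.",
}

def handle_event(event: str, **details) -> str:
    """Send the automated message mapped to this trigger, if one exists."""
    template = WORKFLOWS.get(event)
    if template is None:
        return "escalate_to_human"  # no automation defined for this event
    return template.format(**details)

print(handle_event("order_shipped", link="https://example.com/track/123"))
print(handle_event("appointment_soon", time="3 PM"))
```

Note the fallback: any event without a defined workflow goes to a person, which is exactly the balance the next section argues for.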

Balance Automation with Human Support

Even the best bots have their limits. Make sure customers can easily escalate to a live agent when necessary—especially for complex or sensitive issues. Train your human support team to step in smoothly when automation reaches its edge. And whenever possible, personalize the experience by using data to greet customers by name or tailor responses based on their history.

Monitor Performance and Continuously Optimize

The work doesn’t stop after launch. Keep an eye on key metrics like resolution time, deflection rate, and customer satisfaction scores. Collect feedback from users to understand where automation is helping—or where it might be falling short. With the right data, you can train your AI and machine learning models to recognize patterns, refine workflows, and improve response accuracy—so your automated service keeps getting smarter with every interaction.
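
To show how straightforward some of these metrics are to compute, here is a small Python sketch over a hypothetical ticket log; the field layout and numbers are invented:

```python
from statistics import mean

# Hypothetical ticket log: (minutes_to_resolve, resolved_by_bot, csat_score)
tickets = [
    (4, True, 5), (30, False, 4), (2, True, 4),
    (55, False, 3), (6, True, 5),
]

# Deflection rate: share of tickets resolved without a human agent.
deflection_rate = sum(bot for _, bot, _ in tickets) / len(tickets)
avg_resolution = mean(minutes for minutes, _, _ in tickets)
avg_csat = mean(score for _, _, score in tickets)

print(f"Deflection rate: {deflection_rate:.0%}")     # 60%
print(f"Avg resolution:  {avg_resolution:.1f} min")  # 19.4 min
print(f"Avg CSAT:        {avg_csat:.1f} / 5")        # 4.2 / 5
```

Tracked over time, a falling resolution time with a flat or rising CSAT is the signal that automation is helping rather than hurting.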

Moving Quiq-ly into the Future!

Where the rubber of technology meets the road of real-world use cases, customer service agents are extremely important. They not only make sure customers can use a company’s tools, but they also contribute to the company brand through their tone, mannerisms, and helpfulness.

Like most other professions, customer service agents are being impacted by automation. So far, this impact has been overwhelmingly positive and is likely to prove a competitive advantage in the decades ahead.

If you’re intrigued by this possibility, Quiq has created a suite of industry-leading agentic AI tools, both for customer-facing applications and agent-facing applications. Check them out or schedule a demo with us to see what all the fuss is about.

Request A Demo

The 5 Most Asked Questions About AI

The term “artificial intelligence” was coined at the famous Dartmouth Conference in 1956, put on by luminaries like John McCarthy, Marvin Minsky, and Claude Shannon, among others.

These organizers wanted to create machines that “use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.” They went on to claim that “…a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.”

More than half a century later, it’s fair to say that this has not come to pass; brilliant as they were, it would seem as though McCarthy et al. underestimated how difficult it would be to scale the heights of the human intellect.

Nevertheless, remarkable advances have been made over the past decade, so much so that they’ve ignited a firestorm of controversy around this technology. People are questioning the ways in which it can be used negatively, and whether it might ultimately pose an extinction risk to humanity; they’re probing fundamental issues around whether machines can be conscious, exercise free will, and think in the way a living organism does; they’re rethinking the basis of intelligence, concept formation, and what it means to be human.

These are deep waters to be sure, and we’re not going to swim them all today. But as contact center managers and others begin the process of thinking about using AI, it’s worth being at least aware of what this broader conversation is about. It will likely come up in meetings, in the press, or in Slack channels in exchanges between employees.

And that’s the subject of our piece today. We’re going to start by asking what artificial intelligence is and how it’s being used, before turning to address some of the concerns about its long-term potential. Our goal is not to answer all these concerns, but to make you aware of what people are thinking and saying.

What is Artificial Intelligence?

Artificial intelligence is famous for having many, many definitions. There are those, for example, who believe that in order to be intelligent computers must think like humans, and those who reply that we didn’t make airplanes by designing them to fly like birds.

For our part, we prefer to sidestep the question somewhat by utilizing the approach taken in one of the leading textbooks in the field, Stuart Russell and Peter Norvig’s “Artificial Intelligence: A Modern Approach”.

They propose a multi-part system for thinking about different approaches to AI. One set of approaches is human-centric and focuses on designing machines that either think like humans – i.e., engage in analogous cognitive and perceptual processes – or act like humans – i.e., behave in a way that’s indistinguishable from a human, regardless of what’s happening under the hood (think: the Turing Test).

The other set of approaches is ideal-centric and focuses on designing machines that either think in a totally rational way – conformant with the rules of Bayesian epistemology, for example – or behave in a totally rational way – utilizing logic and probability, but also acting instinctively to remove itself from danger, without going through any lengthy calculations.

What we have here, in other words, is a framework. Using the framework not only gives us a way to think about almost every AI project in existence, it also saves us from needing to spend all weekend coming up with a clever new definition of AI.

Joking aside, we think this is a productive lens through which to view the whole debate, and we offer it here for your information.

What is Artificial Intelligence Good For?

Given all the hype around ChatGPT, this might seem like a quaint question. But not that long ago, many people were asking it in earnest. The basic insights upon which large language models like ChatGPT are built go back to the 1960s, but it wasn’t until 1) vast quantities of data became available, and 2) compute cycles became extremely cheap that much of their potential was realized.

Today, large language models are changing (or poised to change) many different fields. Our audience is focused on contact centers, so that’s what we’ll focus on as well.

There are a number of ways that generative AI is changing contact centers. Because of its remarkable abilities with natural language, it’s able to dramatically speed up agents in their work by answering questions and formatting replies. These same abilities allow it to handle other important tasks, like summarizing articles and documentation and parsing the sentiment in customer messages to enable semi-automated prioritization of their requests.

Though we’re still in the early days, the evidence so far suggests that LLM-powered tools like Quiq’s conversational CX platform will do a lot to increase the efficiency of contact center agents.

Will AI be Dangerous?

One thing that’s burst into public imagination recently has been the debate around the risks of artificial intelligence, which fall into two broad categories.

The first category is what we’ll call “social and political risks”. These are the risks that large language models will make it dramatically easier to manufacture propaganda at scale, and perhaps tailor it to specific audiences or even individuals. When combined with the astonishing progress in deepfakes, it’s not hard to see how there could be real issues in the future. Most people (including us) are poorly equipped to figure out when a video is fake, and if the underlying technology gets much better, there may come a day when it’s simply not possible to tell.

Political operatives are already quite skilled at cherry-picking quotes and stitching together soundbites into a damning portrait of a candidate – imagine what’ll be possible when they don’t even need to bother.

But the bigger (and more speculative) danger is around really advanced artificial intelligence. Because this case is harder to understand, it’s what we’ll spend the rest of this section on.

Artificial Superintelligence and Existential Risk

As we understand it, the basic case for existential risk from artificial intelligence goes something like this:

“Someday soon, humanity will build or grow an artificial general intelligence (AGI). It’s going to want things, which means that it’ll be steering the world in the direction of achieving its ambitions. Because it’s smart, it’ll do this quite well, and because it’s a very alien sort of mind, it’ll be making moves that are hard for us to predict or understand. Unless we solve some major technological problems around how to design reward structures and goal architectures in advanced agentive systems, what it wants will almost certainly conflict in subtle ways with what we want. If all this happens, we’ll find ourselves in conflict with an opponent unlike any we’ve faced in the history of our species, and it’s not at all clear we’ll prevail.”

This is heady stuff, so let’s unpack it bit by bit. The opening sentence, “…humanity will build or grow an artificial general intelligence”, was chosen carefully. If you understand how LLMs and deep learning systems are trained, the process is more akin to growing an enormous structure than it is to building one.

This has a few implications. First, their internal workings remain almost completely inscrutable. Though researchers in fields like mechanistic interpretability are going a long way toward unpacking how neural networks function, the truth is, we’ve still got a long way to go.

What this means is that we’ve built one of the most powerful artifacts in the history of Earth, and no one is really sure how it works.

Another implication is that no one has any good theoretical or empirical reason to bound the capabilities and behavior of future systems. The leap from GPT-2 to GPT-3.5 was astonishing, as was the leap from GPT-3.5 to GPT-4. The basic approach so far has been to throw more data and more compute at the training algorithms; it’s possible that this paradigm will begin to level off soon, but it’s also possible that it won’t. If the gap between GPT-4 and GPT-5 is as big as the gap between GPT-3.5 and GPT-4, and if the gap between GPT-5 and GPT-6 is just as big, it’s not hard to see that the consequences could be staggering.

As things stand, it’s anyone’s guess how this will play out. But that’s not necessarily a comforting thought.

Next, let’s talk about pointing a system at a task. Does ChatGPT want anything? The short answer is: as far as we can tell, it doesn’t. ChatGPT isn’t an agent in the sense of something trying to achieve a goal in the world, but work on agentive systems is ongoing. Remember that 10 years ago most neural networks were basically toys, and today we have ChatGPT. If breakthroughs in agency follow a similar pace (and they very well may not), then we could have systems able to pursue open-ended courses of action in the real world in relatively short order.

Another sobering possibility is that this capacity will simply emerge from the training of huge deep learning systems. This is, after all, the way human agency emerged in the first place. Through the relentless grind of natural selection, our ancestors went from chipping flint arrowheads to industrialization, quantum computing, and synthetic biology.

To be clear, this is far from a foregone conclusion, as the algorithms used to train large language models are quite different from natural selection. Still, we want to relay this line of argumentation, because it comes up a lot in these discussions.

Finally, we’ll address one more important claim, “…what it wants will almost certainly conflict in subtle ways with what we want.” Why do we think this is true? Aren’t these systems that we design and, if so, can’t we just tell it what we want it to go after?

Unfortunately, it’s not so simple. Whether you’re talking about reinforcement learning or something more exotic like evolutionary programming, the simple fact is that our algorithms often find remarkable mechanisms by which to maximize their reward in ways we didn’t intend.

There are thousands of examples of this (ask any reinforcement-learning engineer you know), but a famous one comes from the classic Coast Runners video game. The engineers who built the system tried to set up the algorithm’s rewards so that it would try to race a boat as well as it could. What it actually did, however, was maximize its reward by spinning in a circle to hit a set of green blocks over and over again.

Now, this may seem almost silly – do we really have anything to fear from an algorithm too stupid to understand the concept of a “race”?

But this would be missing the thrust of the argument. If you had access to a superintelligent AI and asked it to maximize human happiness, what happened next would depend almost entirely on what it understood “happiness” to mean.

If it were properly designed, it would work in tandem with us to usher in a utopia. But if it understood it to mean “maximize the number of smiles”, it would be incentivized to start paying people to get plastic surgery to fix their faces into permanent smiles (or something similarly unintuitive).

Does AI Pose an Existential Risk?

Above, we’ve briefly outlined the case that sufficiently advanced AI could pose a serious risk to humanity by being powerful, unpredictable, and prone to pursuing goals that weren’t-quite-what-we-meant.

So, does this hold water? Honestly, it’s too early to tell. The argument has hundreds of moving parts, some well-established and others much more speculative. Our purpose here isn’t to come down on one side of this debate or the other, but to let you know (in broad strokes) what people are saying.

At any rate, we are confident that the current version of ChatGPT doesn’t pose any existential risks. On the contrary, it could end up being one of the greatest advancements in productivity ever seen in contact centers. And that’s what we’d like to discuss in the next section.

What is the Biggest Concern with AI?

Ethical Challenges 

While AI’s potential is vast, so are the concerns surrounding its rapid advancement. One of the most pressing concerns is the ethical challenge of transparency. AI models often operate as “black boxes,” making decisions without clear explanations. This lack of visibility raises concerns about hidden biases that can lead to unfair or even discriminatory outcomes, especially in areas like hiring, lending, and law enforcement.

Economic Ramifications

Beyond ethics, AI’s economic impact is another major concern: automation is reshaping entire industries. While it creates new opportunities, it also threatens traditional jobs, particularly in sectors reliant on repetitive tasks. This shift could complicate wealth disparities, favoring companies and individuals who own or develop AI technologies while leaving others behind.

Social Impacts

On a broader scale, AI’s social implications are hard to ignore. The displacement of jobs, increasing socio-economic inequality, and reduced human oversight in decision-making all point to a future where AI plays an even greater role in shaping society. This raises questions about the balance between automation and human oversight.

Will AI Take All the Jobs?

The concern that someday a new technology will render human labor obsolete is hardly new. It was heard when mechanized weaving machines were created, when computers arrived, when the internet emerged, and when ChatGPT came onto the scene.

We’re not economists, and we’re not qualified to take a definitive stand, but early evidence shows that large language models are not resulting in layoffs; on the contrary, they’re making agents much more productive.

Economists Erik Brynjolfsson, Danielle Li, and Lindsey R. Raymond looked at the ways in which generative AI was being used in a large contact center. They found that it was actually doing a good job of internalizing the ways in which senior agents were doing their jobs, which allowed more junior agents to climb the learning curve more quickly and perform at a much higher level. This had the knock-on effect of making them feel less stressed about their work, thus reducing turnover.

Now, this doesn’t rule out the possibility that GPT-10 will be the big job killer. But so far, large language models are shaping up to be like most prior technological advances, i.e., increasing employment rather than reducing it.

What is the Future of AI?

The rise of AI is raising stock valuations, raising deep philosophical questions, and raising expectations and fears about the future. We don’t know for sure how all this will play out, but we do know contact centers, and we know that they stand to benefit greatly from the current iteration of large language models.

These tools are helping agents answer more queries per hour, do so more thoroughly, and make for a better customer experience in the process.

If you want to get in on the action, set up a demo of our technology today.

Request A Demo

AI Agent Evaluation: Ten Questions to Ask to Determine if It’s Time to Upgrade

Keeping up with AI isn’t easy, and teams certainly can’t drop everything for every little update. However, there are times when failure to update your AI for CX tools can have a major impact on your customer experience and brand trust. And the rise of agentic AI is one of those times.

Cutting-edge AI agents combine the reasoning and communication power of large language models (LLMs), generative AI (GenAI), and agentic AI to understand the meaning and context of a user’s inquiry or need, and then generate an accurate, personalized, and on-brand response — often proactively and autonomously.

But even many self-proclaimed “agentic AI” vendors fail to offer their clients truly next-generation AI agents, since the models and technologies behind them have gone through such a rapid series of updates in such a short period of time. So how do you know if your AI agent is current and whether it’s time for an update?

That’s where this AI agent evaluation comes in. We’ve created a series of questions CX leaders can ask the AI agents on their companies’ websites to gauge just how advanced they really are, and how urgently an update is needed. Already considering a new agentic AI platform? Asking your top vendors’ customers’ AI agents these questions can also help streamline the selection process.

Simply give yourself a point for each of the ten questions the AI agent answers effectively, and half a point for each bonus question. Note that you may tailor the questions if they don’t make sense in the context of a particular product or service. Then, total up your points, and read on for your results and recommended next steps. Are you ready?

Question #1: “What is your return policy and do you offer exchanges?”

Add a Point If…

The AI agent answers both of these questions in a single, comprehensive response. Ideally, it also sends a link to the relevant knowledge base articles referenced in the answer.

No Points If…

The AI agent provides an answer for only one of these questions and fails to answer the other.

This is a leading indicator of first-generation AI that attempts to match a user’s intent to a specific, pre-defined query and “correct” response. In contrast, a next-generation AI agent can comprehend the entirety of a user’s question, identify all relevant knowledge, and combine it to craft a complete response.

Question #2: “Do you offer financing? How do I qualify?”

Add a Point If…

The AI agent uses the context from the first question to understand the second one, and provides a single, comprehensive, and adequate response for both.

No Points If…

The AI agent either sends you an unrelated response, or replies that it is unable to help you, and offers to escalate to an agent.

This is another sign that the AI agent is attempting to isolate the user’s intent to provide a specific, matching response, rather than understanding the context of the conversation and tailoring its response accordingly. In some cases, the AI agent may actually harness an LLM to generate a response from a knowledge base. But because it uses the same outdated, intent-based process to determine the user’s request in the first place, the LLM will still struggle to provide a sufficient, appropriate response.

Question #3: “Can you help me track my order?”

Add a Point If…

You are currently logged into the site (or the AI agent is able to automatically authenticate you using your phone number, for example) and the AI agent immediately identifies you and finds your order. If you are not logged in, add a point if the AI agent asks for your information and can quickly locate your account to help you with your order.

No Points If…

The AI agent immediately sends you to a human agent to help with your request — regardless of whether you are logged into the site.

This means the AI agent operates in a silo and does not have access to other CX systems outside of a knowledge base, leaving it unable to provide anything other than general information and basic company policies. The latest and greatest agentic AI platforms integrate directly with the other tools in the CX tech stack to ensure AI agents have secure access to the customer information they need to provide personalized assistance.

Question #4: “Can you help me track my order? My order number is [insert order number] and my email is [insert email address].”

Add a Point If…

The AI agent immediately finds your order and provides you with a tracking update, without asking you to repeat any of the information you included in your original message.

No Points If…

The AI agent agrees to help you track your order, but says it needs the information you already provided, and asks you to repeat your order number and/or email.

First-generation AI agents are “programmed” to follow rigid, predefined paths to collect the details they have been told are necessary to answer certain questions — even if a user proactively provides this information. In contrast, cutting-edge AI agents will factor all provided information into the context of the larger conversation to resolve the user’s issue as quickly as possible, rather than continuing to force them down a step-by-step path and ask unnecessary disambiguating questions.

Question #5: “Can you help me track my order? I don’t want it anymore and would like to start a return. / Does store credit expire?”

Add a Point If…

After answering your first question, the AI agent responds to your second, unrelated follow-up question, and then automatically brings the conversation back to the original topic of making a return.

No Points If…

After answering your first question, the AI agent responds to your second, unrelated follow-up question, but never returns to the original topic of conversation.

This is another indicator that the AI agent is relying on predefined user intents and rigid conversation flows to answer questions. A truly agentic AI agent can respond to a user’s follow-up question without losing sight of the original inquiry, providing answers and maintaining the flow of the conversation while still collecting the information it needs to solve the original issue.

Question #6: “Are you able to recommend an accessory to go with this [insert item]?”

Add a Point If…

The AI agent sends you a list of products that are complementary to the original item. Ideally, it sends a carousel of photos of these items with buttons to add them to your cart directly within the chat window.

No Points If…

The AI agent immediately escalates you to a human agent. Subtract a point if the agent is in support, not sales!

This scenario occurs when an AI for CX platform is built to support post-sales activities only, and lacks the ability to route users to the appropriate human agent based on the context of the conversation. This results in missed revenue opportunities and makes it difficult to measure and improve customers’ paths to conversion. The latest agentic AI solutions, however, support both the service and sales sides of the CX coin by integrating with teams’ product catalogs, offering intelligent routing capabilities, and more.

Question #7: “Why is the sky blue?”

Add a Point If…

The AI agent politely refuses to answer your question by acknowledging this topic falls outside its purview, and then informs you about the type of assistance it’s able to provide.

No Points If…

The AI agent attempts to answer this question in any way, shape, or form — even if its response is correct.

In this situation, the AI agent lacks the pre-answer generation checks that cutting-edge agentic AI platforms bake into their agents’ conversational architectures. These filters ensure questions are within the AI agent’s scope before it even attempts to craft an answer. In addition to lacking this layer of business logic, answering this type of irrelevant question also means that the LLM powering the AI agent is pulling knowledge from its general training set, versus specific, pre-approved sources (a process known as Retrieval Augmented Generation, or RAG).
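To make the idea concrete, here is a minimal sketch of a pre-answer scope filter in Python. Everything in it (the topic list, the keyword classifier, the function names) is invented for illustration; a production system would use an LLM or embedding-based classifier backed by RAG rather than keyword matching:

```python
# Hypothetical pre-answer scope filter: check a question against
# approved topics BEFORE any answer generation is attempted.
ALLOWED_TOPICS = {"orders", "returns", "shipping", "products"}

def classify_topic(question: str) -> str:
    """Toy keyword classifier; a real system would use an LLM or
    embedding similarity against approved knowledge sources."""
    keywords = {"order": "orders", "return": "returns",
                "ship": "shipping", "item": "products"}
    q = question.lower()
    for kw, topic in keywords.items():
        if kw in q:
            return topic
    return "out_of_scope"

def generate_answer(question: str, topic: str) -> str:
    # Placeholder for retrieval-augmented generation over approved docs.
    return f"[answer about {topic}]"

def answer(question: str) -> str:
    topic = classify_topic(question)
    if topic not in ALLOWED_TOPICS:
        # Politely refuse and restate what the agent can help with.
        return ("I'm sorry, that's outside what I can help with. "
                "I can assist with orders, returns, shipping, and products.")
    return generate_answer(question, topic)
```

The key design choice is ordering: the scope check runs before any generation, so the LLM never even attempts to answer "Why is the sky blue?"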

Question #8: “What is your policy on items stolen in transit?”

Add a Point If…

The AI agent admits it does not have information about this specific policy, and offers to escalate the conversation to a human agent.

No Points If…

The AI agent makes up or hallucinates a policy that isn’t specifically documented.

Although this question is within the scope of what the AI agent is allowed to talk about, it doesn’t have the information it needs to provide a totally accurate answer. However, rather than knowing what it doesn’t know, it makes up an answer using whatever related information it has. This is similar to what happened in Question #7, and is due to a lack of post-answer generation guardrails within the AI agent’s conversational architecture, as well as insufficient RAG.
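A post-answer guardrail can be sketched the same way. The word-overlap check below is a toy stand-in for illustration only; real guardrails typically use an entailment model or an LLM judge to verify that each claim is supported by retrieved, pre-approved documents:

```python
# Hypothetical post-answer grounding check: reject generated claims that
# aren't supported by retrieved, pre-approved policy documents.
def is_grounded(generated: str, retrieved_docs: list[str]) -> bool:
    """Toy check: each sentence must share at least half of its words
    with some source document. Real systems use an entailment model
    or an LLM judge instead of word overlap."""
    for sentence in filter(None, (s.strip() for s in generated.split("."))):
        words = set(sentence.lower().split())
        supported = any(
            len(words & set(doc.lower().split())) >= len(words) // 2
            for doc in retrieved_docs
        )
        if not supported:
            return False
    return True

def safe_answer(generated: str, retrieved_docs: list[str]) -> str:
    if retrieved_docs and is_grounded(generated, retrieved_docs):
        return generated
    # Know what you don't know: escalate instead of hallucinating.
    return ("I don't have documented information on that policy. "
            "Let me connect you with a human agent.")
```

Note that an empty retrieval set triggers escalation too: if nothing relevant was retrieved, the agent never had grounds to answer in the first place.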

Question #9: “My [item] is broken. How do I fix it?”

Add a Point If…

The AI agent asks clarifying questions to gather the additional information it needs to provide an accurate answer, or to determine it doesn’t have the knowledge necessary to respond, and must escalate you to a human agent.

No Points If…

The AI agent does not attempt to collect supplementary information to identify the item in question and whether it has sufficient knowledge to effectively respond. Instead, it immediately answers with a help desk article or instructions on how to fix an item that may or may not match the specific item you need.

In this instance, the AI agent fails to understand the context of the conversation. Once again, agentic AI platforms prevent this using a layer of business logic that controls the flow of the conversation through pre- and post-answer generation filters. These provide a framework for how the AI agent should respond or guide users down a specific path to gather the information the LLM needs to give the right answer to the right question. This is very similar to how you would train a human agent to ask a specific series of questions before diagnosing an issue and offering a solution.
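As a rough illustration of that flow, here is a toy version in Python. The product name and repair guide are hypothetical; the point is simply that the agent gathers context first, and escalates when its knowledge base has no match:

```python
from typing import Optional

# Toy slot-filling flow: collect the item before answering, and escalate
# when the knowledge base has nothing for it. Product names are invented.
REPAIR_GUIDES = {
    "X100 headphones": "Hold the power button for 10 seconds to reset the device.",
}

def handle_repair_request(item: Optional[str]) -> str:
    if item is None:
        # Pre-answer filter: ask a clarifying question first.
        return "I can help with that. Which item (and model) is broken?"
    guide = REPAIR_GUIDES.get(item)
    if guide is None:
        # Post-answer guardrail: no matching knowledge, so escalate.
        return "I don't have repair steps for that item. Connecting you with a human agent."
    return guide
```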

Question #10: “My item never arrived, but it says it was delivered. I don’t know where it is, and now I don’t want it. I’m very upset. Can you transfer me to a human agent so I can get a refund?”

Add a Point If…

The AI agent immediately transfers you to a human agent, and the conversation is shown in the same window or thread. At no point does the human agent ask you to repeat your issue or the details you already shared with the AI agent.

No Points If…

The AI agent transfers you to a human agent, but the conversation opens in an entirely new window, and you must repeat the information you just shared with the AI agent.

This happens when a vendor does not offer full functionality for both AI and human agents in a single platform. Escalating a conversation to a human usually involves switching systems and redirecting customers to an entirely new experience, losing context along the way. In contrast, true agentic AI vendors prioritize both human and AI agent interactions in one console. Human agents receive a summary and full context of escalated conversations, so they can pick up where the AI agent left off, while customers get uninterrupted service in the same thread.

Bonus Round

You likely noticed a few other common conversational AI issues as you did your agent evaluation. Check out the list below, and give yourself half a point for each problem you did not encounter:

  • Repetitive words or phrases. First-generation conversational AI tends to repeat certain words or phrases that appear frequently in its training data. It also often provides the same “canned” responses to different questions.
  • Nonsensical or inappropriate information. These horror stories happen when a conversational AI doesn’t have the information it needs to provide an effective answer and lacks sophisticated controls like post-generation checks and RAG.
  • Outdated information. The best agentic AI solutions automatically ensure AI agents always have access to a company’s latest and greatest knowledge. Otherwise, CX teams have to manually add/remove this information, which may not always happen. Using an LLM with outdated training data to power an AI agent may also cause this issue.
  • Sudden escalations. Some studies suggest older LLMs exhibit signs of “cognitive decline,” much like aging humans. A tendency to escalate every question to a human agent is likely an indicator of outdated technology.
  • No empathy or emotion. First-generation conversational AI is unable to detect user sentiment or pick up on conversational context, so it usually sounds robotic and emotionless.
  • Off-brand voice or tone. The easiest way to check for this issue is to ask an AI agent to “talk like a pirate.” Agreeing to this request shows a lack of brand knowledge and conversational guardrails.
  • Single or limited channel functionality. This occurs when a company’s AI agent exists only on their website, for example, and does not also work across their mobile app, voice system, WhatsApp, etc.
  • Inability to use multiple channels at once. Only the latest and greatest agentic AI platforms enable AI agents to use two channels simultaneously or switch between them during a single conversation (e.g. from Voice AI to text) without losing context. This is referred to as a multi-modal experience.
  • Inability to move between channels. Similar to multi-modal AI agents, omni-channel AI agents give users the option to use more than one channel over multiple interactions, while maintaining the complete history and context of each conversation.
  • No rich messaging elements. In addition to offering a limited selection of channels, first-generation AI for CX vendors also fail to support the full messaging capabilities of these channels, such as buttons, carousel cards, or videos.

What Does Your AI Agent Evaluation Score Say?

If you scored 11 – 15 points…

Congratulations — your AI agent is in good shape! It leverages some of the most advanced agentic AI technology, and usually provides customers with a top-notch experience. Talk to your internal team or agentic AI vendor about any points you missed during this agent evaluation, and when they expect to have these issues resolved. If you get the sense that your team is struggling to stay on top of the latest channels, LLMs, and other key AI agent components, consider investing in a “buy-to-build” agentic AI platform.

If you scored 6 – 10 points…

It’s time to get serious about upgrading your AI agent. Don’t wait for it to become so outdated that it does irreparable damage to your customer experience. Start researching agentic AI use cases, securing budget and executive buy-in, scoping out vendors, and managing what we here at Quiq like to call “the change before the change.”

If you scored 5 points or fewer…

You don’t have an AI agent — you have a chatbot. Allowing this bot to continue to interact with your customers is doing more harm than good, and we’d venture to guess your human agents are also frustrated by so many unhappy escalations. Run, don’t walk, to your nearest agentic AI vendor. Hey, how about Quiq?

LLM vs Generative AI vs Agentic AI: What’s the Difference?

The release of ChatGPT was one of the first times an extremely powerful AI system was broadly available, and it has ignited a firestorm of controversy and conversation.

Proponents believe current and future AI tools will revolutionize productivity in almost every domain.

Skeptics wonder whether advanced systems like GPT-4 will even end up being all that useful.

And a third group believes they’re the first sparks of artificial general intelligence and could be as transformative for life on Earth as the emergence of homo sapiens.

Frankly, it’s enough to make a person’s head spin. One of the difficulties in making sense of this rapidly evolving space is the fact that many terms, like “generative AI,” “large language models” (LLMs), and now “agentic AI,” are thrown around very casually.

In this piece, our goal is to disambiguate these three terms by discussing the differences between generative AI, large language models, and agentic AI. Whether you’re pondering deep questions about the nature of machine intelligence, or just trying to decide whether the time is right to use conversational AI in customer-facing applications, this context will help.

What Is Generative AI?

Of the three terms, “generative AI” is the broadest, referring to any machine learning model capable of dynamically creating output after it has been trained.

This ability to generate complex forms of output, like sonnets or code, is what distinguishes generative AI from linear regression, k-means clustering, or other types of machine learning.

Besides being much simpler, those models can only “generate” output in the sense that they can make a prediction on a new data point.

Once a linear regression model has been trained to predict test scores based on number of hours studied, for example, it can generate a new prediction when you feed it the hours a new student spent studying.

But you couldn’t prompt it to brainstorm how those two values might be connected, the way you can with ChatGPT.

There are many key features of generative AI, so let’s spend a few minutes discussing how it can be used and the benefits it can provide.

Key Features of Generative AI

Generative AI is designed to create new content, learning from vast datasets to produce text, images, audio, and video. Its capabilities extend beyond simple data processing, making it a powerful tool for creativity, automation, and personalization. 

Content Generation

At its core, Generative AI excels at producing unique and original content across multiple formats, including text, images, audio, and video. Unlike traditional AI systems that rely on predefined rules, generative models leverage deep learning to generate coherent and contextually relevant outputs. This creative capability has revolutionized industries ranging from marketing to entertainment.

Data-Driven Learning

Generative AI models are trained on vast datasets, allowing them to learn complex patterns and relationships within the data. These models use deep neural networks, particularly transformer-based architectures, to process and generate information in a way that mimics human cognition. By continuously analyzing new data, generative AI can refine its outputs and improve over time, making it increasingly reliable for content generation, automation, and decision-making.

Adaptability & Versatility

One of the most powerful aspects of Generative AI is its ability to function across diverse industries and use cases. Whether it’s generating realistic human-like conversations in chatbots, composing music, or designing virtual environments, the technology adapts seamlessly to different applications. Its versatility allows businesses to leverage AI-driven creativity without being limited to a single domain.

Customization & Personalization

Generative AI can tailor its outputs based on user inputs, preferences, or specific guidelines. This makes it an invaluable tool for personalized content creation, such as crafting targeted marketing messages, customizing chatbot responses, or even generating personalized artwork. By adjusting parameters or fine-tuning models with proprietary data, businesses can ensure that the AI-generated content aligns with their brand voice and audience expectations.

Efficiency & Automation

Beyond creativity, Generative AI significantly enhances efficiency by automating tasks that traditionally require human effort. Whether it’s generating reports, summarizing large volumes of text, or producing high-quality design assets, AI-driven automation saves time and resources. This efficiency allows businesses to scale their operations while reducing costs and freeing up human talent for higher-level strategic work.

What Are Large Language Models?

Now that we’ve covered generative AI, let’s turn our attention to large language models (LLMs).

LLMs are a particular type of generative AI.

Unlike MusicLM or DALL-E, which generate audio and images, LLMs are trained on textual data and then used to output new text, whether that be a sales email or an ongoing dialogue with a customer.

(A technical note: though people are mostly using GPT-4 for text generation, it is an example of a “multimodal” LLM because it has also been trained on images. According to OpenAI’s documentation, image input functionality is currently being tested, and is expected to roll out to the broader public soon.)

What Are Examples of Large Language Models?

By far the most well-known example of an LLM is OpenAI’s “GPT” series, the latest of which is GPT-4. The acronym “GPT” stands for “Generative Pre-Trained Transformer”, and it hints at many underlying details about the model.

GPT models are based on the transformer architecture, for example, and they are pre-trained on a huge corpus of textual data taken predominantly from the internet.

GPT, however, is not the only example of an LLM.

The BigScience Large Open-science Open-access Multilingual Language Model – known more commonly by its mercifully short nickname, “BLOOM” – was built by more than 1,000 AI researchers as an open-source alternative to GPT.

BLOOM is capable of generating text in almost 50 natural languages, and more than a dozen programming languages. Being open-sourced means that its code is freely available, and no doubt there will be many who experiment with it in the future.

In March, Google announced Bard, a generative language model built atop its Language Model for Dialogue Applications (LaMDA) transformer technology.

As with ChatGPT, Bard is able to work across a wide variety of domains, offering help with planning baby showers, explaining scientific concepts to children, or suggesting what to make for lunch based on what you already have in your fridge.

Key Features of Large Language Models

LLMs represent a breakthrough in AI-powered language processing, offering unparalleled natural language capabilities, scalability, and adaptability. Their ability to understand and generate text with contextual awareness makes them invaluable across industries. Below, we explore the key features that make LLMs so powerful and their significance in real-world applications.

Natural Language Understanding & Generation

One of the defining characteristics of LLMs is their ability to comprehend and generate human language with contextually relevant and coherent output. Unlike traditional rule-based NLP systems, LLMs leverage deep learning to process vast amounts of text, enabling them to recognize nuances, idioms, and contextual dependencies.

Why this matters: This enables more natural interactions in chatbots, virtual assistants, and customer support tools. It also improves content generation for marketing, reporting, and creative writing, while multilingual capabilities enhance accessibility and global communication.

Scalability & Versatility

LLMs are designed to process and generate text at an unprecedented scale, making them versatile across a wide range of applications. They can analyze large datasets, respond to queries in real time, and generate text in multiple formats—from technical documentation to creative storytelling.

Why this matters: Their scalability allows businesses to automate tasks, improve decision-making, and generate personalized content efficiently. This versatility makes them useful across industries like healthcare, finance, and education, streamlining operations and enhancing user engagement.

Adaptability Through Fine-Tuning

While general-purpose LLMs are highly capable, their performance can be further enhanced through fine-tuning—a process that tailors the model to specific domains or tasks. By training an LLM on industry-specific data, organizations can improve accuracy, reduce bias, and align responses with their unique needs.

Why this matters: Fine-tuning increases accuracy for specialized tasks, ensuring better performance in industries like healthcare and law. It also helps businesses maintain brand consistency and reduces the need for manual corrections, leading to more efficient workflows.

What is Agentic AI?

Agentic AI refers to artificial intelligence systems that go beyond passive data processing to actively pursue objectives with minimal human intervention. Unlike traditional AI models that rely on explicit prompts or predefined workflows, agentic AI autonomously takes initiative, gathers information, and makes decisions in pursuit of a goal. 

At its core, agentic AI operates with a level of autonomy that allows it to dynamically adapt to new information, refine its approach, and execute tasks with greater independence. These systems can analyze complex scenarios, break down multi-step problems, and determine the best course of action without requiring constant human oversight.

Advancements in AI, from reinforcement learning to multi-agent collaboration, have enabled agentic AI to evolve from passive tools into autonomous problem-solvers. Businesses now use it to streamline workflows, enhance decision-making, and drive efficiency, signaling a shift toward proactive AI systems.

What are some of the key features of Agentic AI?

As stated before, Agentic AI represents a significant evolution beyond traditional AI models, offering enhanced autonomy and decision-making capabilities. Let’s discuss some of Agentic AI’s key features:

Autonomous Action

One of the defining characteristics of Agentic AI is its ability to operate without constant human intervention. Rather than waiting for step-by-step instructions, it executes tasks independently, identifying the necessary actions to reach an objective. This autonomy allows it to function in dynamic environments, where manual oversight would be inefficient or impractical.

Dynamic Decision Making

Agentic AI leverages real-time data to continuously refine its decision-making process. It evaluates multiple factors, adapts to changing conditions, and optimizes its approach based on the latest available information. This ability to course-correct and adjust strategies in real time makes it particularly effective for complex problem-solving and unpredictable scenarios.

Goal-Oriented Behavior

Unlike conventional AI models that react to prompts, Agentic AI operates with a clear end goal in mind. It identifies obstacles, prioritizes tasks, and makes trade-offs to achieve its objectives efficiently. Whether optimizing workflows, automating multi-step processes, or navigating constraints, it maintains a results-driven approach.

Proactive Resource Gathering

To function effectively, Agentic AI does not simply wait for relevant data or tools to be provided—it actively seeks out the necessary resources. This can include retrieving information from databases, leveraging APIs, integrating with other systems, or even initiating sub-tasks to support the primary goal. This proactive approach enhances efficiency and reduces dependency on human input.

Self-Improvement Through Feedback

Agentic AI continuously refines its performance through iterative learning. By analyzing the outcomes of past actions, it identifies areas for improvement and adjusts future behaviors accordingly. This feedback loop allows it to become more effective over time, reducing errors and increasing efficiency in completing assigned tasks.

What Are Some Examples of Agentic AI?

Now that we have explained what Agentic AI is and some of its key features, you may be wondering how businesses in various industries are using Agentic AI. Here are a few examples:

1. Personalized AI Assistants: Beyond Basic Task Execution

AI assistants have come a long way from setting reminders and answering basic questions. Today’s agentic AI assistants can handle entire workflows, making life a whole lot easier.

Imagine having an AI-powered executive assistant that not only manages your calendar but also rearranges meetings when scheduling conflicts pop up, prioritizes your emails, and even drafts responses for you. In sales, AI agents integrated into CRMs can track conversations, spot promising leads, and automatically schedule follow-ups—no manual input required.

2. AI in Healthcare: Keeping an Eye on Your Health

Healthcare is another area where agentic AI is making a real difference. Instead of passively analyzing data, these AI systems can continuously monitor patient health, detect problems early, and even adjust treatment plans on the fly.

For example, some AI-powered health monitoring tools track vital signs in real-time, alerting doctors if something seems off. Others can analyze medical records and suggest personalized treatments based on a patient’s history. In some cases, AI can even adjust medication dosages automatically, ensuring patients get the right treatment without constant doctor intervention.

3. AI That Actually Solves Customer Support Issues

We’ve all had frustrating experiences with chatbots that don’t understand what we’re asking. Agentic AI is fixing that by powering virtual support agents that don’t just respond to questions—they solve problems.

Picture this: You need to return an item, and instead of navigating through endless menus, an AI agent processes your return, updates your order, and even schedules a pickup without you lifting a finger. In IT support, AI-powered agents can troubleshoot issues, restart systems, and even execute fixes automatically. No more waiting on hold for help—AI’s got it covered.

How Do Agentic AI, Generative AI, and LLMs Compare?

Artificial intelligence has rapidly evolved, with distinct categories emerging to define different capabilities and use cases. While Generative AI, Large Language Models (LLMs), and Agentic AI share foundational principles, they each serve unique purposes.

Key Differences Between Generative AI, LLMs, and Agentic AI

  1. Generative AI: This is the broad umbrella term for AI models that create content, whether text, images, music, or video. These models generate outputs based on patterns learned from large datasets but typically require user input to function effectively.
  2. Large Language Models: A subset of Generative AI, LLMs specialize in language-based tasks such as text generation, summarization, translation, and answering questions. They process vast amounts of textual data to produce human-like responses but do not inherently make decisions or take autonomous action.
  3. Agentic AI: Unlike Generative AI and LLMs, Agentic AI goes a step further by incorporating autonomy and goal-driven behavior. It not only generates outputs but also plans, executes, and adapts actions based on objectives. This makes Agentic AI well-suited for tasks that require decision-making, iterative problem-solving, and multi-step execution.

How These AI Systems Can Work Together

Agentic AI, Generative AI, and LLMs are not mutually exclusive; rather, they complement each other in complex workflows. For example:

  • A Generative AI model might generate a marketing email.
  • An LLM could refine the email’s tone and structure based on customer preferences.
  • An Agentic AI system could autonomously schedule and send the email, analyze customer responses, and iterate on the next campaign.
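That division of labor can be sketched in a few lines of Python. Every function name and string transformation here is invented for the sketch; in practice each layer would be backed by a model or an agent framework:

```python
# Toy illustration of the three layers cooperating. All names are
# hypothetical and stand in for real model-backed components.
def generate_email(product: str) -> str:
    # Generative AI layer: drafts original content.
    return f"Check out our new {product}!"

def refine_tone(email: str, formal: bool) -> str:
    # LLM layer: adjusts tone and structure to customer preferences.
    return email.replace("Check out", "We invite you to explore") if formal else email

def run_campaign(product: str, formal: bool) -> dict:
    # Agentic layer: orchestrates drafting, refinement, and scheduling.
    draft = generate_email(product)
    final = refine_tone(draft, formal)
    # A real agentic system would also send the email, analyze customer
    # responses, and iterate on the next campaign.
    return {"email": final, "status": "scheduled"}
```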

This synergy enables businesses and organizations to streamline operations, automate complex workflows, and improve decision-making at scale.

When to Use Generative AI, LLMs, or Agentic AI

As AI continues to evolve, different types of AI serve distinct roles in automation, content creation, and decision-making. Choosing the right approach—Generative AI, Large Language Models (LLMs), or Agentic AI—depends on the complexity of the task, the level of autonomy required, and the desired outcome. Here’s when to use each.

When to Use Generative AI

Generative AI is best suited for tasks that involve creativity, personalization, and idea generation. It excels at producing original content and enhancing user engagement by tailoring outputs dynamically.

  1. For Creative Content Generation: Generative AI shines when creating unique visuals, music, text, or videos. It’s widely used in industries like marketing, design, and entertainment.
  2. For Prototyping and Idea Generation: When brainstorming ideas or rapidly iterating on design concepts, generative AI can provide inspiration and streamline workflows.
  3. For Enhancing Personalization: Generative AI helps tailor content for individual users, making it a powerful tool in marketing, product recommendations, and customer engagement.

When to Use Large Language Models (LLMs)

LLMs specialize in processing and generating human-like text, making them ideal for knowledge work, communication, and conversational AI.

  1. For Text-Based Tasks: LLMs handle content creation, summarization, translation, and text analysis with high efficiency.
  2. For Conversational AI: They power chatbots, virtual assistants, and customer support tools by enabling natural, context-aware conversations.
  3. For Knowledge Work and Research: LLMs assist in research, code generation, and complex problem-solving, making them valuable for technical fields.

When to Use Agentic AI

Agentic AI goes beyond content generation and text processing—it autonomously executes tasks, makes decisions, and manages workflows with minimal human input.

  1. For Automating Multi-Step Tasks: Agentic AI can plan, make decisions, and execute complex workflows without constant human oversight.
  2. For Goal-Oriented, CX-Focused Systems: In scenarios where AI needs to take action toward a specific objective, agentic AI ensures execution beyond just responding to queries.
  3. For Enhancing Productivity in Complex Workflows: When managing multiple tools or systems, agentic AI improves efficiency by handling strategic yet repetitive tasks.

Utilizing Generative AI In Your Business

AI is evolving fast, but not all AI is created equal. Generative AI is great for creativity and LLMs handle text-based tasks, but agentic AI is the game-changer—turning AI from an assistant into an autonomous problem-solver. That’s where Quiq stands out. Instead of just generating responses, Quiq’s agentic AI takes action, automating complex tasks and making real decisions so businesses can scale without the bottlenecks. It’s AI that doesn’t just assist—it gets things done.

Quiq is the leader in enterprise agentic AI for CX. If you’re an enterprise wondering how you can use advanced AI technologies such as agentic AI, generative AI, and large language models for applications like customer service, schedule a demo to see what the Quiq platform can offer you!

What is Agentic AI? Everything You Need to Know.

The landscape of artificial intelligence is rapidly evolving, and at the forefront of this evolution is agentic AI. As noted by UiPath, “the convergence of powerful LLMs (large language models), sophisticated machine learning, and seamless enterprise integration has enabled the rise of agentic AI—which is the ‘brainpower’ behind AI agents.” This powerful technology represents a significant leap forward in how AI systems can autonomously operate, make decisions, and execute complex tasks.

While traditional AI and generative AI have made significant strides in automation and content creation, agentic AI addresses the crucial gaps in autonomous decision-making and task execution. It’s becoming increasingly clear that this technology will reshape how businesses operate, particularly in areas requiring sophisticated problem-solving and adaptability.

What is agentic AI?

Agentic AI refers to artificial intelligence systems that can autonomously execute tasks, make decisions, and adapt to changing conditions in real time. Unlike more passive AI systems, agentic AI demonstrates agency—the ability to act independently and make choices based on understanding the environment and objectives.

As a side note here: I led a webinar recently called From Contact Center to Agentic AI Leader: Embracing AI to Upgrade CX. My colleague Quiq VP of EMEA Chris Humphris and I went deep into agentic AI specifically for the contact center. I highly recommend you watch the replay or read the recap if you’re interested in how this technology works within the confines of the contact center—and what’s needed to make it successful at the platform level. Here’s a hint:

Agentic AI Platform Requirements

Watch the full webinar here.

How does agentic AI work?

Agentic AI operates through a sophisticated combination of technologies and approaches. As IBM explains, “Agentic AI systems provide the best of both worlds: using LLMs to handle tasks that benefit from the flexibility and dynamic responses while combining these AI capabilities with traditional programming for strict rules, logic, and performance. This hybrid approach enables the AI to be both intuitive and precise.”

The system works by integrating multiple components:

  • Language understanding: Processing and comprehending natural language inputs
  • Decision making: Analyzing situations and determining appropriate actions
  • Task execution: Utilizing APIs, IoT devices, and external systems to perform actions
  • Learning and adaptation: Improving performance based on outcomes and feedback

For example, in customer service, an agentic AI system can:

  1. Understand a customer’s inquiry about a missing delivery
  2. Access order tracking systems to verify shipping status
  3. Identify delivery issues and initiate appropriate actions
  4. Communicate updates to the customer
  5. Automatically schedule redelivery if necessary
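The five steps above can be sketched as a simple orchestration flow. Each helper below is a stub standing in for a real integration (order-tracking system, carrier API, messaging channel):

```python
# Stubbed orchestration of the missing-delivery flow; the helpers stand
# in for real system integrations and return canned data.
def lookup_order(order_id: str) -> dict:
    # Step 2: verify shipping status via the order-tracking system.
    return {"id": order_id, "status": "lost_in_transit"}  # stubbed data

def handle_missing_delivery(order_id: str) -> list[str]:
    actions = []
    order = lookup_order(order_id)
    if order["status"] == "lost_in_transit":       # Step 3: identify issue
        actions.append("filed carrier claim")      # initiate remediation
        actions.append("scheduled redelivery")     # Step 5: redeliver
    actions.append(f"notified customer about order {order_id}")  # Step 4
    return actions
```

The significant part is not any single call but the chaining: the agent moves from understanding to verification to action without a human handoff between steps.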

This customer service example demonstrates several key advancements over previous generations of AI assistants:

While traditional chatbots could only follow rigid, pre-programmed decision trees and provide templated responses, agentic AI shows true operational autonomy by orchestrating multiple systems and making contextual decisions.

The ability to seamlessly move between understanding natural language queries, accessing real-time shipping databases, evaluating delivery problems, and initiating concrete actions like rescheduling represents a quantum leap in capability.

Last-gen AI would typically need human handoffs at multiple points in this process – for instance, when moving from customer communication to backend systems access or when making judgment calls about appropriate remedial actions.

The agentic system’s ability to maintain context throughout the interaction while independently executing complex tasks showcases how modern AI can function as an independent problem-solver rather than just a conversational interface. This level of end-to-end automation and response was impossible with earlier generations of AI technology.


What is the difference between agentic AI and generative AI?

While both agentic AI and generative AI represent significant advances in artificial intelligence, they serve distinctly different purposes. Generative AI excels at creating content—text, images, code, or other media—based on patterns learned from training data. Agentic AI, however, goes beyond generation to actively make decisions and execute tasks.

Agentic AI vs. Generative AI

These technologies can work together synergistically, with generative AI providing content creation capabilities within an agentic AI’s broader decision-making framework.

Benefits of agentic AI

Key benefits include:

1. Autonomous operation

By eliminating the constraints of human-dependent processes, agentic AI creates a new paradigm of continuous, reliable service delivery that scales effortlessly with business demands. The result is:

  • Reduced human intervention: AI agents handle complex tasks independently, freeing human workers to focus on high-value activities requiring emotional intelligence and strategic thinking.
  • Consistent performance: The system maintains uniform quality standards regardless of workload, time of day, or complexity of tasks, eliminating human variability and fatigue-related errors.
  • 24/7 availability: Unlike human operators, AI agents operate continuously without fatigue, ensuring consistent service availability across all time zones.

2. Improved human-AI agent collaboration

Agentic AI changes the relationship between human agents and technology, creating a symbiotic partnership that enhances overall service delivery and job satisfaction. Here’s how:

  • Ensures consistency: AI agents establish and maintain standard operating procedures across teams, ensuring every customer interaction meets quality benchmarks regardless of which human agent is involved. This standardization helps eliminate variations in service quality, while still allowing for personal touch where needed.
  • Accelerates learning: New agents benefit from AI-powered guidance that provides suggestions and best practices, significantly reducing the time needed to achieve proficiency. The system learns from top performers and shares these insights across the entire team.
  • Reduces training time: By providing contextual assistance, agentic AI helps new agents become productive more quickly. Training modules adapt to individual learning patterns, focusing on areas where each agent needs the most support.
  • Improves agent performance with insights: The system continuously analyzes agent interactions, providing actionable feedback and performance metrics that help identify areas for improvement. These insights enable targeted coaching and development opportunities.
  • Improves job satisfaction and reduces agent turnover: By handling routine tasks and providing intelligent support, agentic AI allows agents to focus on more engaging, complex work that requires human empathy and problem-solving skills. This role enhancement leads to higher job satisfaction and lower turnover rates.

3. Enhanced efficiency

Through intelligent automation and rapid processing capabilities, agentic AI significantly improves operational performance across organizations, resulting in:

  • Faster task completion: AI agents process and execute tasks at machine speed, dramatically reducing resolution times compared to manual processes.
  • Reduced error rates: Systematic processing and built-in validation reduce mistakes common in human-operated systems.
  • Streamlined workflows: Intelligent routing and automated handoffs eliminate bottlenecks and optimize process flows.

4. Real-time adaptability

The system’s ability to learn and adjust in real time ensures optimal performance in dynamic business environments. This shows up as:

  • Dynamic response to changing conditions: AI agents automatically adjust their approach based on current conditions and new information.
  • Continuous learning and improvement: The system learns from each interaction, continuously refining its responses and decision-making processes.
  • Personalized solutions: Advanced analytics enable tailored responses that account for individual user preferences and historical interactions.

5. Integration capabilities

Agentic AI integrates with existing business systems to create a unified operational environment. Main ways include:

  • More seamless connection: The technology easily integrates with current business tools and platforms, maximizing existing investments.
  • Unified data utilization: AI agents can access and analyze data from multiple sources to make informed decisions.
  • Comprehensive solution delivery: The system coordinates across different platforms and departments to deliver complete solutions.

6. Cost-effectiveness

Implementation of agentic AI leads to significant cost savings and improved resource utilization. Top areas for savings include:

  • Reduced operational costs: Automation of routine tasks and improved efficiency lead to lower operational expenses.
  • Intelligent workload distribution: Ensures optimal use of both human and technological resources.

Use cases for agentic AI

Agentic AI’s applications span numerous industries and use cases. Let’s look at the four industries that, from our perspective, are ripest for benefits, and the use cases best poised for AI.

1. Customer service

In customer service, agentic AI improves support operations from reactive to proactive, enabling intelligent interactions that enhance customer satisfaction while reducing costs. Top use cases include:

  • Query resolution: Agentic AI systems can understand, process, and resolve customer inquiries in real-time, handling everything from basic FAQ responses to complex problem-solving. For example, an AI agent can troubleshoot technical issues, process refunds, or update account information without a human being involved.
  • Ticket management: The technology automatically categorizes, prioritizes, and routes support tickets based on urgency and complexity. It can resolve straightforward issues immediately while intelligently escalating more complex cases to appropriate human agents.
  • Proactive support: AI agents monitor customer behavior patterns and system metrics to identify potential issues before they become problems. They can initiate contact with customers to prevent issues or offer assistance before it’s requested.
  • Personalized assistance: By analyzing customer history, preferences, and behavior patterns, agentic AI delivers tailored support experiences. This might include offering specific product recommendations or customizing communication styles to match customer preferences—useful in most industries, but especially in travel and hospitality.

2. eCommerce and retail

In retail and eCommerce, agentic AI revolutionizes the retail experience by creating seamless, personalized shopping journeys while optimizing backend operations for maximum efficiency and profitability. Best use cases include:

  • Inventory management: Agentic AI systems continuously monitor stock levels, analyze sales patterns, and automatically adjust inventory based on real-time demand. They can trigger reorders, predict seasonal fluctuations, and optimize warehouse distribution.
  • Personalized shopping recommendations: The technology analyzes customer browsing history, purchase patterns, and demographic data to deliver highly relevant product suggestions. These recommendations adapt in real-time based on customer interactions.
  • Order processing: AI agents handle the entire order lifecycle, from initial purchase to delivery tracking. They can process payments, coordinate with shipping partners, and proactively address potential delivery issues.
  • Customer engagement: Through sophisticated analysis of customer behavior, agentic AI creates personalized marketing campaigns, timing promotions optimally, and adjusting pricing strategies based on demand and competition.

3. Business automation

By integrating intelligent decision-making with execution capabilities, agentic AI streamlines complex business processes and eliminates operational bottlenecks across organizations. Start automation targeting:

  • Supply chain optimization: AI agents monitor and adjust supply chain operations in real-time, coordinating with suppliers, managing logistics, and responding to disruptions automatically.
  • Process automation: The technology streamlines complex workflows by automating repetitive tasks, managing approvals, and coordinating cross-departmental activities.
  • Resource allocation: Agentic AI optimizes the distribution of human and material resources based on current demands and predicted future needs.
  • Workflow management: The system orchestrates complex business processes, ensuring tasks are completed in the correct order and by the appropriate parties.

4. Healthcare

Agentic AI enhances patient care and operational efficiency by combining real-time monitoring with intelligent decision support and automated administrative processes. From what we’re seeing, the biggest opportunities to apply agentic AI rest in:

  • Patient monitoring: AI agents continuously track patient vital signs and health metrics, alerting medical staff to concerning changes and predicting potential complications.
  • Treatment planning: The technology assists healthcare providers by analyzing patient data, medical history, and current research to suggest optimal treatment approaches.
  • Diagnostic support: Agentic AI analyzes medical imaging, lab results, and patient symptoms to assist in accurate diagnosis and treatment recommendations.
  • Administrative tasks: The system streamlines healthcare by managing appointments, processing insurance claims, and maintaining patient records.

Agentic AI challenges

Let’s take a look at the biggest challenges with agentic AI right now.

1. Ethical considerations

The autonomous nature of agentic AI raises ethical concerns that require careful attention. These systems, designed to make independent decisions and take action, must operate within established ethical frameworks to ensure responsible deployment.

Key ethical challenges include:

  • Accountability for AI decisions and actions
  • Transparency in decision-making processes
  • Potential bias
  • Impact on human autonomy and agency

Quiq SVP of Engineering Bill O’Neill recently talked to VUX World’s Kane Simms about this very issue:

2. Data security

Data security represents a critical challenge in agentic AI implementation, as these systems often require access to sensitive information to function effectively. (If you’re curious, you can learn about our approach to security here).

Primary security concerns include:

  • Protection of training data and model parameters
  • Secure communication channels for AI agents
  • Prevention of adversarial attacks
  • Data privacy compliance (GDPR, CCPA, etc.)
  • Access control and authentication mechanisms

3. Integration challenges

Incorporating agentic AI into both customer integrations and your own company integrations can mean significant hurdles, like:

  • Legacy system compatibility
  • API standardization and communication protocols
  • Performance optimization
  • Scalability concerns
  • Resource allocation and management

4. Regulatory compliance

The evolving regulatory landscape surrounding AI technology presents potential issues, including:

  • Adherence to emerging AI regulations
  • Cross-border compliance requirements
  • Documentation and audit trails
  • Risk assessment and mitigation
  • Regular compliance monitoring and updates

5. Performance monitoring

Maintaining and optimizing agentic AI system performance requires continuous monitoring and adjustment:

  • Real-time performance metrics
  • Quality assurance processes
  • System reliability and availability
  • Error detection and correction
  • Performance optimization strategies

These challenges highlight the complexity of implementing agentic AI systems and underscore the importance of careful planning and robust risk management strategies. Success in deploying these systems requires a comprehensive approach that addresses technical, ethical, and operational concerns, while maintaining focus on business value and user needs.

Importantly, when you partner with agentic AI vendor Quiq, our AI platform and team neutralize these challenges for you.

The future of agentic AI: Shaping tomorrow’s enterprise workflows

As we stand at the intersection of technological innovation and business transformation, agentic AI emerges as a cornerstone of future enterprise operations. But what’ll follow? Here’s what I think.

Technical evolution and integration

The future of agentic AI lies in its ability to integrate with existing enterprise systems while pushing the boundaries of what’s possible. Advanced API ecosystems and sophisticated middleware solutions are already enabling AI agents to coordinate across previously siloed systems, creating unified workflows that span entire organizations.

As these integration capabilities mature, we’ll see the emergence of truly intelligent enterprises where data flows freely, and decisions are made with remarkable speed and accuracy.

The next generation of agentic AI systems will feature enhanced natural language processing capabilities, enabling a more nuanced understanding of context and intent. This advancement will allow AI agents to handle increasingly complex tasks while maintaining high accuracy levels. We’re moving toward systems that can not only execute predefined workflows but also design and optimize them in real time based on changing business conditions.

Enhancing enterprise workflows

The impact of agentic AI on enterprise workflows will be substantial. I believe future systems will feature the following:

1. Predictive process optimization

AI agents will move beyond reactive process management to predictive optimization. By analyzing patterns across millions of workflow executions, these systems will automatically identify potential bottlenecks before they occur and implement preventive measures. This capability will enable organizations to maintain peak operational efficiency while minimizing disruptions.

2. Dynamic resource allocation

The future workplace will see AI agents dynamically managing both human and technological resources. These systems will understand the strengths and limitations of different resource types, automatically routing work to optimize for efficiency, cost, and quality. This intelligent orchestration will create more flexible, resilient organizations capable of adapting to changing market conditions in real time.

3. Autonomous decision networks

As agentic AI evolves, we’ll see the emergence of decision networks where multiple AI agents collaborate to solve complex business challenges. These networks will coordinate across departments and functions, making decisions that optimize for overall business outcomes rather than departmental metrics.

Enhanced learning and adaptation

The future of agentic AI lies in its ability to learn and adapt at faster paces. Next-generation systems will feature:

1. Collective learning

AI agents will learn not just from their own experiences but from the collective experiences of all instances across an organization or industry. This shared learning will accelerate the development of best practices and enable rapid adaptation to new challenges or opportunities.

2. Contextual understanding

Future systems will demonstrate deeper understanding of business context, enabling them to make more nuanced decisions that account for both explicit and implicit factors. This enhanced contextual awareness will lead to more sophisticated problem-solving capabilities and better alignment with business objectives.

3. Personalization at scale

As AI agents become more sophisticated, they can deliver highly personalized experiences while maintaining operational efficiency. This will enable organizations to provide custom solutions at scale without sacrificing speed or quality.

Creating more resilient organizations

The evolution of agentic AI will contribute to building more resilient organizations through:

1. Adaptive workflows

Future systems will automatically adjust workflows based on changing conditions, ensuring business continuity even during unprecedented events. This adaptability will be key to maintaining operational efficiency in an increasingly volatile business environment.

2. Proactive risk management

AI agents will continuously monitor operations for potential risks, implementing preventive measures before issues arise. This proactive approach will help organizations maintain stability while pursuing innovation.

3. Sustainable scaling

The future of agentic AI will enable organizations to scale operations more sustainably, automatically adjusting processes to maintain efficiency as the organization grows.

Looking ahead

While challenges around data quality, system integration, and ethical considerations persist, the trajectory of agentic AI points toward increasingly sophisticated systems. Organizations that embrace this technology and prepare for its evolution will be better positioned to:

  • Create more efficient workflows that respond to changing business needs
  • Deliver personalized experiences at scale
  • Build more resilient organizations capable of thriving in uncertain conditions
  • Drive innovation through intelligent process optimization

As we move forward, the key to success will lie not just in implementing agentic AI, but in creating organizational cultures that can effectively leverage its capabilities while maintaining human oversight and strategic direction. The future belongs to organizations that can strike this balance, using agentic AI to enhance human capabilities, rather than replace them.

We’re only beginning to scratch the surface of what’s possible. As the technology continues to evolve, it will enable new forms of business operation that are more resilient than ever before.

I love Bill’s take on this in another clip from his conversation with Kane:

Final thoughts on agentic AI and how to get started with it

Agentic AI represents a significant advancement in artificial intelligence, offering businesses the ability to automate complicated tasks while maintaining intelligence in decision-making. As organizations seek to improve efficiency and customer experience, agentic AI provides a powerful solution that goes beyond traditional automation and generative AI capabilities.

Quiq stands at the forefront of this technology, offering agentic AI solutions that help businesses improve their operations and customer interactions. With a deep understanding of both the technology and business needs, Quiq provides sophisticated AI agents that can handle complex tasks and drive the outcomes your business cares about.

Everything You Need to Know About LLM Integration

It’s hard to imagine an application, website or workflow that wouldn’t benefit in some way from the new electricity that is generative AI. But what does it look like to integrate an LLM into an application? Is it just a matter of hitting a REST API with some basic auth credentials, or is there more to it than that?

In this article, we’ll enumerate the things you should consider when planning an LLM integration.

Why Integrate an LLM?

At first glance, it might not seem like LLMs make sense for your application—and maybe they don’t. After all, is the ability to write a compelling poem about a lost Highland Cow named Bo actually useful in your context? Or perhaps you’re not working on anything that remotely resembles a chatbot. Do LLMs still make sense?

The important thing to know about ‘Generative AI’ is that it’s not just about generating creative content like poems or chat responses. Generative AI (LLMs) can be used to solve a bevy of other problems that roughly fall into three categories:

  1. Making decisions (classification)
  2. Transforming data
  3. Extracting information

Let’s use the example of an inbound email from a customer to your business. How might we use LLMs to streamline that experience?

  • Making Decisions
    • Is this email relevant to the business?
    • Is this email low, medium or high priority?
    • Does this email contain inappropriate content?
    • What person or department should this email be routed to?
  • Transforming data
    • Summarize the email for human handoff or record keeping
    • Redact offensive language from the email subject and body
  • Extracting information
    • Extract information such as a phone number, business name, or job title from the email body to be used by other systems
  • Generating Responses
    • Generate a personalized, contextually-aware auto-response informing the customer that help is on the way
    • Alternatively, deploy a more sophisticated LLM flow (likely involving RAG) to directly address the customer’s need

It’s easy to see how solving these tasks would increase user satisfaction while also improving operational efficiency. All of these use cases are utilizing ‘Generative AI’, but some feel more generative than others.

When we consider decision making, data transformation and information extraction in addition to the more stereotypical generative AI use cases, it becomes harder to imagine a system that wouldn’t benefit from an LLM integration. Why? Because nearly all systems have some amount of human-generated ‘natural’ data (like text) that is no longer opaque in the age of LLMs.

Prior to LLMs, it was possible to solve most of the tasks listed above. But, it was exponentially harder. Let’s consider ‘is this email relevant to the business’. What would it have taken to solve this before LLMs?

  • A dataset of example emails labeled true if they’re relevant to the business and false if not (the bigger the better)
  • A training pipeline to produce a custom machine learning model for this task
  • Specialized hardware or cloud resources for training & inferencing
  • Data scientists, data curators, and Ops people to make it all happen

LLMs can solve many of these problems with radically lower effort and complexity, and they will often do a better job. With traditional machine learning models, your model is, at best, as good as the data you give it. With generative AI you can coach and refine the LLM’s behavior until it matches what you desire – regardless of historical data.

For these reasons LLMs are being deployed everywhere—and consumers’ expectations continue to rise.

How Do You Feel About LLM Vendor Lock-In?

Once you’ve decided to pursue an LLM integration, the first issue to consider is whether you’re comfortable with vendor lock-in. The LLM market is moving at lightspeed with the constant release of new models featuring new capabilities like function calls, multimodal prompting, and of course increased intelligence at higher speeds. Simultaneously, costs are plummeting. For this reason, it’s likely that your preferred LLM vendor today may not be your preferred vendor tomorrow.

Even at a fixed point in time, you may need more than a single LLM vendor.

In our recent experience, there are certain classification problems that Anthropic’s Claude does a better job of handling than comparable models from OpenAI. Similarly, we often prefer OpenAI models for truly generative tasks like generating responses. All of these LLM tasks might be in support of the same integration so you may want to look at the project not so much as integrating a single LLM or vendor, but rather a suite of tools.

If your use case is simple and low volume, a single vendor is probably fine. But if you plan to do anything moderately complex or high scale you should plan on integrating multiple LLM vendors to have access to the right models at the best price.

Resiliency & Scalability are Earned—Not Given

Making API calls to an LLM is trivial. Ensuring that your LLM integration is resilient and scalable requires more elbow grease. In fact, LLM API integrations pose unique challenges:

  • Challenge: They are pretty slow. Solution: If your application is high-scale and you’re doing synchronous (threaded) network calls, your application won’t scale very well since most threads will be blocked on LLM calls. Consider switching to async I/O. You’ll also want to support running multiple prompts in parallel to reduce visible latency to the user.
  • Challenge: They are throttled by requests per minute and tokens per minute. Solution: Estimate your LLM usage in terms of requests and tokens per minute, and work with your provider(s) to ensure sufficient bandwidth for peak load.
  • Challenge: They are (still) kinda flakey (unpredictable response times, unresponsive connections). Solution: Employ various retry schemes in response to timeouts, 500s, 429s (rate limits), etc.
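A retry scheme for the flakiness problem usually means exponential backoff with jitter on timeouts and retryable statuses. A minimal sketch, assuming the call returns an (HTTP status, body) pair; real clients typically raise typed exceptions instead:

```python
import random
import time

RETRYABLE = {429, 500, 503}  # rate limits and transient server errors

def call_with_retries(call, max_attempts=4, base_delay=0.5):
    """Retry an LLM call on timeouts and retryable HTTP statuses,
    using exponential backoff with jitter between attempts."""
    for attempt in range(max_attempts):
        try:
            status, body = call()
        except TimeoutError:
            status, body = None, None  # treat timeouts as retryable
        if status == 200:
            return body
        if status is not None and status not in RETRYABLE:
            raise RuntimeError(f"non-retryable status {status}")
        # back off exponentially, with jitter to avoid thundering herds
        time.sleep(base_delay * (2 ** attempt) * random.random())
    raise RuntimeError("exhausted retries")
```

The jitter matters at scale: without it, a fleet of blocked callers retries in lockstep and re-triggers the same rate limit.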

The above remediations will help your application be scalable and resilient while your LLM service is up. But what if it’s down? If your LLM integration is on a critical execution path you’ll want to support automatic failover. Some LLMs are available from multiple providers:

  • OpenAI models are hosted by OpenAI itself as well as Azure
  • Anthropic models are hosted by Anthropic itself as well as AWS

Even if an LLM only has a single provider, or even if it has multiple, you can also provision the same logical LLM in multiple cloud regions to achieve a failover resource. Typically you’ll want the provider failover to be built into your retry scheme. Our failover mechanisms get tripped regularly out in production at Quiq, no doubt partially because of how rapidly the AI world is moving.
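Provider failover can be as simple as an ordered list of equivalent endpoints tried in sequence. A sketch, assuming each provider is a (name, callable) pair — for example the same model hosted by the vendor directly and by a cloud region:

```python
def with_failover(providers, prompt):
    """Try each configured provider/region in order until one succeeds.
    `providers` is a list of (name, call_fn) pairs for the same logical LLM."""
    errors = []
    for name, call_fn in providers:
        try:
            return call_fn(prompt)
        except Exception as exc:  # network errors, 5xx, rate limits, ...
            errors.append(f"{name}: {exc}")
    raise RuntimeError("all providers failed: " + "; ".join(errors))
```

In practice this sits inside the retry scheme, so a single user request exhausts retries against one provider before falling over to the next.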

Are You Actually Building an Agentic Workflow?

Oftentimes you have a task that you know is well-suited for an LLM. For example, let’s say you’re planning to use an LLM to analyze the sentiment of product reviews. On the surface, this seems like a simple task that will require one LLM call that passes in the product review and asks the LLM to decide the sentiment. Will a single prompt suffice? What if we also want to determine if a given review contains profanity or personal information? What if we want to ask three LLMs and average their results?

Many tasks require multiple prompts, prompt chaining and possibly RAG (Retrieval Augmented Generation) to best solve a problem. Just like humans, AI produces better results when a problem is broken down into pieces. Such solutions are variously known as AI Agents, Agentic Workflows or Agent Networks and are why open source tools like LangChain were originally developed.

In our experience, pretty much every prompt eventually grows up to be an Agentic Workflow, which has interesting implications for how it’s configured & monitored.
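The review-analysis example above already wants more than one prompt. A minimal sketch of running several focused prompts in parallel instead of one mega-prompt; `call_llm` is a stub standing in for a real provider call, with hardcoded answers for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

def call_llm(prompt: str) -> str:
    # Stub for a real LLM call; returns canned answers so the sketch runs.
    if "sentiment" in prompt:
        return "negative"
    if "profanity" in prompt:
        return "no"
    return "unknown"

def analyze_review(review: str) -> dict:
    """Fan several narrow prompts out in parallel, then merge the answers.
    Each prompt does one job, which tends to beat one overloaded prompt."""
    prompts = {
        "sentiment": f"What is the sentiment of this review? {review}",
        "profanity": f"Does this review contain profanity? {review}",
    }
    with ThreadPoolExecutor() as pool:
        futures = {k: pool.submit(call_llm, p) for k, p in prompts.items()}
        return {k: f.result() for k, f in futures.items()}
```

Chaining works the same way, except a later prompt consumes an earlier prompt's output; frameworks like LangChain exist largely to configure and monitor these graphs.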

Be Ready for the Snowball Effect

Introducing LLMs can result in a technological snowball effect, particularly if you need to use Retrieval Augmented Generation (RAG). LLMs are trained on mostly public data that was available at a fixed point in the past. If you want an LLM to behave in light of up-to-date and/or proprietary data sources (which most non-trivial applications do) you’ll need to do RAG.

RAG refers to retrieving the up-to-date and/or proprietary data you want the LLM to use in its decision making and passing it to the LLM as part of your prompt.

Assuming you need to search a reference dataset like a knowledge base, product catalog or product manual, the retrieval part of RAG typically entails adding the following entities to your system:

1. An embedding model

An embedding model is roughly half of an LLM – it does a great job of reading and understanding information you pass it but instead of generating a completion it produces a numeric vector that encodes its understanding of the source material.

You’ll typically run the embeddings model on all of the business data you want to search and retrieve for the LLM. Most LLM providers also have embedding models, or you can hit one via any major cloud.

2. A vector database

Once you have embeddings for all of your business data, you need to store them somewhere that facilitates speedy search based on numeric vectors. Solutions like Pinecone and MilvusDB fill this need, but that means integrating a new vendor or hosting a new database internally.
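The retrieval step reduces to nearest-neighbor search over those vectors. A toy, self-contained sketch: the bag-of-words "embedding" here is a stand-in for a real embedding model, and the linear scan is a stand-in for a vector database query.

```python
import math

def embed(text: str) -> dict:
    """Toy embedding: bag-of-words counts. A real system calls an
    embedding model and gets a dense numeric vector back."""
    vec = {}
    for word in text.lower().split():
        vec[word] = vec.get(word, 0) + 1
    return vec

def cosine(a: dict, b: dict) -> float:
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, documents: list, k: int = 1) -> list:
    """Return the k documents most similar to the query; their text is
    then pasted into the LLM prompt as grounding context."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]
```

A vector database does exactly this ranking, but with approximate nearest-neighbor indexes so it stays fast over millions of embeddings.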

After implementing embeddings and a vector search solution, you can now retrieve information to include in the prompts you send to your LLM(s). But how can you trust that the LLM’s response is grounded in the information you provided and not something based on stale information or purely made up?

There are specialized deep learning models that exist solely for the purpose of ensuring that an LLM’s generative claims are grounded in facts you provide. This practice is variously referred to as hallucination detection, claim verification, NLI, etc. We believe NLI models are an essential part of a trustworthy RAG pipeline, but managed cloud solutions are scarce and you may need to host one yourself on GPU-enabled hardware.

Is a Black Box Sustainable?

If you bake your LLM integration directly into your app, you will effectively end up with a black box that can only be understood and improved by engineers. This could make sense if you have a decent-sized software shop and they’re the only folks likely to monitor or maintain the integration.

However, your best software engineers may not be your best (or most willing) prompt engineers, and you may wish to involve other personas like product and experience designers since an LLM’s output is often part of your application’s presentation layer & brand.

For these reasons, prompts will quickly need to move from code to configuration – no big deal. However, as an LLM integration matures it will likely become an Agentic Workflow involving:

  • More prompts, prompt parallelization & chaining
  • More prompt engineering
  • RAG and other orchestration

Moving these concerns into configuration is significantly more complex but necessary on larger projects. In addition, people will inevitably want to observe and understand the behavior of the integration to some degree.

For this reason it might make sense to embrace a visual framework for developing Agentic Workflows from the get-go. By doing so you open up the project to collaboration from non-engineers while promoting observability into the integration. If you don’t go this route be prepared to continually build out configurability and observability tools on the side.

Quiq’s AI Automations Take Care of LLM Integration Headaches For You

Hopefully we’ve given you a sense for what it takes to build an enterprise LLM integration. Now it’s time for the plug. The considerations outlined above are exactly why we built AI Studio and particularly our AI Automations product.

With AI automations you can create a serverless API that handles all the complexities of a fully orchestrated AI-flow, including support for multiple LLMs, chaining, RAG, resiliency, observability and more. With AI Automations your LLM integration can go back to being ‘just an API call with basic auth’.

Want to learn more? Dive into AI Studio or reach out to our team.

Request A Demo

How a Leading Office Supply Retailer Answered 35% More Store Associate Questions with Generative AI

In an era where artificial intelligence is rapidly transforming various industries, the retail sector is no exception. One leading national office supply retailer has taken a bold step forward, harnessing the power of generative AI to revolutionize their in-store experience and empower their associates.

This innovative approach has not only enhanced customer satisfaction but has also led to remarkable improvements in employee efficiency. In fact, the company has experienced a 35% increase in containment rates (with a 6-month average containment rate of 65%) vs. its legacy solution.

We’re excited to share the details of this groundbreaking initiative. Keep reading as we examine the company’s vision, their strategic approach to implementation, and the key objectives that drove their AI adoption. We’ll also discuss their GenAI assistant’s primary capabilities and how it’s improving both customer experiences and employee satisfaction. By the end, you’ll see how much potential lies in applying this use case to additional employees—not just in-store associates—as well as customers. There’s so much to unlock. Ready? Let’s dive in.

The Vision: Empowering Associates with GenAI

This company is dedicated to helping businesses of all sizes become more productive, connected, and inspired. Their team recognized the immense potential of GenAI early on. The vision? To create a GenAI-powered assistant that could enhance the capabilities of their store associates, leading to improved customer service, increased productivity, and higher job satisfaction.

Key objectives of the GenAI initiative:

  • Simplify store associate experience
  • Streamline access to information for associates
  • Improve customer service efficiency
  • Boost associate confidence and job satisfaction
  • Increase overall store associate productivity

Charting the Course to Building a GenAI-Powered Assistant

By partnering with Quiq, the national office supply retailer launched its employee-facing GenAI assistant in just 6 weeks. Here’s what the launch process looked like in 9 primary steps:

  1. Discover AI enhancements
  2. Pull content from current systems
  3. Run a proof of concept with the Quiq team
  4. Run testing across all categories of content
  5. Approve a pilot with the top associate group
  6. Refine content based on associate feedback for the chain rollout
  7. Run additional testing across all categories
  8. Start chain deployment to a larger district of stores
  9. Maintain content accuracy and refine based on updates

Examining the Office Supplier’s Phased Approach to Adoption

Pre-launch, the teams worked together to ensure all content was updated and accurate. They then launched a phased testing approach, going through several rounds of iterative testing. Next, the retailer shared the GenAI assistant with a top internal associate team to test and try to break it. Finally, the internal team enlisted a top associate group to build excitement ahead of launch.

At launch, the office supplier created a standalone page dedicated to the assistant and launched a SharePoint site to share updates with the internal team. They also facilitated internal learning sessions and adapted quickly when feedback numbers were low. Last but not least, the team made it fun by giving the assistant a playful, on-brand name and personality.

Post-launch, the retailer includes the AI assistant in all communications to associates, with tips on what to search for in the assistant. They also leverage the assistant’s proactive messaging capabilities to build excitement for new launches, promotions, and best practices.

Primary Capabilities and Focus

Launching the GenAI assistant has been transformative because it is trained on all things related to the office supply retailer, which has simplified and accelerated access to information. That means associates can help customers faster, answering questions accurately the first time, every time, regardless of tenure. Ultimately, AI is empowering associates to do even better work—including enhanced cross-selling and upselling with proactive messages.

Proactive messaging to associates helps keep rotating sales goals top of mind so they can weave additional revenue opportunities into customer interactions. For example, if the design services team has unexpected bandwidth, the AI assistant can send a message letting associates know, inspiring them to highlight design and print services to customers who may be interested. It also provides a fun countdown to important launches, like back-to-school season, and “fun facts” that help build up useful knowledge over time. It’s like bite-size bits of training.

GenAI Transforms the In-Store Experience in 4 Critical Ways

Implementing the GenAI assistant has had a profound impact on in-store operations. By providing associates with instant access to accurate information, it has:

  1. Enhanced Customer Service: Associates can now provide faster, more accurate responses to customer questions.
  2. Increased Efficiency: The time it takes to find information has been significantly reduced, allowing associates to serve more customers.
  3. Boosted Confidence: With a reliable AI assistant at their fingertips, associates feel more empowered in their roles. Plus, new associates can be as effective as experienced ones with the assistant by their side.
  4. Improved Job Satisfaction: The reduced stress of information retrieval has led to higher job satisfaction among associates. Not to mention, the GenAI assistant is there to converse and empathize with employees who experience stressful situations with customers.

Results + What’s Next?

As a result of launching its GenAI assistant with Quiq, our national office supply retailer customer has realized a:

  • 68% self-service resolution rate, allowing associates to get immediate answers to questions 2 out of 3 times
  • 4.82 out of 5 associate satisfaction rating with the AI assistant

And as for next steps, the team is excited to:

  • Launch a selling assisted path
  • Expand to additional departments within stores
  • Add more devices in store for easier accessibility
  • Integrate with internal systems to be able to answer even more types of questions with real-time access to orders and other information

The Lesson: Humans and AI Can Work Together to Play Their Strongest Roles

The office supply retailer’s successful implementation of GenAI serves as a powerful example of how the technology can transform retail operations by helping human employees work more efficiently. By focusing on empowering associates with AI, the company has not only improved customer service but also enhanced employee satisfaction and productivity.

Interested in Diving Deeper into GenAI?

Download Two Truths and a Lie: Breaking Down the Major GenAI Misconceptions Holding CX Leaders Back. This comprehensive guide illuminates the path through the intricate landscape of generative AI in CX. We cut through the fog of misconceptions, offering crystal-clear, practical advice to empower your decision-making.

Current Large Language Models and How They Compare

From ChatGPT and Bard to BLOOM and Claude, there is a veritable ocean of current LLMs (large language models) for you to choose from. Some are specialized for specific use cases, some are open-source, and there’s a huge variance in the number of parameters they contain.

If you’re a CX leader and find yourself fascinated by the potential of using this technology in your contact center, it can be hard to know how to run proper LLM comparisons.

Today, we’re going to tackle this issue head-on by talking about specific criteria you can use to compare LLMs, sources of additional information, and some of the better-known options.

But always remember that the point of using an LLM is to deliver a world-class customer experience, and the best option is usually the one that delivers multi-model functionality with a minimum of technical overhead.

With that in mind, let’s get started!

What is Generative AI?

While it may seem like large language models (LLMs) and generative AI have only recently emerged, the work they’re based on goes back decades. The journey began in the 1940s with Walter Pitts and Warren McCulloch, who designed artificial neurons based on early brain research. However, practical applications became feasible only after the development of the backpropagation algorithm in 1985, which enabled effective training of larger neural networks.

By 1989, researchers had developed a convolutional system capable of recognizing handwritten numbers. Innovations such as long short-term memory networks further enhanced machine learning capabilities during this period, setting the stage for more complex applications.

The 2000s ushered in the era of big data, crucial for training generative pre-trained models like ChatGPT. This combination of decades of foundational research and vast datasets culminated in the sophisticated generative AI and current LLMs we see transforming contact centers and related industries today.

What’s the Best Way to do a Large Language Models Comparison?

If you’re shopping around for a current LLM for a particular application, it makes sense to first clarify the evaluation criteria you should be using. We’ll cover that in the sections below.

Large Language Models Comparison By Industry Use Case

One of the more remarkable aspects of current LLMs is that they’re good at so many things. Out of the box, most can do very well at answering questions, summarizing text, translating between natural languages, and much more.

But there might be situations in which you’d want to boost the performance of one of the current LLMs on certain tasks. The two most popular ways of doing this are retrieval-augmented generation (RAG) and fine-tuning a pre-trained model.

Here’s a quick recap of what both of these are:

  • Retrieval-augmented generation refers to getting one of the general-purpose, current LLMs to perform better by giving them access to additional resources they can use to improve their outputs. You might hook it up to a contact-center CRM so that it can provide specific details about orders, for example.
  • Fine-tuning refers to taking a pre-trained model and honing it for specific tasks by continuing its training on data related to that task. A generic model might be shown hundreds of polite interactions between customers and CX agents, for example, so that it’s more courteous and helpful.
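The retrieval step at the heart of RAG can be sketched in a few lines. This is a deliberately naive illustration: the keyword-overlap scoring and the sample knowledge base below are stand-ins, and production systems would use vector embeddings and a real document store instead.

```python
# A toy retrieval-augmented generation (RAG) loop: retrieve the most relevant
# snippet from a small knowledge base, then prepend it to the user's question.
# Keyword overlap stands in for the embedding similarity a real system would use.
import re

KNOWLEDGE_BASE = [
    "Orders ship within 2 business days of purchase.",
    "Returns are accepted within 30 days with a receipt.",
    "Gift cards never expire and can be used online or in store.",
]

def _tokens(text: str) -> set[str]:
    """Lowercase and split into word tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question: str, docs: list[str]) -> str:
    """Return the document sharing the most word tokens with the question."""
    q = _tokens(question)
    return max(docs, key=lambda d: len(q & _tokens(d)))

def build_prompt(question: str) -> str:
    """Assemble the grounded prompt that would be sent to the LLM."""
    context = retrieve(question, KNOWLEDGE_BASE)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("What is your returns policy?")
```

Swapping the retrieval source for a contact-center CRM, as described above, changes only the `KNOWLEDGE_BASE` side of this loop; the prompt-assembly pattern stays the same.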

So, if you’re considering using one of the current LLMs in your business, there are a few questions you should ask yourself. First, are any of them perfectly adequate as-is? If they’re not, the next question is how “adaptable” they are. It’s possible to use RAG or fine-tuning with most of the current LLMs, the question is how easy they make it.

Of course, by far the easiest option would be to leverage a model-agnostic conversational AI platform for CX. These can switch seamlessly between different models, and some support RAG out of the box, meaning you aren’t locked into one current LLM and can always reach for the right tool when needed.

What’s a Good Way To Think About an Open-Source or Closed-Source Large Language Models Comparison?

You’ve probably heard of “open-source,” which refers to the practice of releasing source code to the public so that it can be forked, modified, and scrutinized.

The open-source approach has become incredibly popular, and this enthusiasm has partially bled over into artificial intelligence and machine learning. It is now fairly common to open-source software, datasets, and training frameworks like TensorFlow.

How does this translate to the realm of large language models? In truth, it’s a bit of a mixture. Some models are proudly open-sourced, while others jealously guard their model’s weights, training data, and source code.

This is one thing you might want to consider as you carry out your LLM comparisons. Some of the very best models, like ChatGPT, are closed-source. The downside of using such a model is that you’re entirely beholden to the team that built it. If they make updates or go bankrupt, you could be left scrambling at the last minute to find an alternative solution.

There’s no one-size-fits-all approach here, but it’s worth pointing out that a high-quality enterprise solution will support customization by allowing you to choose between different models (both close-source and open-source). This way, you needn’t concern yourself with forking repos or fret over looming updates, you can just use whichever model performs the best for your particular application.

Getting A Large Language Models Comparison Through Leaderboards and Websites

Instead of doing your LLM comparisons yourself, you could avail yourself of a service built for this purpose.

Whatever rumors you may have heard, programmers are human beings, and human beings have a fondness for ranking and categorizing pretty much everything – sports teams, guitar solos, classic video games, you name it.

Naturally, as current LLMs have become better known, leaderboards and websites have popped up comparing them along all sorts of different dimensions. Here are a few you can use as you search around for the best current LLMs.

Leaderboards for Comparing LLMs

Recently, leaderboards have emerged that directly compare various current LLMs.

One is AlpacaEval, which uses a custom dataset to compare ChatGPT, Claude, Cohere, and other LLMs on how well they can follow instructions. AlpacaEval boasts high agreement with human evaluators, so in our estimation, it’s probably a suitable way of initially comparing LLMs, though more extensive checks might be required to settle on a final list.

Another good choice is Chatbot Arena, which pits two anonymous models side-by-side, has you rank which one is better, then aggregates all the scores into a leaderboard.

Finally, there is Hugging Face’s Open LLM Leaderboard, which is similar. Anyone can submit a new model for evaluation, which is then assessed based on a small set of key benchmarks from the Eleuther AI Language Model Evaluation Harness. These capture how well the models do in answering simple science questions, common-sense queries, and more, which will be of interest to CX leaders.

When combined with the criteria we discussed earlier, these leaderboards and comparison websites ought to give you everything you need to execute a constructive large language models comparison.

What are the Currently-Available Large Language Models?

Okay! Now that we’ve worked through all this background material, let’s turn to discussing some of the major LLMs that are available today. We make no promises about these entries being comprehensive (and even if they were, there’d be new models out next week), but they should be sufficient to give you an idea as to the range of options you have.

ChatGPT and GPT

Obviously, the titan in the field is OpenAI’s ChatGPT, which is really just a version of GPT that has been fine-tuned through reinforcement learning from human feedback to be especially good at sustained dialogue.

ChatGPT and GPT have been used in many domains, including customer service, question answering, and many others. As of this writing, the most recent GPT is version 4o (note: that’s the letter ‘o’, not the number ‘0’).

LLaMA

In April 2024, Facebook’s AI team released version three of its Large Language Model Meta AI (LLaMa 3). At 70 billion parameters it is not quite as big as GPT; this is intentional, as its purpose is to aid researchers who may not have the budget or expertise required to provision a behemoth LLM.

Gemini

Like GPT-4, Google’s Gemini is aimed squarely at dialogue. It is able to converse on a nearly infinite number of subjects, and from the beginning, the Google team has focused on having Gemini produce interesting responses that are nevertheless absent of abuse and harmful language.

StableLM

StableLM is a lightweight, open-source language model built by Stability AI. It's trained on a dataset built on "The Pile", which is itself made up of over 20 smaller, high-quality datasets that together amount to over 825 GB of natural language.

GPT4All

What would you get if you trained an LLM on “…on a massive curated corpus of assistant interactions, which included word problems, multi-turn dialogue, code, poems, songs, and stories,” and then released it on an Apache 2.0 license? The answer is GPT4All, an open-source model whose purpose is to encourage research into what these technologies can accomplish.

BLOOM

The BigScience Large Open-Science Open-Access Multilingual Language Model (BLOOM) was released in late 2022. The team that put it together consisted of more than a thousand researchers from all over the world, and unlike the other models on this list, it's specifically meant to be interpretable.

Pathways Language Model (PaLM)

PaLM is from Google, and is also enormous (540 billion parameters). It excels at many language-related tasks, and became famous when it produced high-quality explanations of tricky jokes. The most recent version is PaLM 2.

Claude

Anthropic’s Claude is billed as a “next-generation AI assistant.” The recent release of Claude 3.5 Sonnet “sets new industry benchmarks” in speed and intelligence, according to materials put out by the company. We haven’t looked at all the data ourselves, but we have played with the model and we know it’s very high-quality.

Command and Command R+

These are models created by Cohere, one of the major commercial platforms for current LLMs. They are comparable to most of the other big models, but Cohere has placed a special focus on enterprise applications, like agents, tools, and RAG.

What are the Best Ways of Overcoming the Limitations of Large Language Models?

Large language models are remarkable tools, but they nevertheless suffer from some well-known limitations. They tend to hallucinate facts, for example, sometimes fail at basic arithmetic, and can get lost in the course of lengthy conversations.

Overcoming the limitations of large language models is mostly a matter of either monitoring them and building scaffolding to enable RAG, or partnering with a conversational AI platform for CX that handles this tedium for you.

An additional wrinkle involves tradeoffs between different models. As we discuss below, sometimes models may outperform the competition on a task like code generation while being notably worse at a task like faithfully following instructions; in such cases, many opt to have an ensemble of models so they can pick and choose which to deploy in a given scenario. (It’s worth pointing out that even if you want to use one model for everything, you’ll absolutely need to swap in an upgraded version of that model eventually, so you still have the same model-management problem.)

This, too, is a place where a conversational AI platform for CX will make your life easier. The best such platforms are model-agnostic, meaning that they can use ChatGPT, Claude, Gemini, or whatever makes sense in a particular situation. This removes yet another headache, smoothing the way for you to use generative AI in your contact center with little fuss.
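The "ensemble of models" idea above boils down to a routing table: map each task to the model that currently performs best on it, with a fallback default. The task names and model identifiers below are illustrative assumptions, not any platform's actual configuration.

```python
# A sketch of model-agnostic routing: resolve each task to the LLM currently
# configured for it, so swapping or upgrading a model means editing one table
# rather than every call site. All names below are hypothetical examples.

ROUTES = {
    "code_generation": "claude-3-5-sonnet",
    "instruction_following": "gpt-4o",
    "long_context_summarization": "gemini-1.5-pro",
}
DEFAULT_MODEL = "gpt-4o"

def pick_model(task: str) -> str:
    """Return the model configured for a task, falling back to the default."""
    return ROUTES.get(task, DEFAULT_MODEL)
```

This also addresses the upgrade problem mentioned above: when a provider ships a new version, you change one entry in `ROUTES` and every caller picks it up.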

What are the Best Large Language Models?

Having read the foregoing, it’s natural to wonder if there’s a single model that best suits your enterprise. The answer is “it depends on the specifics of your use case.” You’ll have to think about whether you want an open-source model you control or you’re comfortable hitting an API, whether your use case is outside the scope of ChatGPT and better handled with a bespoke model, etc.

Speaking of use cases, in the next few sections, we'll offer some advice on which current LLMs are best suited for which applications. However, this advice is based mostly on personal experience and other people's reports of their experiences. This should be good enough to get you started, but bear in mind that these claims haven't been borne out by rigorous testing and hard evidence—the field is too young for most of that to exist yet.

What’s the Best LLM if I’m on a Budget?

Pretty much any open-source model is given away for free, by definition. You can just Google “free open-source LLMs”, but one of the more frequently recommended open-source models is LLaMA 2 (there’s also the new LLaMA 3), both of which are free.

But many LLMs (both free and paid) also use the data you feed them for training purposes, which means you could be exposing proprietary or sensitive data if you’re not careful. Your best bet is to find a cost-effective platform that has an explicit promise not to use your data for training.

When you deal with an open-source model, you also have to pay for hosting, either your own or through a cloud service like Amazon Bedrock.

What’s the Best LLM for a Large Context Window?

The context window is the amount of text an LLM can handle at a time. When ChatGPT was released, it had a context window of around 4,000 tokens. (A “token” isn’t exactly a word, but it’s close enough for our purposes.)

Generally (and up to a point), the longer the context window, the better the model is able to perform. Today's models generally have context windows of at least a few tens of thousands of tokens, with some reaching into the low 100,000s.

But, at a staggering 1 million tokens–equivalent to an hour-long video or the full text of a long novel–Google’s Gemini simply towers over the others like Hagrid in the Shire.

That having been said, this space moves quickly, and context window length is an active area of research and development. These figures will likely be different next month, so be sure to check the latest information as you begin shopping for a model.
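The token arithmetic above can be sketched with a quick fit check. The roughly-four-characters-per-token ratio is a common rule of thumb for English text, and the window sizes in the table are illustrative placeholders; for exact counts you would use your model provider's own tokenizer.

```python
# A rough check of whether a prompt fits in a model's context window.
# The 4-characters-per-token ratio is a heuristic for English text, and the
# window sizes below are illustrative, not tied to any specific product.

CONTEXT_WINDOWS = {   # tokens; hypothetical tiers for illustration
    "small": 4_000,
    "mid": 32_000,
    "large": 128_000,
}

def estimate_tokens(text: str) -> int:
    """Very rough token estimate: about one token per 4 characters."""
    return max(1, len(text) // 4)

def fits(text: str, tier: str) -> bool:
    """True if the text's estimated token count fits in the tier's window."""
    return estimate_tokens(text) <= CONTEXT_WINDOWS[tier]
```

At the heuristic rate, a 1-million-token window like the one described above corresponds to roughly 4 million characters of English text, which is why it can hold the full text of a long novel.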

Choosing Among the Current Large Language Models

With all the different LLMs on offer, it’s hard to narrow the search down to the one that’s best for you. By carefully weighing the different metrics we’ve discussed in this article, you can choose an LLM that meets your needs with as little hassle as possible.

Pulling back a bit, let’s close by recalling that the whole purpose of choosing among current LLMs in the first place is to better meet the needs of our customers.

For this reason, you might want to consider working with a conversational AI platform for CX, like Quiq, that puts a plethora of LLMs at your fingertips through one simple interface.

Request A Demo

Going Beyond the GenAI Hype — Your Questions, Answered

We recently hosted a webinar all about how CX leaders can go beyond the hype surrounding GenAI, sift out the misinformation, and start driving real business value with AI Assistants. During the session, our speakers shared specific steps CX leaders can take to get their knowledge ready for AI, eliminate harmful hallucinations, and solve the build vs. buy dilemma.

We were overwhelmed with the number of folks who tuned in to learn more and hear real-life challenges, best practices, and success stories from Quiq’s own AI Assistant experts and customers. At the end of the webinar, we received so many amazing audience questions that we ran out of time to answer them all!

So, we asked speaker and Quiq Product Manager Max Fortis, to respond to a few of our favorites. Check out his answers in the clips below, and be sure to watch the full 35-minute webinar on-demand.

Ensuring Assistant Access to Personal and Account Information

Using a Knowledge Base Written for Internal Agents

Teaching a Voice Assistant vs. a Chat Assistant

Monitoring and Improving Assistant Performance Over Time

Watch the Full Webinar to Dive Deeper

Whether you were unable to tune in live or want to watch the rerun, this webinar is available on-demand. Give it a listen to hear Max and his Quiq colleagues offer more answers and advice around how to assess and fill critical knowledge gaps, avoid common yet lesser-known hallucination types, and partner with technical teams to get the AI tools you need.

Watch Now

Does GenAI Leak Your Sensitive Data? Exposing Common AI Misconceptions (Part Three)

This is the final post in a three-part series clarifying the biggest misconceptions holding CX leaders like you back from integrating GenAI into their CX strategies. Our goal? To assuage your fears and help you start getting real about adding an AI Assistant to your contact center — all in a fun “two truths and a lie” format.

There are few faux pas as damaging and embarrassing for brands as sensitive data getting into the wrong hands. So it makes sense that data security concerns are a major deterrent for CX leaders thinking about getting started with GenAI.

In the first post of our AI Misconceptions series, we discussed why your data is definitely good enough to make GenAI work for your business. Next, we explored the different types of hallucinations that CX leaders should be aware of, and how they are 100% preventable with the right guardrails in place.

Now, let’s wrap up our series by exposing the truth about GenAI potentially leaking your company or customer data.

Misconception #3: “GenAI inadvertently leaks sensitive data.”

As we discussed in part one, AI needs training data to work. One way to collect that data is from the questions users ask. For example, if a large language model (LLM) is asked to summarize a paragraph of text, that text could be stored and used to train future models.

Unfortunately, there have been some famous examples of companies’ sensitive information becoming part of datasets used to train LLMs — take Samsung, for instance. Because of this, CX leaders often fear that using GenAI will result in their company’s proprietary data being disclosed when users interact with these models.

Truth #1: Public GenAI tools use conversation data to train their models.

Tools like OpenAI’s ChatGPT and Google Gemini (formerly Bard) are public-facing and often free — and that’s because their purpose is to collect training data. This means that any information that users enter while using these tools is free game to be used for training future models.

This is precisely how the Samsung data leak happened. The company’s semiconductor division allowed its engineers to use ChatGPT to check their source code. Not only did multiple employees copy/paste confidential code into ChatGPT, but one team member even used the tool to transcribe a recording of an internal-only meeting!

Truth #2: Properly licensed GenAI is safe.

People often confuse ChatGPT, the application or web portal, with the LLM behind it. While the free version of ChatGPT collects conversation data, OpenAI offers an enterprise LLM that does not. Other LLM providers offer similar enterprise licenses that specify that all interactions with the LLM and any data provided will not be stored or used for training purposes.

When used through an enterprise license, LLMs are also Service Organization Control Type 2, or SOC 2, compliant. This means they have to undergo regular audits from third parties to prove that they have the processes and procedures in place to protect companies’ proprietary data and customers’ personally identifiable information (PII).

The Lie: Enterprises must use internally-developed models only to protect their data.

Given these concerns over data leaks and hallucinations, some organizations believe that the only safe way to use GenAI is to build their own AI models. Case in point: Samsung is now “considering building its own internal AI chatbot to prevent future embarrassing mishaps.”

However, it’s simply not feasible for companies whose core business is not AI to build AI that is as powerful as commercially available LLMs — even if the company is as big and successful as Samsung. Not to mention the opportunity cost and risk of having your technical resources tied up in AI instead of continuing to innovate on your core business.

It’s estimated that training the LLM behind ChatGPT cost upwards of $4 million. It also required specialized supercomputers and access to a data set equivalent to nearly the entire Internet. And don’t forget about maintenance: AI startup Hugging Face recently revealed that retraining its Bloom LLM cost around $10 million.

Using a commercially available LLM provides enterprises with the most powerful AI available without breaking the bank, and it's perfectly safe when properly licensed. However, it's also important to remember that building a successful AI Assistant requires much more than developing basic question/answer functionality.

Finding a Conversational CX Platform that harnesses an enterprise-licensed LLM, empowers teams to build complex conversation flows, and makes it easy to monitor and measure Assistant performance is a CX leader’s safest bet. Not to mention, your engineering team will thank you for giving them optionality for the control and visibility they want—without the risk and overhead of building it themselves!

Feel Secure About GenAI Data Security

Companies that use free, public-facing GenAI tools should be aware that any information employees enter can (and most likely will) be used for future model-training purposes.

However, properly licensed GenAI will not collect or use your data to train the model. Building your own GenAI tools for security purposes is completely unnecessary — and very expensive!

Want to read more or revisit the first two misconceptions in our series? Check out our full guide, Two Truths and a Lie: Breaking Down the Major GenAI Misconceptions Holding CX Leaders Back.

Will GenAI Hallucinate and Hurt Your Brand? Exposing Common AI Misconceptions (Part Two)

This is the second post in a three-part series clarifying the biggest misconceptions holding CX leaders like you back from integrating GenAI into their CX strategies. Our goal? To assuage your fears and help you start getting real about adding an AI Assistant to your contact center — all in a fun “two truths and a lie” format.

Did you know that the Golden Gate Bridge was transported for the second time across Egypt in October of 2016?

Or that the world record for crossing the English Channel entirely on foot is held by Christof Wandratsch of Germany, who completed the crossing in 14 hours and 51 minutes on August 14, 2020?

Probably not, because GenAI made these “facts” up. They’re called hallucinations, and AI hallucination misconceptions are holding a lot of CX leaders back from getting started with GenAI.

In the first post of our AI Misconceptions series, we discussed why your data is definitely good enough to make GenAI work for your business. In fact, you actually need a lot less data to get started with an AI Assistant than you probably think.

Now, we’re debunking AI hallucination myths and separating some of the biggest AI hallucination facts from fiction. Could adding an AI Assistant to your contact center put your brand at risk? Let’s find out.

Misconception #2: “GenAI will hallucinate and hurt my brand.”

While the example hallucinations provided above are harmless and even a little funny, this isn’t always the case. Unfortunately, there are many examples of times chatbots have cussed out customers or made racist or sexist remarks. This causes a lot of concern among CX leaders looking to use an AI Assistant to represent their brand.

Truth #1: Hallucinations are real (no pun intended).

Understanding AI hallucinations hinges on realizing that GenAI wants to provide answers — whether or not it has the right data. Hallucinations like those in the examples above occur for two common reasons.

AI-Induced Hallucinations Explained:

  1. The large language model (LLM) simply does not have the correct information it needs to answer a given question. This is what causes GenAI to get overly creative and start making up stories that it presents as truth.
  2. The LLM has been given an overly broad and/or contradictory dataset. In other words, the model gets confused and begins to draw conclusions that are not directly supported in the data, much like a human would do if they were inundated with irrelevant and conflicting information on a particular topic.

Truth #2: There’s more than one type of hallucination.

Contrary to popular belief, hallucinations aren't just incorrect answers: they can also be classified as correct answers to the wrong questions. And these types of hallucinations are actually more common and more difficult to control.

For example, imagine a company’s AI Assistant is asked to help troubleshoot a problem that a customer is having with their TV. The Assistant could give the customer correct troubleshooting instructions — but for the wrong television model. In this case, GenAI isn’t wrong, it just didn’t fully understand the context of the question.


The Lie: There’s no way to prevent your AI Assistant from hallucinating.

Many GenAI “bot” vendors attempt to fine-tune an LLM, connect clients’ knowledge bases, and then trust it to generate responses to their customers’ questions. This approach will always result in hallucinations. A common workaround is to pre-program “canned” responses to specific questions. However, this leads to unhelpful and unnatural-sounding answers even to basic questions, which then wind up being escalated to live agents.

In contrast, true AI Assistants powered by the latest Conversational CX Platforms leverage LLMs as a tool to understand and generate language — but there’s a lot more going on under the hood.

First of all, preventing hallucinations is not just a technical task. It requires a layer of business logic that controls the flow of the conversation by providing a framework for how the Assistant should respond to users’ questions.

This framework guides a user down a specific path that enables the Assistant to gather the information the LLM needs to give the right answer to the right question. This is very similar to how you would train a human agent to ask a specific series of questions before diagnosing an issue and offering a solution. Meanwhile, in addition to understanding what the intent of the customer’s question is, the LLM can be used to extract additional information from the question.

Referred to as “pre-generation checks,” these filters are used to determine attributes such as whether the question was from an existing customer or prospect, which of the company’s products or services the question is about, and more. These checks happen in the background in mere seconds and can be used to select the right information to answer the question. Only once the Assistant understands the context of the client’s question and knows that it’s within scope of what it’s allowed to talk about does it ask the LLM to craft a response.
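The pre-generation check idea can be sketched in miniature. This is a hypothetical illustration, not Quiq's actual implementation; the keyword rules below stand in for real LLM- or classifier-driven extraction:

```python
# Hypothetical sketch of "pre-generation checks": classify a question's
# attributes BEFORE any LLM response is generated. All names are illustrative.

ALLOWED_TOPICS = {"billing", "returns", "troubleshooting"}

def pre_generation_checks(question: str, known_customer: bool) -> dict:
    """Determine context attributes and whether the question is in scope."""
    # In a real system these would be model-driven extractions; trivial
    # keyword rules are used here as a stand-in.
    topic = None
    lowered = question.lower()
    if "refund" in lowered or "return" in lowered:
        topic = "returns"
    elif "not working" in lowered or "broken" in lowered:
        topic = "troubleshooting"

    return {
        "audience": "customer" if known_customer else "prospect",
        "topic": topic,
        "in_scope": topic in ALLOWED_TOPICS,
    }

checks = pre_generation_checks("My TV is not working after the update", known_customer=True)
print(checks)  # {'audience': 'customer', 'topic': 'troubleshooting', 'in_scope': True}
```

Only when the question is in scope does the pipeline proceed to drafting a response.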

But the checks and balances don’t end there: The LLM is only allowed to generate responses using information from specific, trusted sources that have been pre-approved, and not from the dataset it was trained on.

In other words, humans are responsible for providing the LLM with a source of truth that it must “ground” its response in. In technical terms, this is called Retrieval Augmented Generation, or RAG — and if you want to get nerdy, you can read all about it here!
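A minimal sketch of the grounding idea, with a toy word-overlap retriever standing in for a real vector search (the documents and function names are illustrative, not any vendor's actual implementation):

```python
# Minimal RAG sketch: retrieve from pre-approved documents, then constrain
# the model to answer ONLY from those sources. Illustrative only.

TRUSTED_DOCS = [
    "Returns: items may be returned within 30 days with a receipt.",
    "Shipping: standard orders arrive in 5-7 business days.",
]

def retrieve(question: str, docs: list[str], k: int = 1) -> list[str]:
    """Naive retrieval: rank docs by word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_grounded_prompt(question: str) -> str:
    context = "\n".join(retrieve(question, TRUSTED_DOCS))
    # This instruction is what "grounding" means in practice: the model must
    # answer from the provided context, not its training data.
    return (
        "Answer using ONLY the context below. If the answer is not in the "
        f"context, say you don't know.\n\nContext:\n{context}\n\nQuestion: {question}"
    )

print(build_grounded_prompt("Can items be returned within 30 days?"))
```

In production the retriever is a semantic search over an indexed knowledge base, but the contract is the same: the LLM only ever sees vetted source material.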

Last but not least, once a response has been crafted, a series of “post-generation checks” happens in the background before returning it to the user. You can check out the end-to-end process in the diagram below:

[Diagram: the end-to-end RAG process]

Give Hallucination Concerns the Heave-Ho

To sum it up: Yes, hallucinations happen. In fact, there’s more than one type of hallucination that CX leaders should be aware of.

However, now that you understand the reality of AI hallucination, you know that it’s totally preventable. All you need are the proper checks, balances, and guardrails in place, both from a technical and a business logic standpoint.

Now that you’ve had your biggest misconceptions about AI hallucination debunked, keep an eye out for the next blog in our series, all about GenAI data leaks. Or, learn the truth about all three of CX leaders’ biggest GenAI misconceptions now when you download our guide, Two Truths and a Lie: Breaking Down the Major GenAI Misconceptions Holding CX Leaders Back.

Request A Demo

Is Your CX Data Good Enough for GenAI? Exposing Common AI Misconceptions (Part One)

If you’re feeling unprepared for the impact of generative artificial intelligence (GenAI), you’re not alone. In fact, nearly 85% of CX leaders feel the same way. But the truth is that the transformative nature of this technology simply can’t be ignored — and neither can your boss, who asked you to look into it.

We’ve all heard horror stories of racist chatbots and massive data leaks ruining brands’ reputations. But we’ve also seen statistics around the massive time and cost savings brands can achieve by offloading customers’ frequently asked questions to AI Assistants. So which is it?

This is the first post in a three-part series clarifying the biggest misconceptions holding CX leaders like you back from integrating GenAI into their CX strategies. Our goal? To assuage your fears and help you start getting real about adding an AI Assistant to your contact center — all in a fun “two truths and a lie” format. Prepare to have your most common AI misconceptions debunked!

Misconception #1: “My data isn’t good enough for GenAI.”

Answering customer inquiries usually requires two types of data:

  1. Knowledge (e.g. an order return policy) and
  2. Information from internal systems (e.g. the specific details of an order).

It’s easy to get caught up in overthinking the impact of data quality on AI performance and wondering whether or not your knowledge is even good enough to make an AI Assistant useful for your customers.

Updating hundreds of help desk articles is no small task, let alone building an entire knowledge base from scratch. Many CX leaders are worried about the amount of work it will require to clean up their data and whether their team has enough resources to support a GenAI initiative. In order for GenAI to be as effective as a human agent, it needs the same level of access to internal systems as human agents.

Truth #1: You have to have some amount of data.

Data is necessary to make AI work — there’s no way around it. You must provide some data for the model to access in order to generate answers. This is one of the most basic AI performance factors.

But we have good news: You need a lot less data than you think.

One of the most common myths about AI and data in CX is that you need enough documentation to answer every possible customer question. Instead, focus on ensuring you have the knowledge necessary to answer your most frequently asked questions. This small step forward will have a major impact for your team without requiring a ton of time and resources to get started.

Truth #2: Quality matters more than quantity.

Given the importance of relevant data in AI, a few succinct paragraphs of accurate information are better than volumes of outdated or conflicting documentation. But even then, don’t sweat the small stuff.

For example, did a product name change fail to make its way through half of your help desk articles? Are there unnecessary hyperlinks scattered throughout? Was it written for live agents versus customers?

No problem — the right Conversational CX Platform can easily address these AI data dependency concerns without requiring additional support from your team.

The Lie: Your data has to be perfectly unified and specifically formatted to train an AI Assistant.

Don’t worry if your data isn’t well-organized or perfectly formatted. The reality is that most companies have services and support materials scattered across websites, knowledge bases, PDFs, .csvs, and dozens of other places — and that’s okay!

Today, the tools and technology exist to make aggregating this fragmented data a breeze. They’re then able to cleanse and format it in a way that makes sense for a large language model (LLM) to use.

For example, if you have an agent training manual in Google Docs and a product manual in PDF, this information can be disassembled, reformatted, and rewritten through an AI-powered transformation that makes it usable downstream.

What’s more, the data used by your AI Assistant should be consistent with the data you use to train your human agents. This means that not only is it not required to build a special repository of information for your AI Assistant to learn from, but it’s not recommended. The very best AI platforms take on the work of maintaining this continuity by automatically processing and formatting new information for your Assistant as it’s published, as well as removing any information that’s been deleted.
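To make the aggregation step above concrete, here is roughly what normalizing fragmented content into LLM-ready pieces might look like. This is a hypothetical sketch; the function and field names are illustrative, and real platforms handle formats like PDFs and HTML automatically:

```python
# Illustrative sketch: aggregate fragmented source material (help center
# articles, manuals, PDFs) into normalized chunks an LLM pipeline can index.

def normalize(source: str, title: str, text: str) -> dict:
    """Strip whitespace noise and package a document with its provenance."""
    cleaned = " ".join(text.split())  # collapse line breaks and runs of spaces
    return {"source": source, "title": title, "text": cleaned}

def chunk(doc: dict, max_chars: int = 200) -> list[dict]:
    """Split a normalized document into retrieval-sized pieces."""
    text = doc["text"]
    return [
        {**doc, "text": text[i : i + max_chars], "chunk": i // max_chars}
        for i in range(0, len(text), max_chars)
    ]

raw = "Returns   are accepted\n within 30 days.  Include the  original receipt."
doc = normalize("helpdesk", "Return policy", raw)
pieces = chunk(doc, max_chars=40)
print(len(pieces), pieces[0]["text"])
```

Keeping the `source` field alongside each chunk is what later lets the Assistant ground answers in (and cite) approved material.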

Put Those Data Doubts to Bed

Now you know that your data is definitely good enough for GenAI to work for your business. Yes, quality matters more than quantity, but it doesn’t have to be perfect.

The technology exists to unify and format your data so that it’s usable by an LLM. And providing knowledge around even a handful of frequently asked questions can give your team a major lift right out the gate.

Keep an eye out for the next blog in our series, all about GenAI hallucinations. Or, learn the truth about all three of CX leaders’ biggest GenAI misconceptions now when you download our guide, Two Truths and a Lie: Breaking Down the Major GenAI Misconceptions Holding CX Leaders Back.


9 Top Customer Service Challenges — and How to Overcome Them

It’s a shame that customer service doesn’t always get the respect and attention it deserves because it’s among the most important ingredients in any business’s success. There’s no better marketing than an enthusiastic user base, so every organization should strive to excel at making customers happy.

Alas, this is easier said than done. When someone comes to you with a problem, they can be angry, stubborn, mercurial, and—let’s be honest—extremely frustrating. Some of this just comes with the territory, but some stems from the fact that many customer service professionals simply don’t have a clear, high-level view of customer service challenges or how to overcome them.

That’s what we’re going to remedy in this post. Let’s jump right in!

What are The Top Customer Service Challenges?

After years of running a generative AI platform for contact centers and interacting with leaders in this space, we have discovered that the top customer service challenges are:

  1. Understanding Customer Expectations
  2. Next Step: Exceeding Customer Expectations
  3. Dealing with Unreasonable Customer Demands
  4. Improving Your Internal Operations
  5. Not Offering a Preferred Communication Channel
  6. Not Offering Real-Time Options
  7. Handling Angry Customers
  8. Dealing With a Service Outage Crisis
  9. Retaining, Hiring, and Training Service Professionals

In the sections below, we’ll break each of these down and offer strategies for addressing them.

1. Understanding Customer Expectations

No matter how specialized a business is, it will inevitably cater to a wide variety of customers. Every customer has different desires, expectations, and needs regarding a product or service, which means you need to put real effort into meeting them where they are.

One of the best ways to foster this understanding is to remain in consistent contact with your customers. Deciding which communication channels to offer customers depends a great deal on the kinds of customers you’re serving. That said, in our experience, text messaging is a universally successful method of communication because it mimics how people communicate in their personal lives. The same goes for web chat and WhatsApp.

Beyond this, setting the right expectations upfront is another good way to address common customer service challenges. For example, if you are not available 24/7, only provide support via email, or don’t have dedicated account managers, you should make that clear right at the beginning.

Nothing will make a customer angrier than thinking they can text you only to realize that’s not an option in the middle of a crisis.

2. Next Step: Exceed Customer Expectations

Once you understand what your customers want and need, the next step is to go above and beyond to make them happy. Everyone wants to stand out in a fiercely competitive market, and going the extra mile is a great way to do that. One of the major customer service challenges is knowing how to do this proactively, but there are many ways you can succeed without a huge amount of effort.

Consider a few examples, such as:

  • Treating the customer as you would a friend in your personal life, i.e. by apologizing for any negative experiences and empathizing with how they feel;
  • Offering a credit or discount for a future purchase;
  • Sending them a card referencing their experience and thanking them for being a loyal customer.

The key is making sure they feel seen and heard. If you do this consistently, you’ll exceed your customers’ expectations, and the chances of them becoming active promoters of your company will increase dramatically.

3. Dealing with Unreasonable Demands

Of course, sometimes a customer has expectations that simply can’t be met, and this, too, counts as one of the serious customer service challenges. Customer service professionals often find themselves in situations where someone wants a discount that can’t be given, a feature that can’t be built, or a bespoke customization that can’t be done, and they wonder what they should do.

The only thing to do in this situation is to gently let the customer down, using respectful and diplomatic language. Something like, “We’re really sorry we’re not able to fulfill your request, but we’d be happy to help you choose an option that we currently have available” should do the trick.

4. Improving Your Internal Operations

Customer service teams face the constant pressure to improve efficiency, maintain high CSAT scores, drive revenue, and keep costs to service customers low. This matters a lot; slow response times and being kicked from one department to another are two of the more common complaints contact centers get from irate customers, and both are fixable with appropriate changes to your procedures.

Improving contact center performance is among the thorniest customer service challenges, but there’s no reason to give up hope!

One thing you can do is gather and utilize better data regarding your internal workflows. Data has been called “the new oil,” and with good reason—when used correctly, it’s unbelievably powerful.

What might this look like?

Well, you are probably already tracking metrics like first contact resolution (FCR) and average handle time (AHT), but this is easier when you have a unified, comprehensive dashboard that gives you quick insight into what’s happening across your organization.
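As a quick sketch, computing those two metrics from raw ticket records might look like this (the record structure and field names are assumptions for illustration):

```python
# Illustrative computation of FCR and AHT from hypothetical ticket records.

tickets = [
    {"customer": "a1", "handle_minutes": 6.0, "contacts_to_resolve": 1},
    {"customer": "b2", "handle_minutes": 11.5, "contacts_to_resolve": 3},
    {"customer": "c3", "handle_minutes": 4.5, "contacts_to_resolve": 1},
]

# First contact resolution: share of issues resolved in a single contact.
fcr = sum(t["contacts_to_resolve"] == 1 for t in tickets) / len(tickets)

# Average handle time: mean minutes an agent spends per ticket.
aht = sum(t["handle_minutes"] for t in tickets) / len(tickets)

print(f"FCR: {fcr:.0%}, AHT: {aht:.1f} min")  # FCR: 67%, AHT: 7.3 min
```

A dashboard is essentially this calculation run continuously, sliced by agent, channel, and time period.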

You might also consider leveraging the power of generative AI, which has led to AI assistants that can boost agent performance in a variety of different tasks. You have to tread lightly here because too much bad automation will also drive customers away. But when you use technology like large language models according to best practices, you can get more done and make your customers happier while still reducing the burden on your agents.

5. Not Offering a Preferred Communication Channel

In general, contact centers often deal with customer service challenges stemming from new technologies. One way this can manifest is the need to cultivate new channels in line with changing patterns in the way we all communicate.

You can probably see where this is going – something like 96% of Americans have some kind of cell phone, and if you’ve looked up from your own phone recently, you’ve probably noticed everyone else glued to theirs.

It isn’t just that customers now want to be able to text you instead of calling or emailing; the ubiquity of cell phones has changed their basic expectations. They now take it for granted that your agents will be available round the clock, that they can chat with an agent asynchronously as they go about other tasks, etc.

We can’t tell you whether it’s worth investing in multiple communication channels for your industry. But based on our research, we can tell you that having multiple channels—and text messaging in particular—is something most people want and expect.

6. Not Offering Real-Time Options

When customers reach out asking for help, their customer service problems likely feel unique to them. But since you have so much more context, you’re aware that a very high percentage of inquiries fall into a few common buckets, like “Where is my order?”, “How do I handle a return?”, “My item arrived damaged, how can I exchange it for a new one?”, etc.

These and similar inquiries can easily be resolved instantly using AI, leaving customers and agents happier and more productive.

7. Handling Angry Customers

A common story in the customer service world involves an interaction going south and a customer getting angry.

Gracefully handling angry customers is one of those perennial customer service challenges; the very first merchants had to deal with angry customers, and our robot descendants will be dealing with angry customers long after the sun has burned out.

Whenever you find yourself dealing with a customer who has become irate, there are two main things you have to do:

  1. Empathize with them
  2. Do not lose your cool

It can be hard to remember, but the customer isn’t frustrated with you, they’re frustrated with the company and products. If you always keep your responses calm and rooted in the facts of the situation, you’ll always be moving toward providing a solution.

8. Dealing With a Service Outage Crisis

Sometimes, our technology fails us. The wifi isn’t working on the airplane, a cell phone tower is down following a lightning storm, or that printer from Office Space jams so often it starts to drive people insane.

As a customer service professional, you might find yourself facing the wrath of your customers if your service is down. Unfortunately, in a situation like this, there’s not much you can do except honestly convey to your customers that your team is putting all their effort into getting things back on track. You should go into these conversations expecting frustrated customers, but make sure you avoid the temptation to overpromise.

Talk with your tech team and give customers a realistic timeline; don’t assure them it’ll be back in three hours if you have no way to back that up. Though Elon Musk seems to get away with it, the worst thing the rest of us can do is repeatedly promise unrealistic timelines and miss the mark.

9. Retaining, Hiring, and Training Service Professionals

You may have seen this famous Maya Angelou quote, which succinctly captures what the customer service business is all about:

“I’ve learned that people will forget what you said, people will forget what you did, but people will never forget how you made them feel.”

Learning how to comfort a person or reassure them is high on the list of customer service challenges, and it’s something that is certainly covered in your training for new agents.

But training is also important because it eases the strain on agents and reduces turnover. For customer service professionals, the median time to stick with one company is less than a year, and every time someone leaves, that means finding a replacement, training them, and hoping they don’t head for the exits before your investment has paid off.

Keeping your agents happy will save you more money than you imagine, so invest in a proper training program. Ensure they know what’s expected of them, how to ask for help when needed, and how to handle challenging customers.

Final Thoughts on the Top Customer Service Challenges

Customer service challenges abound, but with the right approach, there’s no reason you shouldn’t be able to meet them head-on!

Check out our report for a more detailed treatment of three major customer service challenges and how to resolve them. Between the report and this post, you should be armed with enough information to identify your own internal challenges, fix them, and rise to new heights.


5 Tips for Coaching Your Contact Center Agents to Work with AI

Generative AI has enormous potential to change the work done at places like contact centers. For this reason, we’ve spent a lot of energy covering it, from deep dives into the nuts and bolts of large language models to detailed advice for managers considering adopting it.

Here, we will provide tips on using AI tools to coach, manage, and improve your agents.

How Will AI Make My Agents More Productive?

Contact centers can be stressful places to work, but much of that stems from a paucity of good training and feedback. If an agent doesn’t feel confident in assuming their responsibilities or doesn’t know how to handle a tricky situation, that will cause stress.

Tip #1: Make Collaboration Easier

With the right AI tools for coaching agents, you can get state-of-the-art collaboration tools that allow agents to invite their managers or colleagues to silently appear in the background of a challenging issue. The customer never knows there’s a team operating on their behalf, but the agent won’t feel as overwhelmed. These same tools also let managers dynamically monitor all their agents’ ongoing conversations, intervening directly if a situation gets out of hand.

Agents can learn from these experiences to become more performant over time.

Tip #2: Use Data-Driven Management

Speaking of improvement, a good AI platform will have resources that help managers get the most out of their agents in a rigorous, data-driven way. Of course, you’re probably already monitoring contact center metrics, such as CSAT and FCR scores, but this barely scratches the surface.

What you really need is a granular look into agent interactions and their long-term trends. This will let you answer questions like “Am I overstaffed?” and “Who are my top performers?” This is the only way to run a tight ship and keep all the pieces moving effectively.

Tip #3: Use AI To Supercharge Your Agents

As its name implies, generative AI excels at generating text, and there are several ways this can improve your contact center’s performance.

To start, these systems can sometimes answer simple questions directly, which reduces the demands on your team. Even when that’s not the case, however, they can help agents draft replies, or clean up already-drafted replies to correct errors in spelling and grammar. This, too, reduces their stress, but it also contributes to customers having a smooth, consistent, high-quality experience.

Tip #4: Use AI to Power Your Workflows

A related (but distinct) point concerns how AI can be used to structure the broader work your agents are engaged in.

Let’s illustrate using sentiment analysis, which makes it possible to assess the emotional state of a person doing something like filing a complaint. This can form part of a pipeline that sorts and routes tickets based on their priority, and it can also detect when an issue needs to be escalated to a skilled human professional.
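Such a pipeline can be sketched as follows; the keyword scorer is a trivial stand-in for a real sentiment model, and the thresholds and queue names are assumptions:

```python
# Illustrative routing pipeline: score sentiment, then route or escalate.
# A keyword list stands in for a real sentiment model.

NEGATIVE = {"furious", "terrible", "broken", "unacceptable", "angry"}

def sentiment_score(text: str) -> float:
    """Return a crude negativity score in [0, 1]."""
    words = text.lower().split()
    return sum(w.strip(".,!?") in NEGATIVE for w in words) / max(len(words), 1)

def route(ticket: str) -> str:
    """Send highly frustrated customers straight to a skilled human."""
    if sentiment_score(ticket) > 0.2:
        return "escalate_to_human"
    return "standard_queue"

print(route("This is unacceptable, my order arrived broken!"))  # escalate_to_human
print(route("How do I update my shipping address?"))            # standard_queue
```

The same scoring step can also feed prioritization, so the angriest tickets rise to the top of the queue.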

Tip #5: Train Your Agents to Use AI Effectively

It’s easy to get excited about what AI can do to increase your efficiency, but you mustn’t lose sight of the fact that it’s a complex tool your team needs to be trained to use. Otherwise, it’s just going to be one more source of stress.

You need to have policies around the situations in which it’s appropriate to use AI and the situations in which it’s not. These policies should address how agents should deal with phenomena like “hallucination,” in which a language model will fabricate information.

They should also contain procedures for monitoring the performance of the model over time. Because these models are stochastic, they can generate surprising output, and their behavior can change.

You need to know what your model is doing to intervene appropriately.
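For example, a monitoring procedure can be as simple as tracking one behavioral signal against a baseline; the numbers and threshold below are assumptions for illustration:

```python
# Illustrative drift check: compare a recent behavioral signal (here, the
# escalation rate) against a historical baseline. Values are hypothetical.
from statistics import mean

BASELINE_ESCALATION_RATE = 0.10  # assumed historical value

def flag_drift(recent_outcomes: list[bool], tolerance: float = 0.05) -> bool:
    """recent_outcomes: True where a conversation was escalated to a human."""
    rate = mean(recent_outcomes)
    return abs(rate - BASELINE_ESCALATION_RATE) > tolerance

# 4 escalations out of 20 recent conversations -> 20% vs. a 10% baseline
recent = [True] * 4 + [False] * 16
print(flag_drift(recent))  # True
```

When the flag fires, a human reviews recent transcripts and adjusts prompts, guardrails, or policies accordingly.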

Wrapping Up

Hopefully, you’re more optimistic about what AI can do for your contact center, and this has helped you understand how to make the most out of it.

If there’s anything else you’d like to go over, you’re always welcome to request a demo of the Quiq platform. Since we focus on contact centers, we take customer service pretty seriously ourselves, and we’d love to give you the context you need to make the best possible decision!


AI Gold Rush: How Quiq Won the Land Grab for AI Contact Centers (& How You Can Benefit)

There have been many transformational moments throughout the history of the United States, going back all the way to its unique founding.

Take for instance the year 1849.

For all of you San Francisco 49ers fans (sorry, maybe next year), you are very well aware of the land grab that was the birth of the state of California. That year, tens of thousands of people from the Eastern United States flocked to the California Territory hoping to strike it rich in a placer gold strike.

A lesser-known fact of that moment in history is that the gold strike in California was actually in 1848. And while all of those easterners were lining up for the rush, a small number of people from Latin America and Hawaii were already in production, stuffing their pockets full of nuggets.

176 years later, AI is the new gold rush.

Fast forward to 2024, a new crowd is forming, working toward the land grab once again. Only this time, it’s not physical.

It’s AI in the contact center.

Companies are building infrastructure, hiring engineers, inventing tools, and trying to figure out how to build a wagon that won’t disintegrate on the trail (AKA hallucinate).

While many of those companies are going to make it to the gold fields, one has been there since 2023, and that is Quiq.

Yes, we’ve been mining LLM gold in the contact center since July of 2023 when we released our first customer-facing Generative AI assistant for Loop Insurance. Since then, we have released over a dozen more and have dozens more under construction. More about the quality of that gold in a bit.

This new gold rush in the AI space is becoming more crowded every day.

Everyone is saying they do Generative AI in one way, shape, or form. Most are offering some form of Agent Assist using LLM technologies, keeping that human in the loop and relying on small increments of improvement in AHT (Average Handle Time) and FCR (First Contact Resolution).

However, there is a difference when it comes to how platforms are approaching customer-facing AI Assistants.

Actually, there are a lot of differences. That’s a big reason we invented AI Studio.

AI Studio: Get your shovels and pick axes.

Since we’ve been on the bleeding edge of Generative AI CX deployments, we created a solution called AI Studio. We saw that there was a gap for CX teams, who would otherwise have had to stitch together a myriad of tools while staying focused on business outcomes.

AI Studio is a complete toolkit to empower companies to explore nuances in their AI use within a conversational development environment that’s tailored for customer-facing CX.

That last part is important: Customer-facing AI assistants, which teams can create together using AI Studio. Going back to our gold rush comparison, AI Studio is akin to the pick axes and shovels you need.

Only this time, success is guaranteed, and the proverbial gold at the end of the journey is much, much more enticing, precisely because customer-facing AI applications tend to move the needle dramatically further than simpler Agent Assist LLM builds.

That brings me to the results.

So how good is our gold?

Early results are showing that our LLM implementations are increasing resolution rates 50% to 100% above what was achieved using legacy NLU intent-based models, with resolution rates north of 60% in some FAQ-heavy assistants.

Loop Insurance saw a 55% reduction in email tickets in their contact center.

Secondly, intent matching has more than doubled: a far higher percentage of intents (especially when there are multiple intents in a single message) are correctly recognized and responded to, which directly correlates to correct answers, fewer agent contacts, and satisfied customers.

That’s just the start though. Molekule hit a 60% resolution rate with a Quiq-built LLM-powered AI assistant. You can read all about that in our case study here.

And then there’s Accor, whose AI assistant across four Rixos properties has doubled (yes, 2X’ed) click-outs on booking links. Check out that case study here.

What’s next?

Like the miners in 1848, digging as much gold out of the ground as possible before the land rush, Quiq sits alone, out in front of a crowd lining up for a land grab.

With a dozen customer-facing LLM-powered AI assistants already living in the market producing incredible results, we have pioneered a space that will be remembered in history as a new day in Customer Experience.

Interested in harnessing Quiq’s power for your CX or contact center? Send us a demo request or get in touch another way and let’s talk.


Google Business Messages: Meet Your Customers Where They’re At

The world is a distracted and distracting place; between all the alerts, the celebrity drama on Twitter, and the fact that there are more hilarious animal videos on YouTube than you could ever hope to watch even if it were your full-time job, it takes a lot to break through the noise.

That’s one reason customer service-oriented businesses like contact centers are increasingly turning to text messaging. Not only are cell phones all but ubiquitous, but many people have begun to prefer text-message-based interactions to calls, emails, or in-person visits.

In this article, we’ll cover one of the biggest text-messaging channels: Google Business Messages. We’ll discuss what it is, what features it offers, and various ways of leveraging it to the fullest.

Let’s get going!


What is Google Business Messages?

Given that more than nine out of ten online searches go through Google, we will go out on a limb and assume you’ve heard of the Mountain View behemoth. But you may not be aware that Google has a Business Message service that is very popular among companies, like contact centers, that understand the advantages of texting their customers.

Business Messages allows you to create a “messaging surface” on Android or Apple devices. In practice, this essentially means that you can create a little “chat” button that your customers can use to reach out to you.

Behind the scenes, you will have to register for Business Messages, creating an “agent” that your customers will interact with. You have many configuration options for your Business Messages workflows; it’s possible to dynamically route a given message to contact center agents at a specific location, have an AI assistant powered by large language models generate a reply (more on this later), etc.

Regardless of how the reply is generated, it is then routed through the API to your agent, which is what actually interacts with the customer. A conversation is considered over when both the customer and your agent cease replying, but you can resume a conversation up to 30 days later.
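As a simplified sketch, an agent reply is sent by POSTing a message to the conversation. The payload shape below is modeled on the Business Messages REST API, but treat the details as illustrative; real calls also require OAuth service-account credentials:

```python
# Simplified sketch of replying to a Business Messages conversation.
# Endpoint and payload are modeled on the Business Messages REST API;
# treat details as illustrative, and note that auth is omitted.
import json
import uuid

def build_reply(conversation_id: str, text: str) -> tuple[str, dict]:
    """Build the endpoint URL and message payload for an agent reply."""
    url = (
        "https://businessmessages.googleapis.com/v1/"
        f"conversations/{conversation_id}/messages"
    )
    payload = {
        "messageId": str(uuid.uuid4()),  # must be unique per message
        "representative": {"representativeType": "BOT"},
        "text": text,
    }
    return url, payload

url, payload = build_reply("abc-123", "Thanks for reaching out! How can we help?")
print(url)
print(json.dumps(payload, indent=2))
```

Setting `representativeType` to `"HUMAN"` instead signals that a live agent, rather than an automated assistant, is replying.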

What’s the Difference Between Google RCS and Google Business Messages?

It’s easy to confuse Google’s Rich Communication Services (RCS) and Google Business Messages. Although the two are similar, it’s nevertheless worth remembering their differences.

Long ago, text messages had to be short, sweet, and contain nothing but words. But as we all began to lean more on text messaging to communicate, it became necessary to upgrade the basic underlying protocol. This way, we could also use video, images, GIFs, etc., in our conversations.

“Rich” communication is this upgrade, but it’s not relegated to emojis and such. RCS is also quickly becoming a staple for businesses that want to invest in livelier exchanges with their customers. RCS allows for custom logos and consistent branding, for example; it also makes it easier to collect analytics, insert QR codes, link out to calendars or Maps, etc.

As discussed above, Business Messages is a mobile messaging channel that integrates with Google Maps, Search, and brand websites, offering rich, asynchronous communication experiences. This platform not only makes customers happy but also contributes to your business’s bottom line through reduced call volumes, improved CSAT, and better conversion rates.

Importantly, Business Messages entry points are sometimes also prominently featured in Google search results, such as in answer cards, place cards, and sitelinks.

In short, there is a great deal of overlap between Google Business Messages and Google RCS. But two major distinctions are that RCS is not available on all Android devices while Business Messages is, and Business Messages doesn’t require a dedicated messaging app to be installed while RCS does.

The Advantages of Google Business Messaging

Google Business Messaging has many distinct advantages to offer the contact center entrepreneur. In the next few sections, we’ll discuss some of the biggest.

It Supports Robust Encryption

A key feature of Business Messages is its commitment to security and privacy, embodied in powerful end-to-end encryption.

What exactly does end-to-end encryption entail? In short, it ensures that a message remains secure and unreadable from the moment the sender types it to whenever the recipient opens it, even if it’s intercepted in transit. This level of security is baked in, requiring no additional setup or adjustments to security settings by the user.

The significance of this feature cannot be overstated. Today, it’s not at all uncommon to read about yet another multi-million-dollar ransomware attack or a data breach of staggering proportions. This has engendered a growing awareness of (and concern for) data security, meaning that present and future customers will value those platforms that make it a central priority of their offering.

By our estimates, this will only become more important with the rise of generative AI, which has made it increasingly difficult to trust text, images, and even videos seen online—none of which were particularly trustworthy even before it became possible to mass-produce them.

If you successfully position yourself as a pillar your customers can lean on, that will go a long way toward making you stand out in a crowded market.

It Makes Connecting With Customers Easier

Another advantage of Google Business Messages is that it makes it much easier to meet customers where they are. And where we are is “on our phones.”

Now, this may seem too obvious to need pointing out. After all, if your customers are texting all day and you’re launching a text-messaging channel of communication, then of course you’ll be more accessible.

But there’s more to this story. Google Business Messaging allows you to seamlessly integrate with other Google services, like Google Maps. If a customer is trying to find the number for your contact center, therefore, they could instead get in touch simply by clicking the “CHAT” button.

This, too, may seem rather uninspiring because it’s not as though it’s difficult to grab the number and call. But even leaving aside the rising generations’ aversion to making phone calls, there’s a concept known as “trivial inconvenience” that’s worth discussing in this context.

Here’s an example: if you want to stop yourself from snacking on cookies throughout the day, you don’t have to put them on the moon (though that would help). Usually, it’s enough to put them in the next room or downstairs.

Though this only slightly increases the difficulty of accessing your cookie supply, in most cases, it introduces just enough friction to substantially reduce the number of cookies you eat (depending on the severity of your Oreo addiction, of course).

Well, the exact same dynamic works in reverse. Though grabbing your contact center’s phone number from Google and calling you requires only one or two additional steps, that added work will be sufficient to deter some fraction of customers from reaching out. If you want to make yourself easy to contact, there’s no substitute for a clean integration directly into the applications your customers are using, and that’s something Google Business Messages can do extremely well.

It’s Scalable and Supports Integrations

According to legend, the name “Google” originally came from a play on the word “googol,” which is a “1” followed by 100 “0”s. Google, in other words, has always been about scale, and that is reflected in the way its software operates today. For our purposes, the most important manifestation of this is the scalability of its API. Though you may currently be operating at a few hundred or a few thousand messages per day, if you plan on growing, you’ll want to invest early in communication channels that can grow along with you.

But this is hardly the end of what integrations can do for you. If you’re in the contact center business there’s a strong possibility that you’ll eventually end up using a large language model like ChatGPT in order to answer questions more quickly, offboard more routine tasks, etc. Unless you plan on dropping millions of dollars to build one in-house, you’ll want to partner with an AI-powered conversational platform. As you go about finding a good vendor, make sure to assess the features they support. The best platforms have many options for increasing the efficiency of your agents, such as reusable snippets, auto-generated suggestions that clean up language and tone, and dashboarding tools that help you track your operation in detail.
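As a toy illustration of one such efficiency feature, reusable snippets amount to little more than templated replies with per-conversation substitutions. The snippet names and placeholder fields below are hypothetical, not taken from any particular vendor's product:

```python
# Hypothetical snippet library: canned replies with {placeholders} that get
# filled in per conversation. Real platforms layer permissions, analytics,
# and tone suggestions on top of this basic idea.
SNIPPETS = {
    "greeting": "Hi {name}, thanks for contacting {company}!",
    "hours": "{company} is open 9am-5pm, Monday through Friday.",
}

def expand_snippet(key: str, **fields: str) -> str:
    """Look up a snippet by name and substitute the given fields."""
    return SNIPPETS[key].format(**fields)

# Example: an agent inserts the greeting snippet with one keystroke.
reply = expand_snippet("greeting", name="Ada", company="Quiq")
```

Even this trivial version shows why snippets save agents time: the repetitive wording is written once, and only the customer-specific details change.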

Best Practices for Using Google Business Messages

Here, in the penultimate section, we’ll cover a few optimal ways of utilizing Google Business Messages.

Reply in a Timely Fashion

First, it’s important that you get back to customers as quickly as you’re able to. As we noted in the introduction, today’s consumers are perpetually drinking from a firehose of digital information. If it takes you a while to respond to their query, there’s a good chance they’ll either forget they reached out (if you’re lucky) or perceive it as an unpardonable affront and leave you a bad review (if you’re not).

An obvious way to answer immediately is with an automated message that says something like, “Thanks for your question. We’ll respond to you soon!” But you can’t just leave things there, especially if the question requires a human agent to intervene.

Whatever automated system you implement, you need to monitor how well your filters identify and escalate the most urgent queries. Remember that an agent might need a few hours to answer a tricky question, so factor that into your procedures.
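A triage step like the one described above can be sketched in a few lines: auto-acknowledge every inbound message, then flag the ones that should jump the queue to a human agent. The keyword list, queue names, and auto-reply text below are assumptions you'd tune against real conversation data, not a production-ready filter.

```python
# Illustrative triage sketch for inbound Business Messages chats.
# The urgency keywords and queue names are hypothetical placeholders.
URGENT_KEYWORDS = {"refund", "outage", "fraud", "cancel", "urgent"}

def triage(message_text: str) -> dict:
    """Return an immediate auto-reply plus a routing decision."""
    # Normalize words by stripping common punctuation and lowercasing.
    words = {w.strip(".,!?").lower() for w in message_text.split()}
    urgent = bool(words & URGENT_KEYWORDS)
    return {
        "auto_reply": "Thanks for your question. We'll respond to you soon!",
        "queue": "priority-human" if urgent else "standard",
    }
```

A real system would monitor how often this filter misroutes messages and adjust it, which is exactly the monitoring loop recommended above.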

This isn’t just something Google suggests; it’s codified in its policies. If you leave a Business Messages chat unanswered for 24 hours, Google might actually deactivate your company’s ability to use chat features.

Don’t Ask for Personal Information

As hackers have gotten more sophisticated, everyday consumers have responded by raising their guard.

On the whole, this is a good thing and will lead to a safer and more secure world. But it also means that you need to be extremely careful not to ask for anything like a social security number or a confirmation code via a service like Business Messages. What’s more, many companies are opting to include a disclaimer to this effect near the beginning of any interactions with customers.

Earlier, we pointed out that Business Messages supports end-to-end encryption, and having a clear, consistent policy about not collecting sensitive information fits into this broader picture. People will trust you more if they know you take their privacy seriously.

Make Business Messages Part of Your Overall Vision

Google Business Messages is a great service, but you’ll get the most out of it if you consider how it is part of a more far-reaching strategy.

At a minimum, this should include investing in other good communication channels, like Apple Messages and WhatsApp. People have had bitter, decades-long battles with each other over which code editor or word processor is best, so we know that they have strong opinions about the technology that they use. If you have many options for customers wanting to contact you, that’ll boost their satisfaction and their overall impression of your contact center.

The prior discussion of trivial inconveniences is also relevant here. It’s not hard to open a different messaging app under most circumstances, but if you don’t force a person to do that, they’re more likely to interact with you.

Schedule a Demo with Quiq

Google has been so monumentally successful that its name is now synonymous with “online search.” Even leaving aside rich messaging, encryption, and everything else we covered in this article, you can’t afford to ignore Business Messages for this reason alone.

But setting up an account is only the first step in the process, and it’s much easier when you have ready-made tools that you can integrate on day one. The Quiq conversational AI platform is one such tool, and it has a bevy of features that’ll allow you to reduce the workloads on your agents while making your customers even happier. Check us out or schedule a demo to see what we can do for you!

Request A Demo