What is Agentic AI? Everything You Need to Know.

The landscape of artificial intelligence is rapidly evolving, and at the forefront of this evolution is agentic AI. As noted by UiPath, “the convergence of powerful LLMs (large language models), sophisticated machine learning, and seamless enterprise integration has enabled the rise of agentic AI—which is the ‘brainpower’ behind AI agents.” This powerful technology represents a significant leap forward in how AI systems can autonomously operate, make decisions, and execute complex tasks.

While traditional AI and generative AI have made significant strides in automation and content creation, agentic AI addresses the crucial gaps in autonomous decision-making and task execution. It’s becoming increasingly clear that this technology will reshape how businesses operate, particularly in areas requiring sophisticated problem-solving and adaptability.

What is agentic AI?

Agentic AI refers to artificial intelligence systems that can autonomously execute tasks, make decisions, and adapt to changing conditions in real time. Unlike more passive AI systems, agentic AI demonstrates agency—the ability to act independently and make choices based on an understanding of its environment and objectives.

As a side note here: I led a webinar recently called From Contact Center to Agentic AI Leader: Embracing AI to Upgrade CX. My colleague Quiq VP of EMEA Chris Humphris and I went deep into agentic AI specifically for the contact center. I highly recommend you watch the replay or read the recap if you’re interested in how this technology works within the confines of the contact center—and what’s needed to make it successful at the platform level. Here’s a hint:

Agentic AI Platform Requirements

Watch the full webinar here.

How does agentic AI work?

Agentic AI operates through a sophisticated combination of technologies and approaches. As IBM explains, “Agentic AI systems provide the best of both worlds: using LLMs to handle tasks that benefit from the flexibility and dynamic responses while combining these AI capabilities with traditional programming for strict rules, logic, and performance. This hybrid approach enables the AI to be both intuitive and precise.”

The system works by integrating multiple components:

  • Language understanding: Processing and comprehending natural language inputs
  • Decision making: Analyzing situations and determining appropriate actions
  • Task execution: Utilizing APIs, IoT devices, and external systems to perform actions
  • Learning and adaptation: Improving performance based on outcomes and feedback

For example, in customer service, an agentic AI system can:

  1. Understand a customer’s inquiry about a missing delivery
  2. Access order tracking systems to verify shipping status
  3. Identify delivery issues and initiate appropriate actions
  4. Communicate updates to the customer
  5. Automatically schedule redelivery if necessary
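As a rough illustration, the five steps above can be sketched as a small orchestration loop. This is a hypothetical sketch: `call_llm`, `track_order`, and `schedule_redelivery` are stand-ins for a real model call and real backend APIs.

```python
# Hypothetical sketch of the five-step delivery workflow above. call_llm,
# track_order, and schedule_redelivery are stand-ins for a real model call
# and real backend APIs.

def call_llm(prompt: str) -> str:
    return "missing_delivery"  # placeholder for real intent classification

def track_order(order_id: str) -> dict:
    return {"status": "delayed"}  # stubbed order-tracking lookup

def schedule_redelivery(order_id: str) -> str:
    return f"redelivery-scheduled:{order_id}"  # stubbed carrier API

def handle_inquiry(message: str, order_id: str) -> str:
    intent = call_llm(f"Classify the intent: {message}")   # step 1: understand
    if intent == "missing_delivery":
        shipment = track_order(order_id)                   # step 2: verify status
        if shipment["status"] == "delayed":                # step 3: identify issue
            action = schedule_redelivery(order_id)         # step 5: remediate
            return f"Your package is delayed; {action}"    # step 4: update customer
    return "Let me connect you with a human agent."
```

The point of the sketch is the shape, not the stubs: one system moves from language understanding to backend lookups to concrete action without a human handoff.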

This customer service example demonstrates several key advancements over previous generations of AI assistants:

While traditional chatbots could only follow rigid, pre-programmed decision trees and provide templated responses, agentic AI shows true operational autonomy by orchestrating multiple systems and making contextual decisions.

The ability to seamlessly move between understanding natural language queries, accessing real-time shipping databases, evaluating delivery problems, and initiating concrete actions like rescheduling represents a quantum leap in capability.

Last-gen AI would typically need human handoffs at multiple points in this process – for instance, when moving from customer communication to backend systems access or when making judgment calls about appropriate remedial actions.

The agentic system’s ability to maintain context throughout the interaction while independently executing complex tasks showcases how modern AI can function as an independent problem-solver rather than just a conversational interface. This level of end-to-end automation and response was impossible with earlier generations of AI technology.

What is the difference between agentic AI and generative AI?

While both agentic AI and generative AI represent significant advances in artificial intelligence, they serve distinctly different purposes. Generative AI excels at creating content—text, images, code, or other media—based on patterns learned from training data. Agentic AI, however, goes beyond generation to actively make decisions and execute tasks.

Agentic AI vs. Generative AI

These technologies can work together synergistically, with generative AI providing content creation capabilities within an agentic AI’s broader decision-making framework.

Benefits of agentic AI

Key benefits include:

1. Autonomous operation

By eliminating the constraints of human-dependent processes, agentic AI creates a new paradigm of continuous, reliable service delivery that scales effortlessly with business demands. The result is:

  • Reduced human intervention: AI agents handle complex tasks independently, freeing human workers to focus on high-value activities requiring emotional intelligence and strategic thinking.
  • Consistent performance: The system maintains uniform quality standards regardless of workload, time of day, or complexity of tasks, eliminating human variability and fatigue-related errors.
  • 24/7 availability: Unlike human operators, AI agents operate continuously without fatigue, ensuring consistent service availability across all time zones.

2. Improved human-AI agent collaboration

Agentic AI changes the relationship between human agents and technology, creating a symbiotic partnership that enhances overall service delivery and job satisfaction. Here’s how:

  • Ensures consistency: AI agents establish and maintain standard operating procedures across teams, ensuring every customer interaction meets quality benchmarks regardless of which human agent is involved. This standardization helps eliminate variations in service quality, while still allowing for personal touch where needed.
  • Accelerates learning: New agents benefit from AI-powered guidance that provides suggestions and best practices, significantly reducing the time needed to achieve proficiency. The system learns from top performers and shares these insights across the entire team.
  • Reduces training time: By providing contextual assistance, agentic AI helps new agents become productive more quickly. Training modules adapt to individual learning patterns, focusing on areas where each agent needs the most support.
  • Improves agent performance with insights: The system continuously analyzes agent interactions, providing actionable feedback and performance metrics that help identify areas for improvement. These insights enable targeted coaching and development opportunities.
  • Improves job satisfaction and reduces agent turnover: By handling routine tasks and providing intelligent support, agentic AI allows agents to focus on more engaging, complex work that requires human empathy and problem-solving skills. This role enhancement leads to higher job satisfaction and lower turnover rates.

3. Enhanced efficiency

Through intelligent automation and rapid processing capabilities, agentic AI significantly improves operational performance across organizations, resulting in:

  • Faster task completion: AI agents process and execute tasks at machine speed, dramatically reducing resolution times compared to manual processes.
  • Reduced error rates: Systematic processing and built-in validation reduce mistakes common in human-operated systems.
  • Streamlined workflows: Intelligent routing and automated handoffs eliminate bottlenecks and optimize process flows.

4. Real-time adaptability

The system’s ability to learn and adjust in real time ensures optimal performance in dynamic business environments. It does this through:

  • Dynamic response to changing conditions: AI agents automatically adjust their approach based on current conditions and new information.
  • Continuous learning and improvement: The system learns from each interaction, continuously refining its responses and decision-making processes.
  • Personalized solutions: Advanced analytics enable tailored responses that account for individual user preferences and historical interactions.

5. Integration capabilities

Agentic AI integrates with existing business systems to create a unified operational environment. Main ways include:

  • More seamless connection: The technology easily integrates with current business tools and platforms, maximizing existing investments.
  • Unified data utilization: AI agents can access and analyze data from multiple sources to make informed decisions.
  • Comprehensive solution delivery: The system coordinates across different platforms and departments to deliver complete solutions.

6. Cost-effectiveness

Implementation of agentic AI leads to significant cost savings and improved resource utilization. Top areas for savings include:

  • Reduced operational costs: Automation of routine tasks and improved efficiency lead to lower operational expenses.
  • Intelligent workload distribution: Ensures optimal use of both human and technological resources.

Use cases for agentic AI

Agentic AI’s applications span numerous industries and use cases. Let’s look at the four industries we see as ripest for benefits, and the use cases best poised for AI.

1. Customer service

In customer service, agentic AI shifts support operations from reactive to proactive, enabling intelligent interactions that enhance customer satisfaction while reducing costs. Top use cases include:

  • Query resolution: Agentic AI systems can understand, process, and resolve customer inquiries in real-time, handling everything from basic FAQ responses to complex problem-solving. For example, an AI agent can troubleshoot technical issues, process refunds, or update account information without a human being involved.
  • Ticket management: The technology automatically categorizes, prioritizes, and routes support tickets based on urgency and complexity. It can resolve straightforward issues immediately while intelligently escalating more complex cases to appropriate human agents.
  • Proactive support: AI agents monitor customer behavior patterns and system metrics to identify potential issues before they become problems. They can initiate contact with customers to prevent issues or offer assistance before it’s requested.
  • Personalized assistance: By analyzing customer history, preferences, and behavior patterns, agentic AI delivers tailored support experiences. This might include offering specific product recommendations or customizing communication styles to match customer preferences—useful in most industries, but especially in travel and hospitality.

2. eCommerce and retail

In retail and eCommerce, agentic AI revolutionizes the retail experience by creating seamless, personalized shopping journeys while optimizing backend operations for maximum efficiency and profitability. Best use cases include:

  • Inventory management: Agentic AI systems continuously monitor stock levels, analyze sales patterns, and automatically adjust inventory based on real-time demand. They can trigger reorders, predict seasonal fluctuations, and optimize warehouse distribution.
  • Personalized shopping recommendations: The technology analyzes customer browsing history, purchase patterns, and demographic data to deliver highly relevant product suggestions. These recommendations adapt in real-time based on customer interactions.
  • Order processing: AI agents handle the entire order lifecycle, from initial purchase to delivery tracking. They can process payments, coordinate with shipping partners, and proactively address potential delivery issues.
  • Customer engagement: Through sophisticated analysis of customer behavior, agentic AI creates personalized marketing campaigns, timing promotions optimally, and adjusting pricing strategies based on demand and competition.

3. Business automation

By integrating intelligent decision-making with execution capabilities, agentic AI streamlines complex business processes and eliminates operational bottlenecks across organizations. Start automation targeting:

  • Supply chain optimization: AI agents monitor and adjust supply chain operations in real-time, coordinating with suppliers, managing logistics, and responding to disruptions automatically.
  • Process automation: The technology streamlines complex workflows by automating repetitive tasks, managing approvals, and coordinating cross-departmental activities.
  • Resource allocation: Agentic AI optimizes the distribution of human and material resources based on current demands and predicted future needs.
  • Workflow management: The system orchestrates complex business processes, ensuring tasks are completed in the correct order and by the appropriate parties.

4. Healthcare

Agentic AI enhances patient care and operational efficiency by combining real-time monitoring with intelligent decision support and automated administrative processes. From what we’re seeing, the biggest opportunities to apply agentic AI rest in:

  • Patient monitoring: AI agents continuously track patient vital signs and health metrics, alerting medical staff to concerning changes and predicting potential complications.
  • Treatment planning: The technology assists healthcare providers by analyzing patient data, medical history, and current research to suggest optimal treatment approaches.
  • Diagnostic support: Agentic AI analyzes medical imaging, lab results, and patient symptoms to assist in accurate diagnosis and treatment recommendations.
  • Administrative tasks: The system streamlines healthcare by managing appointments, processing insurance claims, and maintaining patient records.

Agentic AI challenges

Let’s take a look at the biggest challenges with agentic AI right now.

1. Ethical considerations

The autonomous nature of agentic AI raises ethical concerns that require careful attention. These systems, designed to make independent decisions and take action, must operate within established ethical frameworks to ensure responsible deployment.

Key ethical challenges include:

  • Accountability for AI decisions and actions
  • Transparency in decision-making processes
  • Potential bias
  • Impact on human autonomy and agency

Quiq SVP of Engineering Bill O’Neill recently talked to VUX World’s Kane Simms about this very issue:

2. Data security

Data security represents a critical challenge in agentic AI implementation, as these systems often require access to sensitive information to function effectively. (If you’re curious, you can learn about our approach to security here).

Primary security concerns include:

  • Protection of training data and model parameters
  • Secure communication channels for AI agents
  • Prevention of adversarial attacks
  • Data privacy compliance (GDPR, CCPA, etc.)
  • Access control and authentication mechanisms

3. Integration challenges

Incorporating agentic AI into both customer-facing integrations and your own internal systems can pose significant hurdles, like:

  • Legacy system compatibility
  • API standardization and communication protocols
  • Performance optimization
  • Scalability concerns
  • Resource allocation and management

4. Regulatory compliance

The evolving regulatory landscape surrounding AI technology presents potential issues, including:

  • Adherence to emerging AI regulations
  • Cross-border compliance requirements
  • Documentation and audit trails
  • Risk assessment and mitigation
  • Regular compliance monitoring and updates

5. Performance monitoring

Maintaining and optimizing agentic AI system performance requires continuous monitoring and adjustment:

  • Real-time performance metrics
  • Quality assurance processes
  • System reliability and availability
  • Error detection and correction
  • Performance optimization strategies

These challenges highlight the complexity of implementing agentic AI systems and underscore the importance of careful planning and robust risk management strategies. Success in deploying these systems requires a comprehensive approach that addresses technical, ethical, and operational concerns, while maintaining focus on business value and user needs.

Importantly, when you partner with agentic AI vendor Quiq, our AI platform and team neutralize these challenges for you.

The future of agentic AI: Shaping tomorrow’s enterprise workflows

As we stand at the intersection of technological innovation and business transformation, agentic AI emerges as a cornerstone of future enterprise operations. But what’ll follow? Here’s what I think.

Technical evolution and integration

The future of agentic AI lies in its ability to integrate with existing enterprise systems while pushing the boundaries of what’s possible. Advanced API ecosystems and sophisticated middleware solutions are already enabling AI agents to coordinate across previously siloed systems, creating unified workflows that span entire organizations.

As these integration capabilities mature, we’ll see the emergence of truly intelligent enterprises where data flows freely, and decisions are made with remarkable speed and accuracy.

The next generation of agentic AI systems will feature enhanced natural language processing capabilities, enabling a more nuanced understanding of context and intent. This advancement will allow AI agents to handle increasingly complex tasks while maintaining high accuracy. We’re moving toward systems that can not only execute predefined workflows but also design and optimize them in real time based on changing business conditions.

Enhancing enterprise workflows

The impact of agentic AI on enterprise workflows will be substantial. I believe future systems will feature the following:

1. Predictive process optimization

AI agents will move beyond reactive process management to predictive optimization. By analyzing patterns across millions of workflow executions, these systems will automatically identify potential bottlenecks before they occur and implement preventive measures. This capability will enable organizations to maintain peak operational efficiency while minimizing disruptions.

2. Dynamic resource allocation

The future workplace will see AI agents dynamically managing both human and technological resources. These systems will understand the strengths and limitations of different resource types, automatically routing work to optimize for efficiency, cost, and quality. This intelligent orchestration will create more flexible, resilient organizations capable of adapting to changing market conditions in real time.

3. Autonomous decision networks

As agentic AI evolves, we’ll see the emergence of decision networks where multiple AI agents collaborate to solve complex business challenges. These networks will coordinate across departments and functions, making decisions that optimize for overall business outcomes rather than departmental metrics.

Enhanced learning and adaptation

The future of agentic AI lies in its ability to learn and adapt at a faster pace. Next-generation systems will feature:

1. Collective learning

AI agents will learn not just from their own experiences but from the collective experiences of all instances across an organization or industry. This shared learning will accelerate the development of best practices and enable rapid adaptation to new challenges or opportunities.

2. Contextual understanding

Future systems will demonstrate deeper understanding of business context, enabling them to make more nuanced decisions that account for both explicit and implicit factors. This enhanced contextual awareness will lead to more sophisticated problem-solving capabilities and better alignment with business objectives.

3. Personalization at scale

As AI agents become more sophisticated, they will be able to deliver highly personalized experiences while maintaining operational efficiency. This will enable organizations to provide custom solutions at scale without sacrificing speed or quality.

Creating more resilient organizations

The evolution of agentic AI will contribute to building more resilient organizations through:

1. Adaptive workflows

Future systems will automatically adjust workflows based on changing conditions, ensuring business continuity even during unprecedented events. This adaptability will be key to maintaining operational efficiency in an increasingly volatile business environment.

2. Proactive risk management

AI agents will continuously monitor operations for potential risks, implementing preventive measures before issues arise. This proactive approach will help organizations maintain stability while pursuing innovation.

3. Sustainable scaling

The future of agentic AI will enable organizations to scale operations more sustainably, automatically adjusting processes to maintain efficiency as the organization grows.

Looking ahead

While challenges around data quality, system integration, and ethical considerations persist, the trajectory of agentic AI points toward increasingly sophisticated systems. Organizations that embrace this technology and prepare for its evolution will be better positioned to:

  • Create more efficient workflows that respond to changing business needs
  • Deliver personalized experiences at scale
  • Build more resilient organizations capable of thriving in uncertain conditions
  • Drive innovation through intelligent process optimization

As we move forward, the key to success will lie not just in implementing agentic AI, but in creating organizational cultures that can effectively leverage its capabilities while maintaining human oversight and strategic direction. The future belongs to organizations that can strike this balance, using agentic AI to enhance human capabilities, rather than replace them.

We’re only beginning to scratch the surface of what’s possible. As the technology continues to evolve, it will enable new forms of business operation that are more resilient than ever before.

I love Bill’s take on this in another clip from his conversation with Kane:

Final thoughts on agentic AI and how to get started with it

Agentic AI represents a significant advancement in artificial intelligence, offering businesses the ability to automate complicated tasks while maintaining intelligence in decision-making. As organizations seek to improve efficiency and customer experience, agentic AI provides a powerful solution that goes beyond traditional automation and generative AI capabilities.

Quiq stands at the forefront of this technology, offering agentic AI solutions that help businesses improve their operations and customer interactions. With a deep understanding of both the technology and business needs, Quiq provides sophisticated AI agents that can handle complex tasks and drive the outcomes your business cares about.

Everything You Need to Know About LLM Integration

It’s hard to imagine an application, website or workflow that wouldn’t benefit in some way from the new electricity that is generative AI. But what does it look like to integrate an LLM into an application? Is it just a matter of hitting a REST API with some basic auth credentials, or is there more to it than that?

In this article, we’ll enumerate the things you should consider when planning an LLM integration.

Why Integrate an LLM?

At first glance, it might not seem like LLMs make sense for your application—and maybe they don’t. After all, is the ability to write a compelling poem about a lost Highland Cow named Bo actually useful in your context? Or perhaps you’re not working on anything that remotely resembles a chatbot. Do LLMs still make sense?

The important thing to know about ‘Generative AI’ is that it’s not just about generating creative content like poems or chat responses. Generative AI (LLMs) can be used to solve a bevy of other problems that roughly fall into three categories:

  1. Making decisions (classification)
  2. Transforming data
  3. Extracting information

Let’s use the example of an inbound email from a customer to your business. How might we use LLMs to streamline that experience?

  • Making Decisions
    • Is this email relevant to the business?
    • Is this email low, medium or high priority?
    • Does this email contain inappropriate content?
    • What person or department should this email be routed to?
  • Transforming data
    • Summarize the email for human handoff or record keeping
    • Redact offensive language from the email subject and body
  • Extracting information
    • Extract information such as a phone number, business name, or job title from the email body to be used by other systems
  • Generating Responses
    • Generate a personalized, contextually-aware auto-response informing the customer that help is on the way
    • Alternatively, deploy a more sophisticated LLM flow (likely involving RAG) to directly address the customer’s need
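To make the categories concrete, here’s a minimal sketch of how the email tasks above might be phrased as prompts. The `llm()` function is a stub returning canned answers; in practice it would call any chat-completion API.

```python
# Hypothetical sketch of the email triage tasks above. llm() is a stub that
# returns canned answers; a real system would call a chat-completion API.

def llm(prompt: str) -> str:
    if "priority" in prompt:
        return "high"
    if "phone number" in prompt:
        return "555-0100"
    return "Customer reports their checkout is down."

EMAIL = "Hi, our checkout is down. Call me at 555-0100. -- Dana, CTO, Acme"

def classify_priority(email: str) -> str:  # making a decision
    return llm(f"Label this email low, medium, or high priority:\n{email}")

def summarize(email: str) -> str:  # transforming data
    return llm(f"Summarize this email in one sentence:\n{email}")

def extract_phone(email: str) -> str:  # extracting information
    return llm(f"Return the phone number found in this email:\n{email}")
```

Note that only the prompt wording changes between tasks; the same model handles decisions, transformations, and extraction.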

It’s easy to see how solving these tasks would increase user satisfaction while also improving operational efficiency. All of these use cases are utilizing ‘Generative AI’, but some feel more generative than others.

When we consider decision making, data transformation and information extraction in addition to the more stereotypical generative AI use cases, it becomes harder to imagine a system that wouldn’t benefit from an LLM integration. Why? Because nearly all systems have some amount of human-generated ‘natural’ data (like text) that is no longer opaque in the age of LLMs.

Prior to LLMs, it was possible to solve most of the tasks listed above. But it was exponentially harder. Let’s consider ‘is this email relevant to the business’. What would it have taken to solve this before LLMs?

  • A dataset of example emails labeled true if they’re relevant to the business and false if not (the bigger the better)
  • A training pipeline to produce a custom machine learning model for this task
  • Specialized hardware or cloud resources for training & inferencing
  • Data scientists, data curators, and Ops people to make it all happen

LLMs can solve many of these problems with radically lower effort and complexity, and they will often do a better job. With traditional machine learning models, your model is, at best, as good as the data you give it. With generative AI you can coach and refine the LLM’s behavior until it matches what you desire – regardless of historical data.

For these reasons LLMs are being deployed everywhere—and consumers’ expectations continue to rise.

How Do You Feel About LLM Vendor Lock-In?

Once you’ve decided to pursue an LLM integration, the first issue to consider is whether you’re comfortable with vendor lock-in. The LLM market is moving at lightspeed with the constant release of new models featuring new capabilities like function calls, multimodal prompting, and of course increased intelligence at higher speeds. Simultaneously, costs are plummeting. For this reason, it’s likely that your preferred LLM vendor today may not be your preferred vendor tomorrow.

Even at a fixed point in time, you may need more than a single LLM vendor.

In our recent experience, there are certain classification problems that Anthropic’s Claude does a better job of handling than comparable models from OpenAI. Similarly, we often prefer OpenAI models for truly generative tasks like generating responses. All of these LLM tasks might be in support of the same integration so you may want to look at the project not so much as integrating a single LLM or vendor, but rather a suite of tools.

If your use case is simple and low volume, a single vendor is probably fine. But if you plan to do anything moderately complex or high scale you should plan on integrating multiple LLM vendors to have access to the right models at the best price.
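One way to stay vendor-flexible is to hide providers behind a common call signature and route by task type. The sketch below is hypothetical: the provider functions are stubs, and the routing choices merely echo the preferences described above.

```python
# Hypothetical vendor-agnostic wrapper: every provider exposes the same
# call signature, and a router picks a provider per task type. The provider
# functions are stubs standing in for real client-library calls.
from typing import Callable

Provider = Callable[[str], str]

def openai_stub(prompt: str) -> str:      # stand-in for an OpenAI call
    return "openai:" + prompt

def anthropic_stub(prompt: str) -> str:   # stand-in for an Anthropic call
    return "anthropic:" + prompt

ROUTES: dict[str, Provider] = {
    "classification": anthropic_stub,  # tasks where we've preferred Claude
    "generation": openai_stub,         # truly generative tasks
}

def complete(task: str, prompt: str) -> str:
    return ROUTES[task](prompt)
```

With this shape, swapping vendors for a task type is a one-line change to the routing table rather than a rewrite of call sites.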

Resiliency & Scalability are Earned—Not Given

Making API calls to an LLM is trivial. Ensuring that your LLM integration is resilient and scalable requires more elbow grease. In fact, LLM API integrations pose unique challenges:

  • They are pretty slow. If your application is high-scale and you’re doing synchronous (threaded) network calls, it won’t scale well, since most threads will be blocked on LLM calls. Consider switching to async I/O, and support running multiple prompts in parallel to reduce the latency visible to the user.
  • They are throttled by requests per minute and tokens per minute. Estimate your LLM usage in terms of requests and tokens per minute, and work with your provider(s) to ensure sufficient bandwidth for peak load.
  • They are (still) kinda flaky, with unpredictable response times and unresponsive connections. Employ retry schemes in response to timeouts, 500s, 429s (rate limits), and the like.
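On the latency point, here’s a minimal sketch of issuing prompts concurrently with Python’s asyncio. `fake_llm` simulates a slow network call; a real integration would use an async client library.

```python
# Sketch of issuing prompts in parallel with async I/O. fake_llm simulates
# a slow network call; a real integration would use an async client library.
import asyncio

async def fake_llm(prompt: str) -> str:
    await asyncio.sleep(0.01)  # stands in for network latency
    return f"answer to: {prompt}"

async def run_parallel(prompts: list[str]) -> list[str]:
    # gather() issues all calls concurrently instead of one after another,
    # so total latency is roughly that of the slowest call, not the sum
    return await asyncio.gather(*(fake_llm(p) for p in prompts))

results = asyncio.run(run_parallel(["a", "b", "c"]))
```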

The above remediations will help your application be scalable and resilient while your LLM service is up. But what if it’s down? If your LLM integration is on a critical execution path you’ll want to support automatic failover. Some LLMs are available from multiple providers:

  • OpenAI models are hosted by OpenAI itself as well as Azure
  • Anthropic models are hosted by Anthropic itself as well as AWS

Whether an LLM has a single provider or several, you can also provision the same logical LLM in multiple cloud regions as a failover resource. Typically you’ll want provider failover built into your retry scheme. Our failover mechanisms get tripped regularly in production at Quiq, no doubt partly because of how rapidly the AI world is moving.
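A retry scheme with provider failover might look like the following sketch. The error type, retry counts, and backoff values are illustrative, and `flaky`/`healthy` are stubs standing in for real provider clients.

```python
# Hypothetical retry scheme with provider failover. The error type, retry
# counts, and backoff values are illustrative; flaky/healthy are stubs.
import time

class TransientError(Exception):
    pass

def call_with_failover(providers, prompt, retries=2, backoff=0.01):
    last_err = None
    for provider in providers:  # e.g., primary host, then a failover host
        for attempt in range(retries):
            try:
                return provider(prompt)
            except TransientError as err:  # timeouts, 429s, 500s, etc.
                last_err = err
                time.sleep(backoff * (2 ** attempt))  # exponential backoff
    raise last_err  # every provider exhausted its retries

def flaky(prompt):    # simulates a provider that is down
    raise TransientError("503")

def healthy(prompt):  # simulates the failover provider
    return "ok:" + prompt
```

Building failover into the retry loop means a regional outage degrades into a slightly slower call rather than an error surfaced to the user.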

Are You Actually Building an Agentic Workflow?

Oftentimes you have a task that you know is well-suited for an LLM. For example, let’s say you’re planning to use an LLM to analyze the sentiment of product reviews. On the surface, this seems like a simple task that will require one LLM call that passes in the product review and asks the LLM to decide the sentiment. Will a single prompt suffice? What if we also want to determine if a given review contains profanity or personal information? What if we want to ask three LLMs and average their results?

Many tasks require multiple prompts, prompt chaining and possibly RAG (Retrieval Augmented Generation) to best solve a problem. Just like humans, AI produces better results when a problem is broken down into pieces. Such solutions are variously known as AI Agents, Agentic Workflows or Agent Networks and are why open source tools like LangChain were originally developed.

In our experience, pretty much every prompt eventually grows up to be an Agentic Workflow, which has interesting implications for how it’s configured & monitored.
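As a toy example of breaking a problem into pieces, the review analysis above could be split into two focused prompts rather than one. The `llm()` stub stands in for real model calls.

```python
# Toy sketch of splitting review analysis into focused prompts; llm() is a
# stub standing in for real model calls.

def llm(prompt: str) -> str:
    if "profanity" in prompt:
        return "no"
    return "positive"

def analyze_review(review: str) -> dict:
    # Two narrow prompts usually beat one prompt that asks for everything
    sentiment = llm(f"Classify the sentiment of this review: {review}")
    profane = llm(f"Does this review contain profanity (yes/no)? {review}")
    return {"sentiment": sentiment, "profanity": profane == "yes"}
```

Once you are chaining two prompts, adding a third (say, PII detection) or averaging several models’ answers is an incremental change, which is how these integrations grow into agentic workflows.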

Be Ready for the Snowball Effect

Introducing LLMs can result in a technological snowball effect, particularly if you need to use Retrieval Augmented Generation (RAG). LLMs are trained on mostly public data that was available at a fixed point in the past. If you want an LLM to behave in light of up-to-date and/or proprietary data sources (which most non-trivial applications do) you’ll need to do RAG.

RAG refers to retrieving the up-to-date and/or proprietary data you want the LLM to use in its decision making and passing it to the LLM as part of your prompt.

Assuming you need to search a reference dataset like a knowledge base, product catalog or product manual, the retrieval part of RAG typically entails adding the following entities to your system:

1. An embedding model

An embedding model is roughly half of an LLM – it does a great job of reading and understanding information you pass it, but instead of generating a completion, it produces a numeric vector that encodes its understanding of the source material.

You’ll typically run the embedding model on all of the business data you want to search and retrieve for the LLM. Most LLM providers also have embedding models, or you can hit one via any major cloud.
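To show the shape of the operation, here’s a toy stand-in for an embedding model. Real embedding models learn dense vectors with hundreds or thousands of dimensions; this version just hashes words into a tiny fixed-size vector, so only the input/output shape is comparable.

```python
# Toy stand-in for an embedding model: it hashes words into a tiny
# fixed-size vector. Real embedding models learn dense vectors with
# hundreds or thousands of dimensions; only the shape is comparable.

def embed(text: str, dims: int = 8) -> list[float]:
    vec = [0.0] * dims
    for word in text.lower().split():
        vec[hash(word) % dims] += 1.0  # bucket each word into a dimension
    return vec
```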

2. A vector database

Once you have embeddings for all of your business data, you need to store them somewhere that facilitates speedy search based on numeric vectors. Solutions like Pinecone and Milvus fill this need, but that means integrating a new vendor or hosting a new database internally.
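To make the retrieval step concrete, here's a toy in-memory stand-in for a vector database that does brute-force cosine-similarity search. Real systems like Pinecone or Milvus use approximate nearest-neighbor indexes to do this at scale; the vectors below are made up for illustration:

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

class NaiveVectorStore:
    """Toy in-memory vector store: brute-force cosine search."""
    def __init__(self):
        self.items = []  # list of (text, vector) pairs

    def add(self, text, vector):
        self.items.append((text, vector))

    def search(self, query_vector, k=1):
        # Rank every stored vector against the query -- O(n), fine for a toy.
        ranked = sorted(
            self.items,
            key=lambda item: cosine_similarity(item[1], query_vector),
            reverse=True,
        )
        return [text for text, _ in ranked[:k]]

store = NaiveVectorStore()
store.add("Returns accepted within 30 days.", [0.9, 0.1, 0.0])
store.add("Free shipping over $50.", [0.1, 0.9, 0.0])
print(store.search([0.85, 0.2, 0.0], k=1))  # ['Returns accepted within 30 days.']
```

In production you would embed the query with the same embedding model used for the documents, then hand the top-k chunks to the prompt builder.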

After implementing embeddings and a vector search solution, you can now retrieve information to include in the prompts you send to your LLM(s). But how can you trust that the LLM’s response is grounded in the information you provided and not something based on stale information or purely made up?

There are specialized deep learning models that exist solely to verify that an LLM’s generative claims are grounded in the facts you provide. This practice is variously referred to as hallucination detection, claim verification, or natural language inference (NLI). We believe NLI models are an essential part of a trustworthy RAG pipeline, but managed cloud solutions are scarce and you may need to host one yourself on GPU-enabled hardware.
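As a rough illustration of what a grounding check is trying to measure, here's a toy lexical-overlap heuristic. To be clear, this is not how NLI models work (they are trained neural classifiers), but it shows the shape of the check: score each generated claim against the source material you provided:

```python
def naive_grounding_score(claim: str, source: str) -> float:
    """Toy heuristic: fraction of the claim's words that appear in the source.
    A real pipeline would use a trained NLI model instead."""
    claim_words = {w.strip(".,!?").lower() for w in claim.split()}
    source_words = {w.strip(".,!?").lower() for w in source.split()}
    if not claim_words:
        return 0.0
    return len(claim_words & source_words) / len(claim_words)

source = "Returns are accepted within 30 days of purchase."
print(naive_grounding_score("Returns are accepted within 30 days", source))  # 1.0
print(naive_grounding_score("We offer lifetime warranties", source))         # 0.0
```

Claims scoring below some threshold would be flagged or suppressed before the response reaches a user.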

Is a Black Box Sustainable?

If you bake your LLM integration directly into your app, you will effectively end up with a black box that can only be understood and improved by engineers. This could make sense if you have a decent size software shop and they’re the only folks likely to monitor or maintain the integration.

However, your best software engineers may not be your best (or most willing) prompt engineers, and you may wish to involve other personas like product and experience designers since an LLM’s output is often part of your application’s presentation layer & brand.

For these reasons, prompts will quickly need to move from code to configuration – no big deal. However, as an LLM integration matures it will likely become an Agentic Workflow involving:

  • More prompts, prompt parallelization & chaining
  • More prompt engineering
  • RAG and other orchestration

Moving these concerns into configuration is significantly more complex but necessary on larger projects. In addition, people will inevitably want to observe and understand the behavior of the integration to some degree.
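As a small illustration of prompts-as-configuration, here's a sketch where a prompt template lives in JSON rather than in code, so non-engineers can edit it without a deploy. The config keys and template are made up:

```python
import json

# Prompts live in config (here, an inline JSON string; in practice a file
# or database) rather than in application code.
PROMPT_CONFIG = json.loads("""
{
  "sentiment": {
    "model": "some-llm",
    "template": "Classify the sentiment of this review: {review}"
  }
}
""")

def render_prompt(name: str, **kwargs) -> str:
    """Look up a template by name and fill in its variables."""
    return PROMPT_CONFIG[name]["template"].format(**kwargs)

print(render_prompt("sentiment", review="Love it!"))
# Classify the sentiment of this review: Love it!
```

Once prompts are data, observability follows naturally: you can log which template version produced which output.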

For this reason it might make sense to embrace a visual framework for developing Agentic Workflows from the get-go. By doing so you open up the project to collaboration from non-engineers while promoting observability into the integration. If you don’t go this route be prepared to continually build out configurability and observability tools on the side.

Quiq’s AI Automations Take Care of LLM Integration Headaches For You

Hopefully we’ve given you a sense for what it takes to build an enterprise LLM integration. Now it’s time for the plug. The considerations outlined above are exactly why we built AI Studio and particularly our AI Automations product.

With AI Automations you can create a serverless API that handles all the complexities of a fully orchestrated AI flow, including support for multiple LLMs, chaining, RAG, resiliency, observability, and more. With AI Automations, your LLM integration can go back to being ‘just an API call with basic auth’.

Want to learn more? Dive into AI Studio or reach out to our team.

Request A Demo

How a Leading Office Supply Retailer Answered 35% More Store Associate Questions with Generative AI

In an era where artificial intelligence is rapidly transforming various industries, the retail sector is no exception. One leading national office supply retailer has taken a bold step forward, harnessing the power of generative AI to revolutionize their in-store experience and empower their associates.

This innovative approach has not only enhanced customer satisfaction but has also led to remarkable improvements in employee efficiency. In fact, the company has experienced a 35% increase in containment rates (with a 6-month average containment rate of 65%) vs. its legacy solution.

We’re excited to share the details of this groundbreaking initiative. Keep reading as we examine the company’s vision, their strategic approach to implementation, and the key objectives that drove their AI adoption. We’ll also discuss their GenAI assistant’s primary capabilities and how it’s improving both customer experiences and employee satisfaction. By the end, you’ll see how much potential lies in applying this use case to additional employees—not just in-store associates—as well as customers. There’s so much to unlock. Ready? Let’s dive in.

The Vision: Empowering Associates with GenAI

This company is dedicated to helping businesses of all sizes become more productive, connected, and inspired. Their team recognized the immense potential of GenAI early on. The vision? To create a GenAI-powered assistant that could enhance the capabilities of their store associates, leading to improved customer service, increased productivity, and higher job satisfaction.

Key objectives of the GenAI initiative:

  • Simplify store associate experience
  • Streamline access to information for associates
  • Improve customer service efficiency
  • Boost associate confidence and job satisfaction
  • Increase overall store associate productivity

Charting the Course to Building a GenAI-Powered Assistant

By partnering with Quiq, the national office supply retailer launched its employee-facing GenAI assistant in just 6 weeks. Here’s what the launch process looked like in 9 primary steps:

  1. Discover AI enhancement opportunities
  2. Pull content from current systems
  3. Run a proof of concept with the Quiq team
  4. Run testing across all categories of content
  5. Get approval to pilot with a top associate group
  6. Refine content based on associate feedback for chain rollout
  7. Run additional testing across all categories
  8. Start chain deployment to a larger district of stores
  9. Maintain content accuracy and refine based on updates

Examining the Office Supplier’s Phased Approach to Adoption

Pre-launch, the teams worked together to ensure all content was updated and accurate. Then they launched a phased testing approach, going through several rounds of iterative testing. After that, the retailer shared the GenAI assistant with a top internal associate team to test it and try to break it. Finally, the internal team utilized a top associate group to build excitement before launch.

At launch, the office supplier created a standalone page dedicated to the assistant and launched a SharePoint site to share updates with the internal team. They also facilitated internal learning sessions and quickly adapted when feedback numbers were low. Last but not least, the team made it fun by branding the assistant with a playful, on-brand name and personality.

Post-launch, the retailer includes the AI assistant in all communications to associates, with tips on what to search for in the assistant. They also leverage the assistant’s proactive messaging capabilities to build excitement for new launches, promotions, and best practices.

Primary Capabilities and Focus

Launching the GenAI assistant has been transformative because it is trained on all things related to the office supply retailer, which has simplified and accelerated access to information. That means associates can help customers faster, answering questions accurately the first time, every time, regardless of tenure. Ultimately, AI is empowering associates to do even better work—including enhanced cross-selling and upselling with proactive messages.

Proactive messaging to associates helps keep rotating sales goals top of mind so they can weave additional revenue opportunities into customer interactions. For example, if the design services team has unexpected bandwidth, the AI assistant can send a message letting associates know, inspiring them to highlight design and print services to customers who may be interested. It also provides a fun countdown to important launches, like back-to-school season, and “fun facts” that help build up useful knowledge over time. It’s like bite-size bits of training.

GenAI Transforms the In-Store Experience in 4 Critical Ways

Implementing the GenAI assistant has had a profound impact on in-store operations. By providing associates with instant access to accurate information, it has:

  1. Enhanced Customer Service: Associates can now provide faster, more accurate responses to customer questions.
  2. Increased Efficiency: The time it takes to find information has been significantly reduced, allowing associates to serve more customers.
  3. Boosted Confidence: With a reliable AI assistant at their fingertips, associates feel more empowered in their roles. Plus, new associates can be as effective as experienced ones with the assistant by their side.
  4. Improved Job Satisfaction: The reduced stress of information retrieval has led to higher job satisfaction among associates. Not to mention, the GenAI assistant is there to converse and empathize with employees who experience stressful situations with customers.

Results + What’s Next?

As a result of launching its GenAI assistant with Quiq, our national office supply retailer customer has realized a:

  • 68% self-service resolution rate, allowing associates to get immediate answers to their questions 2 out of 3 times
  • 4.82 out of 5 associate satisfaction score with AI

And as for next steps, the team is excited to:

  • Launch a selling assisted path
  • Expand to additional departments within stores
  • Add more devices in store for easier accessibility
  • Integrate with internal systems to be able to answer even more types of questions with real-time access to orders and other information

The Lesson: Humans and AI Can Work Together to Play Their Strongest Roles

The office supply retailer’s successful implementation of GenAI serves as a powerful example of how the technology can transform retail operations by helping human employees work more efficiently. By focusing on empowering associates with AI, the company has not only improved customer service but also enhanced employee satisfaction and productivity.

Interested in Diving Deeper into GenAI?

Download Two Truths and a Lie: Breaking Down the Major GenAI Misconceptions Holding CX Leaders Back. This comprehensive guide illuminates the path through the intricate landscape of generative AI in CX. We cut through the fog of misconceptions, offering crystal-clear, practical advice to empower your decision-making.

Current Large Language Models and How They Compare

From ChatGPT and Bard to BLOOM and Claude, there is a veritable ocean of current LLMs (large language models) for you to choose from. Some are specialized for specific use cases, some are open-source, and there’s a huge variance in the number of parameters they contain.

If you’re a CX leader and find yourself fascinated by the potential of using this technology in your contact center, it can be hard to know how to run proper LLM comparisons.

Today, we’re going to tackle this issue head-on by talking about specific criteria you can use to compare LLMs, sources of additional information, and some of the better-known options.

But always remember that the point of using an LLM is to deliver a world-class customer experience, and the best option is usually the one that delivers multi-model functionality with a minimum of technical overhead.

With that in mind, let’s get started!

What is Generative AI?

While it may seem like large language models (LLMs) and generative AI have only recently emerged, the work they’re based on goes back decades. The journey began in the 1940s with Walter Pitts and Warren McCulloch, who designed artificial neurons based on early brain research. However, practical applications became feasible only after the development of the backpropagation algorithm in 1985, which enabled effective training of larger neural networks.

By 1989, researchers had developed a convolutional system capable of recognizing handwritten numbers. Innovations such as long short-term memory networks further enhanced machine learning capabilities during this period, setting the stage for more complex applications.

The 2000s ushered in the era of big data, crucial for training generative pre-trained models like ChatGPT. This combination of decades of foundational research and vast datasets culminated in the sophisticated generative AI and current LLMs we see transforming contact centers and related industries today.

What’s the Best Way to do a Large Language Models Comparison?

If you’re shopping around for a current LLM for a particular application, it makes sense to first clarify the evaluation criteria you should be using. We’ll cover that in the sections below.

Large Language Models Comparison By Industry Use Case

One of the more remarkable aspects of current LLMs is that they’re good at so many things. Out of the box, most can do very well at answering questions, summarizing text, translating between natural languages, and much more.

But there might be situations in which you’d want to boost the performance of one of the current LLMs on certain tasks. The two most popular ways of doing this are retrieval-augmented generation (RAG) and fine-tuning a pre-trained model.

Here’s a quick recap of what both of these are:

  • Retrieval-augmented generation refers to getting one of the general-purpose, current LLMs to perform better by giving them access to additional resources they can use to improve their outputs. You might hook it up to a contact-center CRM so that it can provide specific details about orders, for example.
  • Fine-tuning refers to taking a pre-trained model and honing it for specific tasks by continuing its training on data related to that task. A generic model might be shown hundreds of polite interactions between customers and CX agents, for example, so that it’s more courteous and helpful.
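For a sense of what fine-tuning data looks like in practice, here's a sketch of a training example in a JSONL chat format (modeled on OpenAI's chat fine-tuning format; exact schemas vary by provider, and the example content is invented):

```python
import json

# One training example per JSONL line: a short, courteous CX exchange.
examples = [
    {"messages": [
        {"role": "system", "content": "You are a courteous support agent."},
        {"role": "user", "content": "Where is my order?"},
        {"role": "assistant",
         "content": "I'm happy to help! Could you share your order number?"},
    ]},
]

# Serialize to JSONL: one JSON object per line.
jsonl = "\n".join(json.dumps(e) for e in examples)
```

Hundreds or thousands of such examples would be uploaded to the provider's fine-tuning endpoint to nudge a generic model toward your desired tone.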

So, if you’re considering using one of the current LLMs in your business, there are a few questions you should ask yourself. First, are any of them perfectly adequate as-is? If they’re not, the next question is how “adaptable” they are. It’s possible to use RAG or fine-tuning with most of the current LLMs; the question is how easy they make it.

Of course, by far the easiest option would be to leverage a model-agnostic conversational AI platform for CX. These can switch seamlessly between different models, and some support RAG out of the box, meaning you aren’t locked into one current LLM and can always reach for the right tool when needed.

What’s a Good Way To Think About an Open-Source or Closed-Source Large Language Models Comparison?

You’ve probably heard of “open-source,” which refers to the practice of releasing source code to the public so that it can be forked, modified, and scrutinized.

The open-source approach has become incredibly popular, and this enthusiasm has partially bled over into artificial intelligence and machine learning. It is now fairly common to open-source software, datasets, and training frameworks like TensorFlow.

How does this translate to the realm of large language models? In truth, it’s a bit of a mixture. Some models are proudly open-sourced, while others jealously guard their model’s weights, training data, and source code.

This is one thing you might want to consider as you carry out your LLM comparisons. Some of the very best models, like ChatGPT, are closed-source. The downside of using such a model is that you’re entirely beholden to the team that built it. If they make updates or go bankrupt, you could be left scrambling at the last minute to find an alternative solution.

There’s no one-size-fits-all approach here, but it’s worth pointing out that a high-quality enterprise solution will support customization by allowing you to choose between different models (both closed-source and open-source). This way, you needn’t concern yourself with forking repos or fret over looming updates; you can just use whichever model performs the best for your particular application.

Getting A Large Language Models Comparison Through Leaderboards and Websites

Instead of doing your LLM comparisons yourself, you could avail yourself of a service built for this purpose.

Whatever rumors you may have heard, programmers are human beings, and human beings have a fondness for ranking and categorizing pretty much everything – sports teams, guitar solos, classic video games, you name it.

Naturally, as current LLMs have become better known, leaderboards and websites have popped up comparing them along all sorts of different dimensions. Here are a few you can use as you search around for the best current LLMs.

Leaderboards for Comparing LLMs

In the past couple of months, leaderboards have emerged which directly compare various current LLMs.

One is AlpacaEval, which uses a custom dataset to compare ChatGPT, Claude, Cohere, and other LLMs on how well they can follow instructions. AlpacaEval boasts high agreement with human evaluators, so in our estimation, it’s probably a suitable way of initially comparing LLMs, though more extensive checks might be required to settle on a final list.

Another good choice is Chatbot Arena, which pits two anonymous models side-by-side, has you rank which one is better, then aggregates all the scores into a leaderboard.

Finally, there is Hugging Face’s Open LLM Leaderboard, which is similar. Anyone can submit a new model for evaluation, which is then assessed based on a small set of key benchmarks from the Eleuther AI Language Model Evaluation Harness. These capture how well the models do in answering simple science questions, common-sense queries, and more, which will be of interest to CX leaders.

When combined with the criteria we discussed earlier, these leaderboards and comparison websites ought to give you everything you need to execute a constructive large language models comparison.

What are the Currently-Available Large Language Models?

Okay! Now that we’ve worked through all this background material, let’s turn to discussing some of the major LLMs that are available today. We make no promises about these entries being comprehensive (and even if they were, there’d be new models out next week), but they should be sufficient to give you an idea as to the range of options you have.

ChatGPT and GPT

Obviously, the titan in the field is OpenAI’s ChatGPT, which is really just a version of GPT that has been fine-tuned through reinforcement learning from human feedback to be especially good at sustained dialogue.

ChatGPT and GPT have been used in many domains, including customer service, question answering, and many others. As of this writing, the most recent GPT is version 4o (note: that’s the letter ‘o’, not the number ‘0’).

LLaMA

In April 2024, Meta’s AI team released version three of its Large Language Model Meta AI (LLaMA 3). At 70 billion parameters, it is not quite as big as GPT; this is intentional, as its purpose is to aid researchers who may not have the budget or expertise required to provision a behemoth LLM.

Gemini

Like GPT-4, Google’s Gemini is aimed squarely at dialogue. It is able to converse on a nearly infinite number of subjects, and from the beginning, the Google team has focused on having Gemini produce interesting responses that are nevertheless free of abusive and harmful language.

StableLM

StableLM is a lightweight, open-source language model built by Stability AI. It’s trained on a new dataset based on “The Pile”, which is itself made up of over 20 smaller, high-quality datasets that together amount to over 825 GB of natural language.

GPT4All

What would you get if you trained an LLM “…on a massive curated corpus of assistant interactions, which included word problems, multi-turn dialogue, code, poems, songs, and stories,” and then released it under an Apache 2.0 license? The answer is GPT4All, an open-source model whose purpose is to encourage research into what these technologies can accomplish.

BLOOM

The BigScience Large Open-Science Open-Access Multilingual Language Model (BLOOM) was released in late 2022. The team that put it together consisted of more than a thousand researchers from all over the world, and unlike the other models on this list, it’s specifically meant to be interpretable.

Pathways Language Model (PaLM)

PaLM is from Google, and is also enormous (540 billion parameters). It excels in many language-related tasks, and it became famous when it produced impressively clear explanations of tricky jokes. The most recent version is PaLM 2.

Claude

Anthropic’s Claude is billed as a “next-generation AI assistant.” The recent release of Claude 3.5 Sonnet “sets new industry benchmarks” in speed and intelligence, according to materials put out by the company. We haven’t looked at all the data ourselves, but we have played with the model and we know it’s very high-quality.

Command and Command R+

These are models created by Cohere, one of the major commercial platforms for current LLMs. They are comparable to most of the other big models, but Cohere has placed a special focus on enterprise applications, like agents, tools, and RAG.

What are the Best Ways of Overcoming the Limitations of Large Language Models?

Large language models are remarkable tools, but they nevertheless suffer from some well-known limitations. They tend to hallucinate facts, for example, sometimes fail at basic arithmetic, and can get lost in the course of lengthy conversations.

Overcoming the limitations of large language models is mostly a matter of either monitoring them and building scaffolding to enable RAG, or partnering with a conversational AI platform for CX that handles this tedium for you.

An additional wrinkle involves tradeoffs between different models. As we discuss below, sometimes models may outperform the competition on a task like code generation while being notably worse at a task like faithfully following instructions; in such cases, many opt to have an ensemble of models so they can pick and choose which to deploy in a given scenario. (It’s worth pointing out that even if you want to use one model for everything, you’ll absolutely need to swap in an upgraded version of that model eventually, so you still have the same model-management problem.)
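The "ensemble of models" approach often boils down to a simple routing table: map each task to the model that handles it best, with a fallback default. Here's a minimal sketch; the task names and model names are purely illustrative:

```python
# Hypothetical task -> model routing table; names are illustrative,
# not real endpoints or products.
ROUTES = {
    "code_generation": "model-x",
    "instruction_following": "model-y",
}
DEFAULT_MODEL = "model-y"

def pick_model(task: str) -> str:
    """Return the preferred model for a task, falling back to a default."""
    return ROUTES.get(task, DEFAULT_MODEL)

print(pick_model("code_generation"))  # model-x
print(pick_model("summarization"))    # model-y (default)
```

Swapping in an upgraded model version then becomes a one-line config change instead of a code change.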

This, too, is a place where a conversational AI platform for CX will make your life easier. The best such platforms are model-agnostic, meaning that they can use ChatGPT, Claude, Gemini, or whatever makes sense in a particular situation. This removes yet another headache, smoothing the way for you to use generative AI in your contact center with little fuss.

What are the Best Large Language Models?

Having read the foregoing, it’s natural to wonder if there’s a single model that best suits your enterprise. The answer is “it depends on the specifics of your use case.” You’ll have to think about whether you want an open-source model you control or you’re comfortable hitting an API, whether your use case is outside the scope of ChatGPT and better handled with a bespoke model, etc.

Speaking of use cases, in the next few sections, we’ll offer some advice on which current LLMs are best suited for which applications. However, this advice is based mostly on personal experience and other people’s reports of their experiences. This should be good enough to get you started, but bear in mind that these claims haven’t been borne out by rigorous testing and hard evidence—the field is too young for most of that to exist yet.

What’s the Best LLM if I’m on a Budget?

Pretty much any open-source model is given away for free, by definition. You can just Google “free open-source LLMs”, but among the more frequently recommended open-source models are LLaMA 2 and the newer LLaMA 3, both of which are free.

But many LLMs (both free and paid) also use the data you feed them for training purposes, which means you could be exposing proprietary or sensitive data if you’re not careful. Your best bet is to find a cost-effective platform that has an explicit promise not to use your data for training.

When you deal with an open-source model, you also have to pay for hosting, either your own or through a cloud service like Amazon Bedrock.

What’s the Best LLM for a Large Context Window?

The context window is the amount of text an LLM can handle at a time. When ChatGPT was released, it had a context window of around 4,000 tokens. (A “token” isn’t exactly a word, but it’s close enough for our purposes.)

Generally (and up to a point), the longer the context window, the better the model is able to perform. Today’s models generally have context windows of at least a few tens of thousands of tokens, and some get into the lower 100,000 range.
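If you're budgeting against a context window, a common rough rule of thumb for English text is about four characters per token. Here's a sketch of that heuristic (real tokenizers, such as OpenAI's tiktoken, give exact counts; this is only an approximation):

```python
def rough_token_count(text: str) -> int:
    """Very rough heuristic: ~4 characters per token for English text.
    Real tokenizers give exact, model-specific counts."""
    return max(1, len(text) // 4)

def fits_in_context(text: str, context_window: int) -> bool:
    """Check whether text plausibly fits in a model's context window."""
    return rough_token_count(text) <= context_window

print(fits_in_context("hello world", 4000))       # True
print(fits_in_context("x" * 1_000_000, 4000))     # False
```

A production system would also reserve headroom in the window for the system prompt, retrieved context, and the model's own response.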

But, at a staggering 1 million tokens–equivalent to an hour-long video or the full text of a long novel–Google’s Gemini simply towers over the others like Hagrid in the Shire.

That having been said, this space moves quickly, and context window length is an active area of research and development. These figures will likely be different next month, so be sure to check the latest information as you begin shopping for a model.

Choosing Among the Current Large Language Models

With all the different LLMs on offer, it’s hard to narrow the search down to the one that’s best for you. By carefully weighing the different metrics we’ve discussed in this article, you can choose an LLM that meets your needs with as little hassle as possible.

Pulling back a bit, let’s close by recalling that the whole purpose of choosing among current LLMs in the first place is to better meet the needs of our customers.

For this reason, you might want to consider working with a conversational AI platform for CX, like Quiq, that puts a plethora of LLMs at your fingertips through one simple interface.

Request A Demo

Going Beyond the GenAI Hype — Your Questions, Answered

We recently hosted a webinar all about how CX leaders can go beyond the hype surrounding GenAI, sift out the misinformation, and start driving real business value with AI Assistants. During the session, our speakers shared specific steps CX leaders can take to get their knowledge ready for AI, eliminate harmful hallucinations, and solve the build vs. buy dilemma.

We were overwhelmed with the number of folks who tuned in to learn more and hear real-life challenges, best practices, and success stories from Quiq’s own AI Assistant experts and customers. At the end of the webinar, we received so many amazing audience questions that we ran out of time to answer them all!

So, we asked speaker and Quiq Product Manager Max Fortis to respond to a few of our favorites. Check out his answers in the clips below, and be sure to watch the full 35-minute webinar on-demand.

Ensuring Assistant Access to Personal and Account Information

Using a Knowledge Base Written for Internal Agents

Teaching a Voice Assistant vs. a Chat Assistant

Monitoring and Improving Assistant Performance Over Time

Watch the Full Webinar to Dive Deeper

Whether you were unable to tune in live or want to watch the rerun, this webinar is available on-demand. Give it a listen to hear Max and his Quiq colleagues offer more answers and advice around how to assess and fill critical knowledge gaps, avoid common yet lesser-known hallucination types, and partner with technical teams to get the AI tools you need.

Watch Now

Does GenAI Leak Your Sensitive Data? Exposing Common AI Misconceptions (Part Three)

This is the final post in a three-part series clarifying the biggest misconceptions holding CX leaders like you back from integrating GenAI into their CX strategies. Our goal? To assuage your fears and help you start getting real about adding an AI Assistant to your contact center — all in a fun “two truths and a lie” format.

There are few faux pas as damaging and embarrassing for brands as sensitive data getting into the wrong hands. So it makes sense that data security concerns are a major deterrent for CX leaders thinking about getting started with GenAI.

In the first post of our AI Misconceptions series, we discussed why your data is definitely good enough to make GenAI work for your business. Next, we explored the different types of hallucinations that CX leaders should be aware of, and how they are 100% preventable with the right guardrails in place.

Now, let’s wrap up our series by exposing the truth about GenAI potentially leaking your company or customer data.

Misconception #3: “GenAI inadvertently leaks sensitive data.”

As we discussed in part one, AI needs training data to work. One way to collect that data is from the questions users ask. For example, if a large language model (LLM) is asked to summarize a paragraph of text, that text could be stored and used to train future models.

Unfortunately, there have been some famous examples of companies’ sensitive information becoming part of datasets used to train LLMs — take Samsung, for instance. Because of this, CX leaders often fear that using GenAI will result in their company’s proprietary data being disclosed when users interact with these models.

Truth #1: Public GenAI tools use conversation data to train their models.

Tools like OpenAI’s ChatGPT and Google Gemini (formerly Bard) are public-facing and often free — and that’s because their purpose is to collect training data. This means that any information users enter while using these tools is fair game to be used for training future models.

This is precisely how the Samsung data leak happened. The company’s semiconductor division allowed its engineers to use ChatGPT to check their source code. Not only did multiple employees copy/paste confidential code into ChatGPT, but one team member even used the tool to transcribe a recording of an internal-only meeting!

Truth #2: Properly licensed GenAI is safe.

People often confuse ChatGPT, the application or web portal, with the LLM behind it. While the free version of ChatGPT collects conversation data, OpenAI offers an enterprise LLM that does not. Other LLM providers offer similar enterprise licenses that specify that all interactions with the LLM and any data provided will not be stored or used for training purposes.

When used through an enterprise license, LLMs are also Service Organization Control Type 2, or SOC 2, compliant. This means they have to undergo regular audits from third parties to prove that they have the processes and procedures in place to protect companies’ proprietary data and customers’ personally identifiable information (PII).

The Lie: Enterprises must use internally-developed models only to protect their data.

Given these concerns over data leaks and hallucinations, some organizations believe that the only safe way to use GenAI is to build their own AI models. Case in point: Samsung is now “considering building its own internal AI chatbot to prevent future embarrassing mishaps.”

However, it’s simply not feasible for companies whose core business is not AI to build AI that is as powerful as commercially available LLMs — even if the company is as big and successful as Samsung. Not to mention the opportunity cost and risk of having your technical resources tied up in AI instead of continuing to innovate on your core business.

It’s estimated that training the LLM behind ChatGPT cost upwards of $4 million. It also required specialized supercomputers and access to a data set equivalent to nearly the entire Internet. And don’t forget about maintenance: AI startup Hugging Face recently revealed that retraining its Bloom LLM cost around $10 million.

GenAI Misconceptions

Using a commercially available LLM provides enterprises with the most powerful AI available without breaking the bank — and it’s perfectly safe when properly licensed. However, it’s also important to remember that building a successful AI Assistant requires much more than developing basic question/answer functionality.

Finding a Conversational CX Platform that harnesses an enterprise-licensed LLM, empowers teams to build complex conversation flows, and makes it easy to monitor and measure Assistant performance is a CX leader’s safest bet. Not to mention, your engineering team will thank you for giving them optionality for the control and visibility they want—without the risk and overhead of building it themselves!

Feel Secure About GenAI Data Security

Companies that use free, public-facing GenAI tools should be aware that any information employees enter can (and most likely will) be used for future model-training purposes.

However, properly licensed GenAI will not collect or use your data to train the model. Building your own GenAI tools for security purposes is completely unnecessary — and very expensive!

Want to read more or revisit the first two misconceptions in our series? Check out our full guide, Two Truths and a Lie: Breaking Down the Major GenAI Misconceptions Holding CX Leaders Back.

Will GenAI Hallucinate and Hurt Your Brand? Exposing Common AI Misconceptions (Part Two)

This is the second post in a three-part series clarifying the biggest misconceptions holding CX leaders like you back from integrating GenAI into their CX strategies. Our goal? To assuage your fears and help you start getting real about adding an AI Assistant to your contact center — all in a fun “two truths and a lie” format.

Did you know that the Golden Gate Bridge was transported for the second time across Egypt in October of 2016?

Or that the world record for crossing the English Channel entirely on foot is held by Christof Wandratsch of Germany, who completed the crossing in 14 hours and 51 minutes on August 14, 2020?

Probably not, because GenAI made these “facts” up. They’re called hallucinations, and AI hallucination misconceptions are holding a lot of CX leaders back from getting started with GenAI.

In the first post of our AI Misconceptions series, we discussed why your data is definitely good enough to make GenAI work for your business. In fact, you actually need a lot less data to get started with an AI Assistant than you probably think.

Now, we’re debunking AI hallucination myths and separating some of the biggest AI hallucination facts from fiction. Could adding an AI Assistant to your contact center put your brand at risk? Let’s find out.

Misconception #2: “GenAI will hallucinate and hurt my brand.”

While the example hallucinations provided above are harmless and even a little funny, this isn’t always the case. Unfortunately, there are many examples of times chatbots have cussed out customers or made racist or sexist remarks. This causes a lot of concern among CX leaders looking to use an AI Assistant to represent their brand.

Truth #1: Hallucinations are real (no pun intended).

Understanding AI hallucinations hinges on realizing that GenAI wants to provide answers — whether or not it has the right data. Hallucinations like those in the examples above occur for two common reasons.

AI-Induced Hallucinations Explained:

  1. The large language model (LLM) simply does not have the correct information it needs to answer a given question. This is what causes GenAI to get overly creative and start making up stories that it presents as truth.
  2. The LLM has been given an overly broad and/or contradictory dataset. In other words, the model gets confused and begins to draw conclusions that are not directly supported in the data, much like a human would do if they were inundated with irrelevant and conflicting information on a particular topic.

Truth #2: There’s more than one type of hallucination.

Contrary to popular belief, hallucinations aren’t just incorrect answers: They can also be classified as correct answers to the wrong questions. And these types of hallucinations are actually more common and more difficult to control.

For example, imagine a company’s AI Assistant is asked to help troubleshoot a problem that a customer is having with their TV. The Assistant could give the customer correct troubleshooting instructions — but for the wrong television model. In this case, GenAI isn’t wrong, it just didn’t fully understand the context of the question.

GenAI Misconceptions

The Lie: There’s no way to prevent your AI Assistant from hallucinating.

Many GenAI “bot” vendors attempt to fine-tune an LLM, connect clients’ knowledge bases, and then trust it to generate responses to their customers’ questions. This approach will always result in hallucinations. A common workaround is to pre-program “canned” responses to specific questions. However, this leads to unhelpful and unnatural-sounding answers even to basic questions, which then wind up being escalated to live agents.

In contrast, true AI Assistants powered by the latest Conversational CX Platforms leverage LLMs as a tool to understand and generate language — but there’s a lot more going on under the hood.

First of all, preventing hallucinations is not just a technical task. It requires a layer of business logic that controls the flow of the conversation by providing a framework for how the Assistant should respond to users’ questions.

This framework guides a user down a specific path that enables the Assistant to gather the information the LLM needs to give the right answer to the right question. This is very similar to how you would train a human agent to ask a specific series of questions before diagnosing an issue and offering a solution. Meanwhile, in addition to understanding what the intent of the customer’s question is, the LLM can be used to extract additional information from the question.

Referred to as “pre-generation checks,” these filters are used to determine attributes such as whether the question was from an existing customer or prospect, which of the company’s products or services the question is about, and more. These checks happen in the background in mere seconds and can be used to select the right information to answer the question. Only once the Assistant understands the context of the client’s question and knows that it’s within scope of what it’s allowed to talk about does it ask the LLM to craft a response.

But the checks and balances don’t end there: The LLM is only allowed to generate responses using information from specific, trusted sources that have been pre-approved, and not from the dataset it was trained on.

In other words, humans are responsible for providing the LLM with a source of truth that it must “ground” its response in. In technical terms, this is called Retrieval Augmented Generation, or RAG — and if you want to get nerdy, you can read all about it here!

Last but not least, once a response has been crafted, a series of “post-generation checks” happens in the background before returning it to the user. You can check out the end-to-end process in the diagram below:

RAG
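For the technically curious, the end-to-end flow above (pre-generation checks, grounded RAG generation, then post-generation checks) can be sketched roughly in code. This is an illustrative outline only, not any platform's actual implementation: the function names, the keyword-based topic check, and the word-overlap grounding check are simplified stand-ins for real classifiers and verifiers.

```python
ESCALATE = "Let me connect you with a live agent who can help."

def classify_topic(question, topics):
    """Pre-generation check (toy version): pick the topic named in the question."""
    q = question.lower()
    for topic in topics:
        if topic in q:
            return topic
    return None

def is_grounded(draft, sources):
    """Post-generation check (toy version): require word overlap with the sources."""
    source_words = set(" ".join(sources).lower().split())
    return bool(set(draft.lower().split()) & source_words)

def answer_with_guardrails(question, knowledge_base, llm):
    # 1. Pre-generation checks: confirm the question is in scope.
    topic = classify_topic(question, knowledge_base)
    if topic is None:
        return ESCALATE

    # 2. Retrieval: only pre-approved sources for this topic may ground the answer.
    sources = knowledge_base[topic]

    # 3. Grounded (RAG) generation: the prompt restricts the LLM to those sources.
    prompt = ("Answer ONLY from these sources; if they lack the answer, say so.\n"
              + "\n".join(sources) + "\nQuestion: " + question)
    draft = llm(prompt)

    # 4. Post-generation checks: reject drafts that drift away from the sources.
    return draft if is_grounded(draft, sources) else ESCALATE
```

In production, each step would be far more sophisticated (intent classifiers, semantic retrieval, dedicated groundedness verifiers), but the shape of the pipeline is the same: the LLM is only ever asked to write once the system knows the question is in scope and has trusted material to ground the answer in.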

Give Hallucination Concerns the Heave-Ho

To sum it up: Yes, hallucinations happen. In fact, there’s more than one type of hallucination that CX leaders should be aware of.

However, now that you understand the reality of AI hallucination, you know that it’s totally preventable. All you need are the proper checks, balances, and guardrails in place, both from a technical and a business logic standpoint.

Now that you’ve had your biggest misconceptions about AI hallucination debunked, keep an eye out for the next blog in our series, all about GenAI data leaks. Or, learn the truth about all three of CX leaders’ biggest GenAI misconceptions now when you download our guide, Two Truths and a Lie: Breaking Down the Major GenAI Misconceptions Holding CX Leaders Back.

Request A Demo

Is Your CX Data Good Enough for GenAI? Exposing Common AI Misconceptions (Part One)

If you’re feeling unprepared for the impact of generative artificial intelligence (GenAI), you’re not alone. In fact, nearly 85% of CX leaders feel the same way. But the truth is that the transformative nature of this technology simply can’t be ignored — and neither can your boss, who asked you to look into it.

We’ve all heard horror stories of racist chatbots and massive data leaks ruining brands’ reputations. But we’ve also seen statistics around the massive time and cost savings brands can achieve by offloading customers’ frequently asked questions to AI Assistants. So which is it?

This is the first post in a three-part series clarifying the biggest misconceptions holding CX leaders like you back from integrating GenAI into their CX strategies. Our goal? To assuage your fears and help you start getting real about adding an AI Assistant to your contact center — all in a fun “two truths and a lie” format. Prepare to have your most common AI misconceptions debunked!

Misconception #1: “My data isn’t good enough for GenAI.”

Answering customer inquiries usually requires two types of data:

  1. Knowledge (e.g. an order return policy) and
  2. Information from internal systems (e.g. the specific details of an order).

It’s easy to get caught up in overthinking the impact of data quality on AI performance and wondering whether or not your knowledge is even good enough to make an AI Assistant useful for your customers.

Updating hundreds of help desk articles is no small task, let alone building an entire knowledge base from scratch. Many CX leaders are worried about the amount of work it will require to clean up their data and whether their team has enough resources to support a GenAI initiative. In order for GenAI to be as effective as a human agent, it needs the same level of access to internal systems as human agents.

Truth #1: You have to have some amount of data.

Data is necessary to make AI work — there’s no way around it. You must provide some data for the model to access in order to generate answers. This is one of the most basic AI performance factors.

But we have good news: You need a lot less data than you think.

One of the most common myths about AI and data in CX is that it’s necessary to answer every possible customer question. Instead, focus on ensuring you have the knowledge necessary to answer your most frequently asked questions. This small step forward will have a major impact on your team without requiring a ton of time and resources to get started.

Truth #2: Quality matters more than quantity.

Given the importance of relevant data in AI, a few succinct paragraphs of accurate information are better than volumes of outdated or conflicting documentation. But even then, don’t sweat the small stuff.

For example, did a product name change fail to make its way through half of your help desk articles? Are there unnecessary hyperlinks scattered throughout? Was it written for live agents versus customers?

No problem — the right Conversational CX Platform can easily address these AI data dependency concerns without requiring additional support from your team.

The Lie: Your data has to be perfectly unified and specifically formatted to train an AI Assistant.

Don’t worry if your data isn’t well-organized or perfectly formatted. The reality is that most companies have services and support materials scattered across websites, knowledge bases, PDFs, .csvs, and dozens of other places — and that’s okay!

Today, the tools and technology exist to make aggregating this fragmented data a breeze. They’re then able to cleanse and format it in a way that makes sense for a large language model (LLM) to use.

For example, if you have an agent training manual in Google Docs and a product manual in PDF, this information can be disassembled, reformatted, and rewritten by an AI-powered transformation so that an LLM can actually use it.
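As a rough illustration of what that kind of transformation involves, here is a minimal sketch of one common step: splitting a long source document into small, LLM-sized chunks. The function name and word-count threshold are assumptions for demonstration; real platforms do far more (markup stripping, deduplication, rewriting, and ongoing sync).

```python
import re

def chunk_document(text, max_words=100):
    """Split a document into small, self-contained chunks for LLM retrieval.
    Paragraphs are kept intact; a new chunk starts when the word budget fills."""
    paragraphs = [p.strip() for p in re.split(r"\n\s*\n", text) if p.strip()]
    chunks, current, count = [], [], 0
    for para in paragraphs:
        words = len(para.split())
        if current and count + words > max_words:
            chunks.append("\n\n".join(current))
            current, count = [], 0
        current.append(para)
        count += words
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```

Each resulting chunk becomes a retrievable "source of truth" snippet like the ones an AI Assistant grounds its answers in.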

What’s more, the data used by your AI Assistant should be consistent with the data you use to train your human agents. This means that not only is it not required to build a special repository of information for your AI Assistant to learn from, but it’s not recommended. The very best AI platforms take on the work of maintaining this continuity by automatically processing and formatting new information for your Assistant as it’s published, as well as removing any information that’s been deleted.

Put Those Data Doubts to Bed

Now you know that your data is definitely good enough for GenAI to work for your business. Yes, quality matters more than quantity, but it doesn’t have to be perfect.

The technology exists to unify and format your data so that it’s usable by an LLM. And providing knowledge around even a handful of frequently asked questions can give your team a major lift right out of the gate.

Keep an eye out for the next blog in our series, all about GenAI hallucinations. Or, learn the truth about all three of CX leaders’ biggest GenAI misconceptions now when you download our guide, Two Truths and a Lie: Breaking Down the Major GenAI Misconceptions Holding CX Leaders Back.

Request A Demo

9 Top Customer Service Challenges — and How to Overcome Them

It’s a shame that customer service doesn’t always get the respect and attention it deserves because it’s among the most important ingredients in any business’s success. There’s no better marketing than an enthusiastic user base, so every organization should strive to excel at making customers happy.

Alas, this is easier said than done. When someone comes to you with a problem, they can be angry, stubborn, mercurial, and—let’s be honest—extremely frustrating. Some of this just comes with the territory, but some stems from the fact that many customer service professionals simply don’t have a detailed, high-level view of customer service challenges or how to overcome them.

That’s what we’re going to remedy in this post. Let’s jump right in!

What Are the Top Customer Service Challenges?

After years of running a generative AI platform for contact centers and interacting with leaders in this space, we have discovered that the top customer service challenges are:

  1. Understanding Customer Expectations
  2. Next Step: Exceeding Customer Expectations
  3. Dealing with Unreasonable Customer Demands
  4. Improving Your Internal Operations
  5. Not Offering a Preferred Communication Channel
  6. Not Offering Real-Time Options
  7. Handling Angry Customers
  8. Dealing With a Service Outage Crisis
  9. Retaining, Hiring, and Training Service Professionals

In the sections below, we’ll break each of these down and offer strategies for addressing them.

1. Understanding Customer Expectations

No matter how specialized a business is, it will inevitably cater to a wide variety of customers. Every customer has different desires, expectations, and needs regarding a product or service, which means you need to put real effort into meeting them where they are.

One of the best ways to foster this understanding is to remain in consistent contact with your customers. Deciding which communication channels to offer customers depends a great deal on the kinds of customers you’re serving. That said, in our experience, text messaging is a universally successful method of communication because it mimics how people communicate in their personal lives. The same goes for web chat and WhatsApp.

Beyond this, setting the right expectations upfront is another good way to address common customer service challenges. For example, if you are not available 24/7, only provide support via email, or don’t have dedicated account managers, you should make that clear right at the beginning.

Nothing will make a customer angrier than thinking they can text you only to realize that’s not an option in the middle of a crisis.

2. Next Step: Exceed Customer Expectations

Once you understand what your customers want and need, the next step is to go above and beyond to make them happy. Everyone wants to stand out in a fiercely competitive market, and going the extra mile is a great way to do that. One of the major customer service challenges is knowing how to do this proactively, but there are many ways you can succeed without a huge amount of effort.

Consider a few examples, such as:

  • Treating the customer as you would a friend in your personal life, i.e. by apologizing for any negative experiences and empathizing with how they feel;
  • Offering a credit or discount for a future purchase;
  • Sending them a card referencing their experience and thanking them for being a loyal customer.

The key is making sure they feel seen and heard. If you do this consistently, you’ll exceed your customers’ expectations, and the chances of them becoming active promoters of your company will increase dramatically.

3. Dealing with Unreasonable Demands

Of course, sometimes a customer has expectations that simply can’t be met, and this, too, counts as one of the serious customer service challenges. Customer service professionals often find themselves in situations where someone wants a discount that can’t be given, a feature that can’t be built, or a bespoke customization that can’t be done, and they wonder what they should do.

The only thing to do in this situation is to gently let the customer down, using respectful and diplomatic language. Something like, “We’re really sorry we’re not able to fulfill your request, but we’d be happy to help you choose an option that we currently have available” should do the trick.

4. Improving Your Internal Operations

Customer service teams face the constant pressure to improve efficiency, maintain high CSAT scores, drive revenue, and keep costs to service customers low. This matters a lot; slow response times and being kicked from one department to another are two of the more common complaints contact centers get from irate customers, and both are fixable with appropriate changes to your procedures.

Improving contact center performance is among the thorniest customer service challenges, but there’s no reason to give up hope!

One thing you can do is gather and utilize better data regarding your internal workflows. Data has been called “the new oil,” and with good reason—when used correctly, it’s unbelievably powerful.

What might this look like?

Well, you are probably already tracking metrics like first contact resolution (FCR) and average handle time (AHT), but this is easier when you have a unified, comprehensive dashboard that gives you quick insight into what’s happening across your organization.
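To make those two metrics concrete, FCR and AHT can be computed directly from raw contact records. The record fields below are illustrative assumptions, not any particular platform's schema:

```python
def contact_center_metrics(contacts):
    """Compute first contact resolution (FCR) and average handle time (AHT)
    from a list of contact records. FCR = share of first contacts resolved;
    AHT = mean handle time across all contacts."""
    first = [c for c in contacts if c["contact_number"] == 1]
    fcr = (sum(1 for c in first if c["resolved"]) / len(first)) if first else 0.0
    aht = (sum(c["handle_seconds"] for c in contacts) / len(contacts)) if contacts else 0.0
    return {"fcr": fcr, "aht_seconds": aht}
```

A real dashboard would slice these by agent, channel, and time window, which is exactly the granular, trend-level view described above.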

You might also consider leveraging the power of generative AI, which has led to AI assistants that can boost agent performance in a variety of different tasks. You have to tread lightly here because too much bad automation will also drive customers away. But when you use technology like large language models according to best practices, you can get more done and make your customers happier while still reducing the burden on your agents.

5. Not Offering a Preferred Communication Channel

In general, contact centers often deal with customer service challenges stemming from new technologies. One way this can manifest is the need to cultivate new channels in line with changing patterns in the way we all communicate.

You can probably see where this is going – something like 96% of Americans have some kind of cell phone, and if you’ve looked up from your own phone recently, you’ve probably noticed everyone else glued to theirs.

It isn’t just that customers now want to be able to text you instead of calling or emailing; the ubiquity of cell phones has changed their basic expectations. They now take it for granted that your agents will be available round the clock, that they can chat with an agent asynchronously as they go about other tasks, etc.

We can’t tell you whether it’s worth investing in multiple communication channels for your industry. But based on our research, we can tell you that having multiple channels—and text messaging in particular—is something most people want and expect.

6. Not Offering Real-Time Options

When customers reach out asking for help, their customer service problems likely feel unique to them. But since you have so much more context, you’re aware that a very high percentage of inquiries fall into a few common buckets, like “Where is my order?”, “How do I handle a return?”, “My item arrived damaged, how can I exchange it for a new one?”, etc.

These and similar inquiries can easily be resolved instantly using AI, leaving customers and agents happier and more productive.

7. Handling Angry Customers

A common story in the customer service world involves an interaction going south and a customer getting angry.

Gracefully handling angry customers is one of those perennial customer service challenges; the very first merchants had to deal with angry customers, and our robot descendants will be dealing with angry customers long after the sun has burned out.

Whenever you find yourself dealing with a customer who has become irate, there are two main things you have to do:

  1. Empathize with them
  2. Do not lose your cool

It can be hard to remember, but the customer isn’t frustrated with you, they’re frustrated with the company and products. If you always keep your responses calm and rooted in the facts of the situation, you’ll always be moving toward providing a solution.

8. Dealing With a Service Outage Crisis

Sometimes, our technology fails us. The wifi isn’t working on the airplane, a cell phone tower is down following a lightning storm, or that printer from Office Space jams so often it starts to drive people insane.

As a customer service professional, you might find yourself facing the wrath of your customers if your service is down. Unfortunately, in a situation like this, there’s not much you can do except honestly convey to your customers that your team is putting all their effort into getting things back on track. You should go into these conversations expecting frustrated customers, but make sure you avoid the temptation to overpromise.

Talk with your tech team and give customers a realistic timeline; don’t assure them it’ll be back in three hours if you have no way to back that up. Though Elon Musk seems to get away with it, the worst thing the rest of us can do is repeatedly promise unrealistic timelines and miss the mark.

9. Retaining, Hiring, and Training Service Professionals

You may have seen this famous Maya Angelou quote, which succinctly captures what the customer service business is all about:

“I’ve learned that people will forget what you said, people will forget what you did, but people will never forget how you made them feel.”

Learning how to comfort or reassure a person is high on the list of customer service challenges, and it’s something that should certainly be covered in your training for new agents.

But training is also important because it eases the strain on agents and reduces turnover. For customer service professionals, median tenure at a single company is less than a year, and every time someone leaves, that means finding a replacement, training them, and hoping they don’t head for the exits before your investment has paid off.

Keeping your agents happy will save you more money than you imagine, so invest in a proper training program. Ensure they know what’s expected of them, how to ask for help when needed, and how to handle challenging customers.

Final Thoughts on the Top Customer Service Challenges

Customer service challenges abound, but with the right approach, there’s no reason you shouldn’t be able to meet them head-on!

Check out our report for a more detailed treatment of three major customer service challenges and how to resolve them. Between the report and this post, you should be armed with enough information to identify your own internal challenges, fix them, and rise to new heights.

Request A Demo

5 Tips for Coaching Your Contact Center Agents to Work with AI

Generative AI has enormous potential to change the work done at places like contact centers. For this reason, we’ve spent a lot of energy covering it, from deep dives into the nuts and bolts of large language models to detailed advice for managers considering adopting it.

Here, we will provide tips on using AI tools to coach, manage, and improve your agents.

How Will AI Make My Agents More Productive?

Contact centers can be stressful places to work, but much of that stems from a paucity of good training and feedback. If an agent doesn’t feel confident in assuming their responsibilities or doesn’t know how to handle a tricky situation, that will cause stress.

Tip #1: Make Collaboration Easier

With the right AI tools for coaching agents, you can get state-of-the-art collaboration tools that allow agents to invite their managers or colleagues to silently appear in the background of a challenging issue. The customer never knows there’s a team operating on their behalf, but the agent won’t feel as overwhelmed. These same tools also let managers dynamically monitor all their agents’ ongoing conversations, intervening directly if a situation gets out of hand.

Agents can learn from these experiences and improve their performance over time.

Tip #2: Use Data-Driven Management

Speaking of improvement, a good AI platform will have resources that help managers get the most out of their agents in a rigorous, data-driven way. Of course, you’re probably already monitoring contact center metrics, such as CSAT and FCR scores, but this barely scratches the surface.

What you really need is a granular look into agent interactions and their long-term trends. This will let you answer questions like “Am I overstaffed?” and “Who are my top performers?” This is the only way to run a tight ship and keep all the pieces moving effectively.

Tip #3: Use AI To Supercharge Your Agents

As its name implies, generative AI excels at generating text, and there are several ways this can improve your contact center’s performance.

To start, these systems can sometimes answer simple questions directly, which reduces the demands on your team. Even when that’s not the case, however, they can help agents draft replies, or clean up already-drafted replies to correct errors in spelling and grammar. This, too, reduces their stress, but it also contributes to customers having a smooth, consistent, high-quality experience.

Tip #4: Use AI to Power Your Workflows

A related (but distinct) point concerns how AI can be used to structure the broader work your agents are engaged in.

Let’s illustrate using sentiment analysis, which makes it possible to assess the emotional state of a person doing something like filing a complaint. This can form part of a pipeline that sorts and routes tickets based on their priority, and it can also detect when an issue needs to be escalated to a skilled human professional.
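A toy version of that kind of routing pipeline might look like the sketch below. To be clear about the assumptions: the keyword-based scorer is a crude stand-in for a real sentiment model, and the escalation threshold is an arbitrary value chosen for illustration.

```python
NEGATIVE_WORDS = {"angry", "terrible", "broken", "refund", "unacceptable", "worst"}

def sentiment_score(text):
    """Toy stand-in for a real sentiment model: negative word density, so
    more negative words yield a more negative score."""
    words = text.lower().split()
    return -sum(1 for w in words if w.strip(".,!?") in NEGATIVE_WORDS) / max(len(words), 1)

def route_ticket(ticket_text, escalation_threshold=-0.15):
    """Sort a ticket into a queue and flag escalation to a skilled human
    professional when sentiment is strongly negative."""
    score = sentiment_score(ticket_text)
    escalate = score <= escalation_threshold
    return {"queue": "priority" if escalate else "standard", "escalate": escalate, "score": score}
```

In a production workflow, the same scoring-and-routing shape applies, just with a trained sentiment model in place of the keyword list.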

Tip #5: Train Your Agents to Use AI Effectively

It’s easy to get excited about what AI can do to increase your efficiency, but you mustn’t lose sight of the fact that it’s a complex tool your team needs to be trained to use. Otherwise, it’s just going to be one more source of stress.

You need to have policies around the situations in which it’s appropriate to use AI and the situations in which it’s not. These policies should address how agents should deal with phenomena like “hallucination,” in which a language model will fabricate information.

They should also contain procedures for monitoring the performance of the model over time. Because these models are stochastic, they can generate surprising output, and their behavior can change.

You need to know what your model is doing to intervene appropriately.

Wrapping Up

Hopefully, you’re more optimistic about what AI can do for your contact center, and this has helped you understand how to make the most out of it.

If there’s anything else you’d like to go over, you’re always welcome to request a demo of the Quiq platform. Since we focus on contact centers, we take customer service pretty seriously ourselves, and we’d love to give you the context you need to make the best possible decision!

Request A Demo

AI Gold Rush: How Quiq Won the Land Grab for AI Contact Centers (& How You Can Benefit)

There have been many transformational moments throughout the history of the United States, going back all the way to its unique founding.

Take for instance the year 1849.

For all of you San Francisco 49ers fans (sorry, maybe next year), you are very well aware of the land grab that was the birth of the state of California. That year, tens of thousands of people from the Eastern United States flocked to the California Territory hoping to strike it rich in a placer gold strike.

A lesser-known fact of that moment in history is that the gold strike in California was actually in 1848. And while all of those easterners were lining up for the rush, a small number of people from Latin America and Hawaii were already in production, stuffing their pockets full of nuggets.

176 years later, AI is the new gold rush.

Fast forward to 2024, and a new crowd is forming, working toward the land grab once again. Only this time, it’s not physical.

It’s AI in the contact center.

Companies are building infrastructure, hiring engineers, inventing tools, and trying to figure out how to build a wagon that won’t disintegrate on the trail (AKA hallucinate).

While many of those companies are going to make it to the gold fields, one has been there since 2023, and that is Quiq.

Yes, we’ve been mining LLM gold in the contact center since July of 2023 when we released our first customer-facing Generative AI assistant for Loop Insurance. Since then, we have released over a dozen more and have dozens more under construction. More about the quality of that gold in a bit.

This new gold rush in the AI space is becoming more crowded every day.

Everyone is saying they do Generative AI in one way, shape, or form. Most are offering some form of Agent Assist using LLM technologies, keeping that human in the loop and relying on small increments of improvement in AHT (Average Handle Time) and FCR (First Contact Resolution).

However, there is a difference when it comes to how platforms are approaching customer-facing AI Assistants.

Actually, there are a lot of differences. That’s a big reason we invented AI Studio.

AI Studio: Get your shovels and pick axes.

Since we’ve been on the bleeding edge of Generative AI CX deployments, we created a toolkit called AI Studio. We saw a gap for CX teams, who would otherwise have had to stitch together a myriad of tools while trying to stay focused on business outcomes.

AI Studio is a complete toolkit to empower companies to explore nuances in their AI use within a conversational development environment that’s tailored for customer-facing CX.

That last part is important: Customer-facing AI assistants, which teams can create together using AI Studio. Going back to our gold rush comparison, AI Studio is akin to the pick axes and shovels you need.

Only this time, success is guaranteed, and the proverbial gold at the end of the journey is much, much more enticing—precisely because customer-facing AI applications tend to move the needle dramatically further than simpler Agent Assist LLM builds.

That brings me to the results.

So how good is our gold?

Early results are showing that our LLM implementations are increasing resolution rates 50% to 100% above what was achieved using legacy NLU intent-based models, with resolution rates north of 60% in some FAQ-heavy assistants.

Loop Insurance saw a 55% reduction in email tickets in their contact center.

Secondly, intent matching has more than doubled, meaning intents (especially when there are multiple intents in a single message) are being correctly recognized and responded to far more often, which directly correlates to correct answers, fewer agent contacts, and satisfied customers.

That’s just the start though. Molekule hit a 60% resolution rate with a Quiq-built LLM-powered AI assistant. You can read all about that in our case study here.

And then there’s Accor, whose AI assistant across four Rixos properties has doubled (yes, 2X’ed) click-outs on booking links. Check out that case study here.

What’s next?

Like the miners in 1848, digging as much gold out of the ground as possible before the land rush, Quiq sits alone, out in front of a crowd lining up for a land grab.

With a dozen customer-facing LLM-powered AI assistants already living in the market producing incredible results, we have pioneered a space that will be remembered in history as a new day in Customer Experience.

Interested in harnessing Quiq’s power for your CX or contact center? Send us a demo request or get in touch another way and let’s talk.

Request A Demo

Google Business Messages: Meet Your Customers Where They’re At

The world is a distracted and distracting place; between all the alerts, the celebrity drama on Twitter, and the fact that there are more hilarious animal videos on YouTube than you could ever hope to watch even if it were your full-time job, it takes a lot to break through the noise.

That’s one reason customer service-oriented businesses like contact centers are increasingly turning to text messaging. Not only are cell phones all but ubiquitous, but many people have begun to prefer text-message-based interactions to calls, emails, or in-person visits.

In this article, we’ll cover one of the biggest text-messaging channels: Google Business Messages. We’ll discuss what it is, what features it offers, and various ways of leveraging it to the fullest.

Let’s get going!

Learn More About the End of Google Business Messages

 

What is Google Business Messages?

Given that more than nine out of ten online searches go through Google, we will go out on a limb and assume you’ve heard of the Mountain View behemoth. But you may not be aware that Google has a Business Messages service that is very popular among companies, like contact centers, that understand the advantages of texting their customers.

Business Messages allows you to create a “messaging surface” on Android or Apple devices. In practice, this essentially means that you can create a little “chat” button that your customers can use to reach out to you.

Behind the scenes, you will have to register for Business Messages, creating an “agent” that your customers will interact with. You have many configuration options for your Business Messages workflows; it’s possible to dynamically route a given message to contact center agents at a specific location, have an AI assistant powered by large language models generate a reply (more on this later), etc.

Regardless of how the reply is generated, it is then routed through the API to your agent, which is what actually interacts with the customer. A conversation is considered over when both the customer and your agent cease replying, but you can resume a conversation up to 30 days later.
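Under the hood, that reply flow boils down to an HTTP request against the Business Messages API. Here’s a rough Python sketch of building such a request; the endpoint shape and field names follow Google’s (now-deprecated) Business Messages API documentation as we understand it, so treat them as illustrative rather than authoritative:

```python
import uuid

API_BASE = "https://businessmessages.googleapis.com/v1"

def build_reply(conversation_id: str, text: str, bot_name: str = "Support Bot") -> tuple[str, dict]:
    """Build the endpoint URL and JSON payload for an agent's reply.

    Field names mirror the Business Messages API; check them against
    Google's docs before relying on them.
    """
    url = f"{API_BASE}/conversations/{conversation_id}/messages"
    payload = {
        "messageId": str(uuid.uuid4()),   # unique ID so retries aren't duplicated
        "text": text,
        "representative": {
            "representativeType": "BOT",  # or "HUMAN" once a live agent takes over
            "displayName": bot_name,
        },
    }
    return url, payload

url, payload = build_reply("abc123", "Thanks for reaching out! How can we help?")
```

In a real integration you would POST that payload with an OAuth-authorized client; the sketch stops short of the network call so the structure stays in focus.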

What’s the Difference Between Google RCS and Google Business Messages?

It’s easy to confuse Google’s Rich Communication Services (RCS) and Google Business Messages. Although the two are similar, it’s nevertheless worth remembering their differences.

Long ago, text messages had to be short, sweet, and contain nothing but words. But as we all began to lean more on text messaging to communicate, it became necessary to upgrade the basic underlying protocol. This way, we could also use video, images, GIFs, etc., in our conversations.

“Rich” communication is this upgrade, but it’s not relegated to emojis and such. RCS is also quickly becoming a staple for businesses that want to invest in livelier exchanges with their customers. RCS allows for custom logos and consistent branding, for example; it also makes it easier to collect analytics, insert QR codes, link out to calendars or Maps, etc.

As discussed above, Business Messages is a mobile messaging channel that integrates with Google Maps, Search, and brand websites, offering rich, asynchronous communication experiences. This platform not only makes customers happy but also contributes to your business’s bottom line through reduced call volumes, improved CSAT, and better conversion rates.

Importantly, Business Messages are sometimes also prominently featured in Google search results, such as answer cards, place cards, and site links.

In short, there is a great deal of overlap between Google Business Messages and Google RCS. But two major distinctions are that RCS is not available on all Android devices (where Business Messages is), and Business Messages doesn’t require you to have a messaging app installed (where RCS does).

The Advantages of Google Business Messaging

Google Business Messaging has many distinct advantages to offer the contact center entrepreneur. In the next few sections, we’ll discuss some of the biggest.

It Supports Robust Encryption

A key feature of Business Messages is its commitment to security and privacy, embodied in powerful end-to-end encryption.

What exactly does end-to-end encryption entail? In short, it ensures that a message remains secure and unreadable from the moment the sender types it to whenever the recipient opens it, even if it’s intercepted in transit. This level of security is baked in, requiring no additional setup or adjustments to security settings by the user.

The significance of this feature cannot be overstated. Today, it’s not at all uncommon to read about yet another multi-million-dollar ransomware attack or a data breach of staggering proportions. This has engendered a growing awareness of (and concern for) data security, meaning that present and future customers will value those platforms that make it a central priority of their offering.

By our estimates, this will only become more important with the rise of generative AI, which has made it increasingly difficult to trust text, images, and even movies seen online—none of which was particularly trustworthy even before it became possible to mass-produce them.

If you successfully position yourself as a pillar your customers can lean on, that will go a long way toward making you stand out in a crowded market.

It Makes Connecting With Customers Easier

Another advantage of Google Business Messages is that it makes it much easier to meet customers where they are. And where we are is “on our phones.”

Now, this may seem too obvious to need pointing out. After all, if your customers are texting all day and you’re launching a text-messaging channel of communication, then of course you’ll be more accessible.

But there’s more to this story. Google Business Messaging allows you to seamlessly integrate with other Google services, like Google Maps. If a customer is trying to find the number for your contact center, therefore, they could instead get in touch simply by clicking the “CHAT” button.

This, too, may seem rather uninspiring because it’s not as though it’s difficult to grab the number and call. But even leaving aside the rising generations’ aversion to making phone calls, there’s a concept known as “trivial inconvenience” that’s worth discussing in this context.

Here’s an example: if you want to stop yourself from snacking on cookies throughout the day, you don’t have to put them on the moon (though that would help). Usually, it’s enough to put them in the next room or downstairs.

Though this only slightly increases the difficulty of accessing your cookie supply, in most cases, it introduces just enough friction to substantially reduce the number of cookies you eat (depending on the severity of your Oreo addiction, of course).

Well, the exact same dynamic works in reverse. Though grabbing your contact center’s phone number from Google and calling you requires only one or two additional steps, that added work will be sufficient to deter some fraction of customers from reaching out. If you want to make yourself easy to contact, there’s no substitute for a clean integration directly into the applications your customers are using, and that’s something Google Business Messages can do extremely well.

It’s Scalable and Supports Integrations

According to legend, the name “Google” originally came from a play on the word “Googol,” which is a “1” followed by 100 “0”s. Google, in other words, has always been about scale, and that is reflected in the way its software operates today. For our purposes, the most important manifestation of this is the scalability of their API. Though you may currently be operating at a few hundred or a few thousand messages per day, if you plan on growing, you’ll want to invest early in communication channels that can grow along with you.

But this is hardly the end of what integrations can do for you. If you’re in the contact center business there’s a strong possibility that you’ll eventually end up using a large language model like ChatGPT in order to answer questions more quickly, offload more routine tasks, etc. Unless you plan on dropping millions of dollars to build one in-house, you’ll want to partner with an AI-powered conversational platform. As you go about finding a good vendor, make sure to assess the features they support. The best platforms have many options for increasing the efficiency of your agents, such as reusable snippets, auto-generated suggestions that clean up language and tone, and dashboarding tools that help you track your operation in detail.

Best Practices for Using Google Business Messages

Here, in the penultimate section, we’ll cover a few optimal ways of utilizing Google Business Messages.

Reply in a Timely Fashion

First, it’s important that you get back to customers as quickly as you’re able to. As we noted in the introduction, today’s consumers are perpetually drinking from a firehose of digital information. If it takes you a while to respond to their query, there’s a good chance they’ll either forget they reached out (if you’re lucky) or perceive it as an unpardonable affront and leave you a bad review (if you’re not).

An obvious way to answer immediately is with an automated message that says something like, “Thanks for your question. We’ll respond to you soon!” But you can’t just leave things there, especially if the question requires a human agent to intervene.

Whatever automated system you implement, you need to monitor how well your filters identify and escalate the most urgent queries. Remember that an agent might need a few hours to answer a tricky question, so factor that into your procedures.

This isn’t just something Google suggests; it’s codified in its policies. If you leave a Business Messages chat unanswered for 24 hours, Google might actually deactivate your company’s ability to use chat features.

Don’t Ask for Personal Information

As hackers have gotten more sophisticated, everyday consumers have responded by raising their guard.

On the whole, this is a good thing and will lead to a safer and more secure world. But it also means that you need to be extremely careful not to ask for anything like a social security number or a confirmation code via a service like Business Messages. What’s more, many companies are opting to include a disclaimer to this effect near the beginning of any interactions with customers.

Earlier, we pointed out that Business Messages supports end-to-end encryption, and having a clear, consistent policy about not collecting sensitive information fits into this broader picture. People will trust you more if they know you take their privacy seriously.

Make Business Messages Part of Your Overall Vision

Google Business Messages is a great service, but you’ll get the most out of it if you consider how it is part of a more far-reaching strategy.

At a minimum, this should include investing in other good communication channels, like Apple Messages and WhatsApp. People have had bitter, decades-long battles with each other over which code editor or word processor is best, so we know that they have strong opinions about the technology that they use. If you have many options for customers wanting to contact you, that’ll boost their satisfaction and their overall impression of your contact center.

The prior discussion of trivial inconveniences is also relevant here. It’s not hard to open a different messaging app under most circumstances, but if you don’t force a person to do that, they’re more likely to interact with you.

Schedule a Demo with Quiq

Google has been so monumentally successful its name is now synonymous with “online search.” Even leaving aside rich messaging, encryption, and everything else we covered in this article, you can’t afford to ignore Business Messages for this reason alone.

But setting up an account is only the first step in the process, and it’s much easier when you have ready-made tools that you can integrate on day one. The Quiq conversational AI platform is one such tool, and it has a bevy of features that’ll allow you to reduce the workloads on your agents while making your customers even happier. Check us out or schedule a demo to see what we can do for you!

Request A Demo

WhatsApp Business: A Guide for Contact Center Managers

In today’s digital era, businesses continually seek innovative ways to connect with their customers, striving to enhance communication and foster deeper relationships. Enter WhatsApp Business – a game-changer in the realm of digital communication. This powerful tool is not just a messaging app; it’s a bridge between businesses and their customers, offering a plethora of features designed to streamline communication, improve customer service, and boost engagement. Whether you’re a small business owner or part of a global enterprise, understanding the potential of WhatsApp Business could redefine your approach to customer communication.

What is Whatsapp Business?

WhatsApp is an application that supports text messaging, voice messaging, and video calling for over two billion global users. Because it leverages a simple internet connection to send and receive data, WhatsApp users can avoid the fees that once made communication so expensive.

Since WhatsApp already has such a large base of enthusiastic users, many international brands have begun leveraging it to communicate with their own audiences. It also has a number of built-in features that make it an attractive option for businesses wanting to establish a more personal connection with their customers, and we’ll cover those in the next section.

What Features Does WhatsApp Business Have?

In addition to its reach and the fact that it reduces the budget needed for communication, WhatsApp Business has additional functionality that makes it ideal for any business trying to interact with its customers.

When integrated with a tool like the Quiq conversational AI platform, WhatsApp Business can automatically transcribe voice-based messages. Even better, WhatsApp Business allows you to export these conversations later if you want to analyze them with a tool like natural language processing.

If your contact center agents and the customers they’re communicating with have both set a “preferred language,” WhatsApp can dynamically translate between these languages to make communication easier. So, if a user sends a voice message in Russian and the agent wants to communicate in English, they’ll have no trouble understanding one another.

What are the Differences Between WhatsApp and WhatsApp Business?

Before we move on, it’s worth pointing out that WhatsApp and WhatsApp Business are two different services. On its own, WhatsApp is the most widely used messaging application in the world. Businesses can use WhatsApp to talk to their customers, but with a WhatsApp Business account, they get a few extra perks.

Mostly, these perks revolve around building brand awareness. Unlike a basic WhatsApp account, a WhatsApp Business account allows you to include a lot of additional information about your company and its services. It also provides a labeling system so that you can organize the conversations you have with customers, and a variety of other tools so you can respond quickly and efficiently to any issues that come up.

The Advantages of WhatsApp Messaging for Businesses

Now, let’s spend some time going over the myriad advantages offered by a WhatsApp outreach strategy. Why, in other words, would you choose to use WhatsApp over its many competitors?

Global Reach and Popularity

First, we’ve already mentioned the fact that WhatsApp has achieved worldwide popularity, and in this section, we’ll drill down into more specifics.

When WhatsApp was acquired by Meta in 2014, it already boasted 450 million active users per month. Today, this figure has climbed to a remarkable 2.7 billion, but it’s believed it will reach a dizzying 3.14 billion as early as 2025.

With over 535 million users, India is the country where WhatsApp has gained the most traction by far. Brazil is second with 148 million users, and Indonesia is third with 112 million users.

The gender divide among WhatsApp users is pretty even – men account for just shy of 54% of WhatsApp users, so they have only a slight majority.

The app itself has over 5 billion downloads from the Google Play store alone, and it’s used to send 140 billion messages each day.

These data indicate that WhatsApp could be a very valuable channel to cultivate, regardless of the market you’re looking to serve or where your customers are located.

Personalized Customer Interactions

Firstly, platforms like WhatsApp enable businesses to customize communication with a level of scale and sophistication previously unavailable.

This customization is powered by machine learning, a technology that has consistently led the charge in the realm of automated content personalization. For example, Spotify’s ability to analyze your listening patterns and suggest music or podcasts that match your interests is powered by machine learning. Now, thanks to advancements in generative AI, similar technology is being applied to text messaging.

Past language models often fell short in providing personalized customer interactions. They tended to be more “rule-based” and, therefore, came off as “mechanical” and “unnatural.” However, contemporary models greatly improve agents’ capacity to adapt their messages to a particular situation.

While none of this suggests generative AI is going to entirely take the place of the distinctive human mode of expression, for a contact center manager aiming to improve customer experience, this marks a considerable step forward.

Below, we have a section talking a little bit more about integrating AI into WhatsApp Business.

End-to-End Encryption

One thing that has always been a selling point for WhatsApp is that it takes security and privacy seriously. This is manifested most obviously in the fact that it encrypts all messages end-to-end.

What does this mean? From the moment you start typing a message to another user all the way through when they read it, the message is protected. Even if another party were to somehow intercept your message, they’d still have to crack the encryption to read it. What’s more, all of this is enabled by default – you don’t have to spend any time messing around with security settings.

This might be more important than you realize. We live in a world increasingly beset by data breaches and ransomware attacks, and more people are waking up to the importance of data security and privacy. This means that a company that takes these aspects of its platform very seriously could have a leg up where building trust is concerned. Your users want to know that their information is safe with you, and using a messaging service like WhatsApp will help to set you apart.

Scalability

Finally, WhatsApp’s Business API is a sophisticated programmatic interface designed to scale your business’s outreach capabilities. By leveraging this tool, companies can connect with a broader audience, extending their reach to prospects and customers across various locations. This expansion is not just about increasing numbers; it’s about strategically enhancing your business’s presence in the digital world, ensuring that you’re accessible whenever your customers need to reach out to you.

By understanding the value WhatsApp’s Business API brings in reaching and engaging with more people effectively, you can make an informed decision about whether it represents the right technological solution for your business’s expansion and customer engagement strategies.

Enhancing Contact Center Performance with WhatsApp Messaging

Now, let’s turn our attention to some of the concrete ways in which WhatsApp can improve your company’s chances of success!

Improving Response and Resolution Times

Integrating technologies like WhatsApp Business into your agent workflow can drastically improve efficiency, simultaneously reducing response times and boosting customer satisfaction. Agents often have to manage several conversations at once, and it can be challenging to keep all those plates spinning.

However, a quality messaging platform like WhatsApp means they’re better equipped to handle these conversations, especially when utilizing tools like Quiq Compose.

Additionally, less friction in resolving routine tasks means agents can dedicate their focus to issues that necessitate their expertise. This not only leads to more effective problem-solving, it means that fewer customer inquiries are overlooked or terminated prematurely.

Integrating Artificial Intelligence

According to WhatsApp’s own documentation, there’s an ongoing effort to expand the API to allow for the integration of chatbots, AI assistants, and generative AI more broadly.

Today, these technologies possess a surprisingly sophisticated ability to conduct basic interactions, answer straightforward questions, and address a wide range of issues, all of which play a significant role in boosting customer satisfaction and making agents more productive.

We can’t say for certain when WhatsApp will roll out the red carpet for AI vendors like Quiq, but if our research over the past year is any indication, it will make it dramatically easier to keep customers happy!

Gathering Customer Feedback

Lastly, an additional advantage to WhatsApp messaging is the degree to which it facilitates collecting customer feedback. To adapt quickly and improve your services, you have to know what your customers are thinking. And more specifically, you have to know the details about what they like and dislike about your product or service.

In the Olde Days (i.e., 20 years ago or so), the only real way to do this was by conducting focus groups, sending out surveys – sometimes through the actual mail, if you can believe it – or doing something similarly labor-intensive.

Today, however, your customers are almost certainly walking around with a smartphone that supports text messaging. And, since it’s pretty easy for them to answer a few questions or dash off a few quick lines describing their experience with your service, odds are that you can gather a great deal of feedback from them.

Now, we hasten to add that you must exercise a certain degree of caution in interpreting this kind of feedback, as getting an accurate gauge of customer sentiment is far from trivial. To name just one example, the feedback might be exaggerated in both the positive and negative directions because the people most likely to send feedback via text messaging are the ones who really liked or really didn’t like you.

That said, so long as you’re taking care to contextualize the information coming from customers, supplementing it with additional data wherever appropriate, it’s valuable to have.

Wrapping Up

From its global reach and popularity to the personalized customer interactions it facilitates, WhatsApp Business stands out as a powerful solution for businesses aiming to enhance their digital presence and customer engagement strategies. By leveraging the advanced features of WhatsApp Business, companies can avail themselves of end-to-end encryption, enjoy scalability, and improve contact center performance, thereby positioning themselves at the forefront of the contact center game.

And speaking of being at the forefront, the Quiq conversational CX platform offers a staggering variety of different tools, from AI assistants powered by language models to advanced analytics on agent performance. Check us out or schedule a demo to see what we can do for your contact center!

6 Questions to Ask Generative AI Vendors You’re Evaluating

With all the power exhibited by today’s large language models, many businesses are scrambling to leverage them in their offerings. Enterprises in a wide variety of domains – from contact centers to teams focused on writing custom software – are adding AI-backed functionality to make their users more productive and the customer experience better.

But, in the rush to avoid being the only organization not using the hot new technology, it’s easy to overlook certain basic sanity checks you must perform when choosing a vendor. Today, we’re going to fix that. This piece will focus on several of the broad categories of questions you should be asking potential generative AI providers as you evaluate all your options.

This knowledge will give you the best chance of finding a vendor that meets your requirements, will help you with integration, and will ultimately allow you to better serve your customers.

These Are the Questions You Should Ask Your Generative AI Vendor

Training large language models is difficult. Besides the fact that it requires an incredible amount of computing power, there are also hundreds of tiny engineering optimizations that need to be made along the way. This is part of the reason why the offerings of different language model vendors vary so much from one another.

Some have a longer context window, others write better code but struggle with subtle language-based tasks, etc. All of this needs to be factored into your final decision because it will impact how well your vendor performs for your particular use case.

In the sections that follow, we’ll walk you through some of the questions you should raise with each vendor. Most of these questions are designed to help you get a handle on how easy a given offering will be to use in your situation, and what integrating it will look like.

1. What Sort of Customer Service Do You Offer?

We’re contact center and customer support people, so we understand better than anyone how important it is to make sure users know what our product is, what it can do, and how we can help them if they run into issues.

As you speak with different generative AI vendors, you’ll want to probe them about their own customer support, and what steps they’ll take to help you utilize their platform effectively.

For this, just start with the basics by figuring out the availability of their support teams – what hours they operate in, whether they can accommodate teams working in multiple time zones, and whether there is an option for 24/7 support if a critical problem arises.

Then, you can begin drilling into specifics. One thing you’ll want to know about is the channels their support team operates through. They might set up a private Slack channel with you so you can access their engineers directly, for example, or they might prefer to work through email, a ticketing system, or a chat interface. When you’re discussing this topic, try to find out whether you’ll have a dedicated account manager to work with.

You’ll also want some context on the issue resolution process. If you have a lingering problem that’s not being resolved, how do you go about escalating it, and what’s the team’s response time for issues in general?

Finally, it’s important that the vendors have some kind of feedback mechanism. Just as you no doubt have a way for clients to let you know if they’re dissatisfied with an agent or a process, the vendor you choose should offer a way for you to let them know how they’re doing so they can improve. This not only tells you they care about getting better, it also indicates that they have a way of figuring out how to do so.

2. Does Your Team Offer Help with Setting up the Platform?

A related subject is the extent to which a given generative AI vendor will help you set up their platform in your environment. A good way to begin is by asking what kinds of training materials and resources they offer.

Many vendors are promoting their platforms by putting out a ton of educational content, all of which your internal engineers can use to get up to speed on what those platforms can do and how they function.

This is the kind of thing that is easy to overlook, but you should pay careful attention to it. Choosing a generative AI vendor that has excellent documentation, plenty of worked-out examples, etc. could end up saving you a tremendous amount of time, energy, and money down the line.

Then, you can get clarity on whether the vendor has a dedicated team devoted to helping customers like you get set up. These roles are usually found under titles like “solutions architect”, so be sure to ask whether you’ll be assigned such a person and the extent to which you can expect their help. Some platforms will go to the moon and back to make sure you have everything you need, while others will simply advise you if you get stuck somewhere.

Which path makes the most sense depends on your circumstances. If you have a lot of engineers you may not need more than a little advice here and there, but if you don’t, you’ll likely need more handholding (but will probably also have to pay extra for that). Keep all this in mind as you’re deciding.

3. What Kinds of Integrations Do You Support?

Now, it’s time to get into more technical details about the integrations they support. When you buy a subscription to a generative AI vendor, you are effectively buying a set of capabilities. But those capabilities are much more valuable if you know they’ll plug in seamlessly with your existing software, and they’re even more valuable if you know they’ll plug into software you plan on building later on. You’ve probably been working on a roadmap, and now’s the time to get it out.

It’s worth checking to see whether the vendor can support many different kinds of language models. This involves a nuance in what the word “vendor” means, so let’s unpack it a little bit. Some generative AI vendors are offering you a model, so they’re probably not going to support another company’s model.

OpenAI and Anthropic are examples of model vendors, so if you work with them you’re buying their model and will not be able to easily incorporate someone else’s model.

Other vendors, by contrast, are offering you a service, and in many cases that service could theoretically be powered by many different models.

Quiq’s Conversational CX Platform, for example, supports OpenAI’s GPT models, and we have plans to expand the scope of our integrations to encompass even more models in the future.

Another thing you’re going to want to check on is whether the vendor makes it easy to integrate vector databases into your workflow. Vector embeddings are arrays of numbers that are remarkably good at capturing subtle relationships in large datasets; they’re becoming an ever-more-important part of machine learning, as evinced by the multitude of different vector databases now on offer.

The chances are pretty good that you’ll eventually want to leverage a vector database to store or search over customer interactions, and you’ll want a vendor that makes this easy.
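To give a feel for what “searching over” embeddings means, here’s a toy, pure-Python sketch of the cosine-similarity lookup a real vector database performs at scale. The three-dimensional “embeddings” and ticket names are invented for illustration; real embedding models produce hundreds or thousands of dimensions:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def nearest(query: list[float], docs: dict[str, list[float]]) -> str:
    """Return the stored item whose embedding is closest to the query."""
    return max(docs, key=lambda k: cosine_similarity(query, docs[k]))

# Toy 3-dimensional "embeddings" standing in for real model output.
past_tickets = {
    "refund request": [0.9, 0.1, 0.0],
    "password reset": [0.0, 0.2, 0.9],
}

print(nearest([0.8, 0.2, 0.1], past_tickets))  # → refund request
```

A production setup swaps the dictionary for a vector database and the toy lists for embeddings generated by a model, but the retrieval idea is the same.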

Finally, see if the vendor has any case studies you can look at. Quiq has published a case study on how our language services were utilized by LOOP, a car insurance company, to make a far superior customer-service chatbot. The result was that customers were able to get much more personalization in their answers and were able to resolve their problems fully half of the time, without help. This led to a corresponding 55% reduction in tickets, and a customer satisfaction rating of 75% (!) when interacting with the Quiq-powered AI assistant.

See if the vendors you’re looking at have anything similar you can examine. This is especially helpful if the case studies are focused on companies that are similar to yours.

4. How Does Prompt Engineering and Fine-Tuning Work for Your Model?

For many tasks, large language models work perfectly fine on their own, without much special effort. But there are two methods you should know about to really get the most out of them: prompt engineering and fine-tuning.

As you know, prompts are the basic method for interacting with language models. You’ll give a model a prompt like “What is generative AI?”, and it’ll generate a response. Well, it turns out that models are really sensitive to the wording and structure of prompts, and prompt engineers are those who explore the best way to craft prompts to get useful output from a model.

It’s worth asking potential vendors about this because they handle prompts differently. Quiq’s AI Studio encourages atomic prompting, where a single prompt has a clear purpose and intended completion, and we support running prompts in parallel and sequentially. You can’t assume everyone will do this, however, so be sure to check.
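To illustrate atomic prompting in a vendor-neutral way, here’s a hypothetical sketch. The prompt templates and the `run_model` stub are invented for illustration and aren’t any vendor’s actual API; the point is that each prompt has one clear purpose, and prompts can be chained:

```python
from dataclasses import dataclass

@dataclass
class Prompt:
    purpose: str    # one clear job per prompt ("atomic")
    template: str   # wording matters: models are sensitive to phrasing

    def render(self, **kwargs):
        return self.template.format(**kwargs)

# Hypothetical stub; a real pipeline would call an LLM API here.
def run_model(text):
    return f"<completion for: {text}>"

classify = Prompt(
    purpose="classify the customer's intent",
    template="Classify this message into one intent: {message}",
)
answer = Prompt(
    purpose="draft a reply",
    template="Write a short, polite reply to: {message}",
)

# Running atomic prompts in sequence: each step has a single
# intended completion, which makes the pipeline easier to debug.
message = "I can't log in to my account"
intent = run_model(classify.render(message=message))
reply = run_model(answer.render(message=message))
print(intent)
print(reply)
```

Because each prompt does one job, a failure in classification or drafting can be isolated and fixed independently.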

Then, there’s fine-tuning, which refers to training a model on a bespoke dataset such that its output is heavily geared towards the patterns found in that dataset. It’s becoming more common to fine-tune a foundational model for specific use cases, especially when those use cases involve a lot of specialized vocabulary such as is found in medicine or law.

Setting up a fine-tuning pipeline can be cumbersome or relatively straightforward depending on the vendor, so see what each vendor offers in this regard. It’s also worth asking whether they offer technical support for this aspect of working with the models.
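As a rough illustration of what preparing fine-tuning data can look like, here’s a sketch that writes question-and-answer pairs into a JSONL chat format. This is one common convention (OpenAI’s fine-tuning API uses a format along these lines), but your vendor’s requirements may differ, so treat the field names as an assumption:

```python
import json

# Hypothetical training examples: (customer question, ideal answer).
examples = [
    ("What does 'AHT' mean?", "AHT stands for average handle time."),
    ("How do I escalate a ticket?", "Use the 'Escalate' button in the console."),
]

def to_jsonl(pairs, system="You are a contact center assistant."):
    # One JSON object per line, each holding a full chat exchange.
    lines = []
    for question, ideal_answer in pairs:
        record = {
            "messages": [
                {"role": "system", "content": system},
                {"role": "user", "content": question},
                {"role": "assistant", "content": ideal_answer},
            ]
        }
        lines.append(json.dumps(record))
    return "\n".join(lines)

dataset = to_jsonl(examples)
print(dataset.splitlines()[0])
```

A real fine-tuning job would need hundreds or thousands of such examples, plus validation data, which is exactly the kind of pipeline work worth asking vendors about.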

5. Can Your Models Support Reasoning and Acting?

One of the current frontiers in generative AI is building more robust, “agentic” models that can execute strings of tasks on their way to completing a broader goal. This goes by a few different names, but one that has been popping up in the research literature is “ReAct”, which stands for “reasoning and acting”.

You can get ReAct functionality out of existing language models through chain-of-thought prompting, or by using systems like AutoGPT; to help you concretize this a bit, let’s walk through how ReAct works in Quiq.

With Quiq’s AI Studio, a conversational data model is used to classify and store both custom and standard data elements, and these data elements can be set within and across “user turns”. A single user turn is the time between when a user offers an input to the time at which the AI responds and waits for the next user input.

Our AI can set and reason about the state of the data model, applying rules to take the next best action. Common actions include things like fetching data, running another prompt, delivering a message, or offering to escalate to a human.
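A drastically simplified version of that loop might look like the following sketch. The state fields, rules, and actions here are invented for illustration and aren’t Quiq’s actual implementation; the idea is that the system inspects the state of a data model, applies rules, and picks the next best action:

```python
# Rule-based "next best action" over a conversational data model.
def next_action(state):
    if state.get("needs_human"):
        return "escalate"
    if "order_id" in state and "order_status" not in state:
        return "fetch_data"
    if "order_status" in state:
        return "deliver_message"
    return "run_prompt"  # e.g. ask the model to extract an order id

def fetch_order_status(order_id):
    # Stand-in for a real API call.
    return "shipped"

state = {"order_id": "A-1001"}
actions = []
for _ in range(5):  # bounded loop: never run forever
    action = next_action(state)
    actions.append(action)
    if action == "fetch_data":
        state["order_status"] = fetch_order_status(state["order_id"])
    elif action == "deliver_message":
        state["reply"] = f"Your order is {state['order_status']}."
        break

print(actions, state["reply"])
```

Real agentic systems interleave LLM reasoning with these deterministic rules, which is what makes them both flexible and controllable.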

Though these efforts are still early, this is absolutely the direction the field is taking. If you want to be prepared for what’s coming without the need to overhaul your generative AI stack later on, ask about how different vendors support ReAct.

6. What’s Your Pricing Structure Like?

Finally, you’ll need to talk to vendors about how their prices work, including any available details on licensing types, subscriptions, and costs associated with the integration, use, and maintenance of their solution.

To take one example, Quiq’s licensing is based on usage. We establish a usage pool wherein our customers pre-pay Quiq for a 12-month contract; then, as the customer uses our software money is deducted from that pool. We also have an annual AI Assistant Maintenance fee along with a one-time implementation fee.

Vendors can vary considerably in how their prices work, so if you don’t want to overpay then make sure you have a clear understanding of their approach.

Picking the Right Generative AI Vendor

Language models and related technologies are taking the world by storm, transforming many industries, including customer service and contact center management.

Making use of these systems means choosing a good vendor, and that requires you to understand each vendor’s model, how those models integrate with other tools, and what you’re ultimately going to end up paying.

If you want to see how Quiq stacks up and what we can do for you, schedule a demo with us today!

Request A Demo

Your Guide to Trust and Transparency in the Age of AI

Over the last few years, AI has really come into its own. ChatGPT and similar large language models have made serious inroads in a wide variety of natural language tasks, generative approaches have been tested in domains like music and protein discovery, researchers have leveraged techniques like chain-of-thought prompting to extend the capabilities of the underlying models, and much else besides.

People working in domains like customer service, content marketing, and software engineering are mostly convinced that this technology will significantly impact their day-to-day lives, but many questions remain.

Given the fact that these models are enormous artifacts whose inner workings are poorly understood, one of the main questions centers around trust and transparency. In this article, we’re going to address these questions head-on. We’ll discuss why transparency is important when advanced algorithms are making ever more impactful decisions, and turn our attention to how you can build a more transparent business.

Why is Transparency Important?

First, let’s take a look at why transparency is important in the first place. The next few sections will focus on the trust issues that stem from AI becoming a ubiquitous technology that few understand at a deep level.

AI is Becoming More Integrated

AI has been advancing steadily for decades, and this has led to a concomitant increase in its use. It’s now commonplace for us to pick entertainment based on algorithmic recommendations, for our loan and job applications to pass through AI filters, and for more and more professionals to turn to ChatGPT before Google when trying to answer a question.

We personally know of multiple software engineers who claim to feel as though they’re at a significant disadvantage if their preferred LLM is offline for even a few hours.

Even if you knew nothing about AI except the fact that it seems to be everywhere now, that should be sufficient incentive to want more context on how it makes decisions and how those decisions are impacting the world.

AI is Poorly Understood

But, it turns out there is another compelling reason to care about transparency in AI: almost no one knows how LLMs and neural networks more broadly can do what they do.

To be sure, very simple techniques like decision trees and linear regression models pose little analytical challenge, and we’ve written a great deal over the past year about how language models are trained. But if you were to ask for a detailed account of how ChatGPT manages to create a poem with a consistent rhyme scheme, we couldn’t tell you.

And – as far as we know – neither could anyone else.

This is troubling; as we noted above, AI has become steadily more integrated into our private and public lives, and that trend will surely only accelerate now that we’ve seen what generative AI can do. But if we don’t have a granular understanding of the inner workings of advanced machine-learning systems, how can we hope to train bias out of them, double-check their decisions, or fine-tune them to behave productively and safely?

These precise concerns are what have given rise to the field of explainable AI. Mathematical techniques like LIME and SHAP can offer some intuition for why a given algorithm generated the output it did, but they accomplish this by crudely approximating the algorithm instead of actually explaining it. Mechanistic interpretability is the only approach we know of that confronts the task directly, but it has only just gotten started.

This leaves us in the discomfiting situation of relying on technologies that almost no one groks deeply, including the people creating them.

People Have Many Questions About AI

Finally, people have a lot of questions about AI, where it’s heading, and what its ultimate consequences will be. These questions can be laid out on a spectrum, with one end corresponding to relatively prosaic concerns about technological unemployment and deepfakes influencing elections, and the other corresponding to more exotic fears around superintelligent agents actively fighting with human beings for control of the planet’s future.

Obviously, we’re not going to sort all this out today. But as a contact center manager who cares about building trust and transparency, it would behoove you to understand something about these questions and have at least cursory answers prepared for them.

How do I Increase Transparency and Trust when Using AI Systems?

Now that you know why you should take trust and transparency around AI seriously, let’s talk about ways you can foster these traits in your contact center. The following sections will offer advice on crafting policies around AI use, communicating the role AI will play in your contact center, and more.

Get Clear on How You’ll Use AI

The journey to transparency begins with having a clear idea of what you’ll be using AI to accomplish. This will look different for different kinds of organizations – a contact center, for example, will probably want to use generative AI to answer questions and boost the efficiency of its agents, while a hotel might instead attempt to automate the check-in process with an AI assistant.

Each use case has different requirements and different approaches that are better suited for addressing it; crafting an AI strategy in advance will go a long way toward helping you figure out how to allocate resources and prioritize different tasks.

Once you do that, you should then create documentation and a communication policy to support this effort. The documentation will make sure that current and future agents know how to use the AI platform you decide to work with, and it should address the strengths and weaknesses of AI, as well as information about when its answers should be fact-checked. It should also be kept up-to-date, reflecting any changes you make along the way.

The communication policy will help you know what to say if someone (like a customer) asks you what role AI plays in your organization.

Know Your Data

Another important thing you should keep in mind is what kind of data your model has been trained on, and how it was gathered. Remember that LLMs consume huge amounts of textual data and then learn patterns in that data they can use to create their responses. If that data contains biased information – if it tends to describe women as “nurses” and men as “doctors”, for example – that will likely end up being reflected in its final output. Reinforcement learning from human feedback and other approaches to fine-tuning can go part of the way to addressing this problem, but the best thing to do is ensure that the training data has been curated to reflect reality, not stereotypes.

For similar reasons, it’s worth knowing where the data came from. Many LLMs are trained somewhat indiscriminately, and might have even been shown corpora of pirated books or other material protected by copyright. This has only recently come to the forefront of the discussion, and OpenAI is currently being sued by several different groups for copyright infringement.

If AI ends up being an important part of the way your organization functions, the chances are good that, eventually, someone will want answers about data provenance.

Monitor Your AI Systems Continuously

Even if you take all the precautions described above, however, there is simply no substitute for creating a robust monitoring platform for your AI systems. LLMs are stochastic systems, meaning that it’s usually difficult to know for sure how they’ll respond to a given prompt. And since these models are prone to fabricating information, you’ll need to have humans at various points making sure the output is accurate and helpful.

What’s more, many machine learning algorithms are known to be affected by a phenomenon known as “model degradation”, in which their performance steadily declines over time. The only way you can check to see if this is occurring is to have a process in place to benchmark the quality of your AI’s responses.
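A benchmarking process doesn’t have to be elaborate to be useful. Here’s a minimal sketch that compares a recent window of quality scores against an earlier baseline and flags a decline. The scores are hypothetical weekly accuracy numbers from a human-graded sample of AI responses:

```python
# Hypothetical weekly accuracy from human-graded samples of AI output.
weekly_scores = [0.92, 0.91, 0.93, 0.88, 0.84, 0.81]

def degradation_alert(scores, window=3, drop_threshold=0.05):
    """Alert if the recent average falls well below the earlier average."""
    if len(scores) < 2 * window:
        return False
    baseline = sum(scores[:window]) / window
    recent = sum(scores[-window:]) / window
    return (baseline - recent) >= drop_threshold

print(degradation_alert(weekly_scores))
```

Even a crude check like this turns “I have a feeling the bot is getting worse” into a number you can act on.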

Be Familiar with Standards and Regulations

Finally, it’s always helpful to know a little bit about the various rules and regulations that could impact the way you use AI. These tend to focus on what kind of data you can gather about clients, how you can use it, and in what form you have to disclose these facts.

The following list is not comprehensive, but it does cover some of the more important laws:

  • The General Data Protection Regulation (GDPR) is a comprehensive regulatory framework established by the European Union to dictate data handling practices. It is applicable not only to businesses based in Europe but also to any entity that processes data from EU citizens.
  • The California Consumer Privacy Act (CCPA) was introduced by California to enhance individual control over personal data. It mandates clearer data collection practices by companies, requires privacy disclosures, and allows California residents to opt out of data collection.
  • SOC 2, developed by the American Institute of Certified Public Accountants (AICPA), focuses on the principles of confidentiality, privacy, and security in the handling and processing of consumer data.
  • In the United Kingdom, contact centers must be aware of the Financial Conduct Authority’s new “Consumer Duty” regulations. These regulations emphasize that firms should act with integrity toward customers, avoid causing them foreseeable harm, and support customers in achieving their financial objectives. As the integration of generative AI into this regulatory landscape is still being explored, it’s an area that stakeholders need to keep an eye on.

Fostering Trust in a Changing World of AI

An important part of utilizing AI effectively is making sure you do so in a way that enhances the customer experience and works to build your brand. There’s no point in rolling out a state-of-the-art generative AI system if it undermines the trust your users have in your company, so be sure to track your data, acquaint yourself with the appropriate laws, and communicate clearly.

Another important step you can take is to work with an AI vendor who enjoys a sterling reputation for excellence and propriety. Quiq is just such a vendor, and our Conversational AI platform can bring AI to your contact center in a way that won’t come back to bite you later. Schedule a demo to see what we can do for you, today!

Request A Demo

Getting the Most Out of Your Customer Insights with AI

The phrase “Knowledge is power” is usually believed to have originated with 16th- and 17th-century English philosopher Francis Bacon, in his Meditationes Sacræ. Because many people recognize something profoundly right about this sentiment, it has become received wisdom in the centuries since.

Now, data isn’t exactly the same thing as knowledge, but it is tremendously powerful. Armed with enough of the right kind of data, contact center managers can make better decisions about how to deploy resources, resolve customer issues, and run their business.

As is usually the case, the data contact center managers are looking for will be unique to their field. This article will discuss these data, why they matter, and how AI can transform how you gather, analyze, and act on data.

Let’s get going!

What are Customer Insights in Contact Centers?

As a contact center, your primary focus is on helping people work through issues related to a software product or something similar. But you might find yourself wondering who these people are, what parts of the customer experience they’re stumbling over, which issues are being escalated to human agents and which are resolved by bots, etc.

If you knew these things, you would be able to notice patterns and start proactively fixing problems before they even arise. This is what customer insights is all about, and it can allow you to fine-tune your procedures, write clearer technical documentation, figure out the best place to use generative AI in your contact center, and much more.

What are the Major Types of Customer Insights?

Before we turn to a discussion of the specifics of customer insights, we’ll deal with the major kinds of customer insights there are. This will provide you with an overarching framework for thinking about this topic and where different approaches might fit in.

Speech and Text Data

Customer service and customer experience both tend to be language-heavy fields. When an agent works with a customer over the phone or via chat, a lot of natural language is generated, and that language can be analyzed. You might use a technique like sentiment analysis, for example, to gauge how frustrated customers are when they contact an agent. This will allow you to form a fuller picture of the people you’re helping, and discover ways of doing so more effectively.

Data on Customer Satisfaction

Contact centers exist to make customers happy as they try to use a product, and for this reason, it’s common practice to send out surveys once a customer interaction concludes. When done correctly, these surveys yield incredibly valuable information: they can tell you whether you’re improving over time, whether a specific approach to training or a new large language model is helping or hurting customer satisfaction, and more.

Predictive Analytics

Predictive analytics is a huge field, but it mostly boils down to using machine learning or something similar to predict the future based on what’s happened in the past. You might try to forecast average handle time (AHT) based on the time of year, on the premise that when an issue arises has something to do with how long it will take to resolve.

To do this effectively you would need a fair amount of AHT data, along with the corresponding data about when the complaints were raised, and then you could fit a linear regression model on these two data streams. If you find that AHT reliably climbs during certain periods, you can have more agents on hand when required.
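Here’s a small sketch of that idea using ordinary least squares on made-up data, with week-of-year as the input and AHT in minutes as the output. Real analyses would use far more data and richer features:

```python
# Made-up data: AHT climbs toward the end of the year (holiday season).
weeks = [1, 5, 10, 20, 30, 40, 48, 52]
aht = [6.1, 6.0, 5.8, 5.9, 6.2, 6.6, 7.4, 7.9]

def fit_line(xs, ys):
    # Ordinary least squares for a single predictor.
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

slope, intercept = fit_line(weeks, aht)

def predict(week):
    return slope * week + intercept

print(round(predict(50), 2))
```

A positive slope here would be the signal that late-year weeks need extra staffing.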

Data on Agent Performance

Like employees in any other kind of business, agents perform at different levels. Junior agents will likely take much longer to work through a thorny customer issue than more senior ones, of course, and the same could be said for agents with an extensive technical background versus those without the knowledge this background confers. Or, the same agent might excel at certain kinds of tasks but perform much worse on others.

Regardless, by gathering these data on how agents are performing you, as the manager, can figure out where weaknesses lie across all your teams. With this information, you’ll be able to strategize about how to address those weaknesses with coaching, additional education, a refresh of the standard operating procedures, or what have you.

Channel Analytics

These days, there are usually multiple ways for a customer to get in touch with your contact center, and they all have different dynamics. Sending a long email isn’t the same thing as talking on the phone, and both are distinct from reaching out on social media or talking to a bot. If you have analytics on specific channels, how customers use them, and what their experience was like, you can make decisions about what channels to prioritize.

What’s more, customers will often have interacted with your brand in the past through one or more of these channels. If you’ve been tracking those interactions, you can incorporate this context to personalize responses when they reach out to resolve an issue in the future, which can help boost customer satisfaction.

What Specific Metrics are Tracked for Customer Insights?

Now that we have a handle on what kind of customer insights there are, let’s talk about specific metrics that come up in contact centers!

First Contact Resolution (FCR)

First contact resolution is the fraction of issues a contact center is able to resolve on the first try, i.e. the first time the customer reaches out; for this reason, it’s sometimes also known as Right First Time (RFT). Note that first contact resolution can apply to any channel, whereas first call resolution applies only when the customer contacts you over the phone. The two share an acronym but are different metrics.

Average Handle Time (AHT)

Average handle time is one of the more important metrics contact centers track, and it refers to the mean length of time an agent spends on a task. This is not just the time the agent spends talking to a customer; it also encompasses any follow-up work that happens after the conversation ends.

Customer Satisfaction (CSAT)

The customer satisfaction score attempts to gauge how customers feel about your product and service. It’s common practice to collect this information from many customers and then average the scores to get a broader picture of how your customers feel. The CSAT can give you a sense of whether customers are getting happier over time, whether certain products, issues, or agents make them happier than others, etc.

Call Abandon Rate (CAR)

The call abandon rate is the fraction of customers who end a call with an agent before their question has been answered. It can be affected by many things, including how long the customers have to wait on hold, whether they like the “hold” music you play, and similar sorts of factors. You should be aware that CAR doesn’t account for missed calls, lost calls, or dropped calls.
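To tie the four metrics together, here’s a sketch that computes each of them from a handful of hypothetical contact records. The field names are invented for illustration; your platform’s schema will differ:

```python
# Hypothetical contact records with invented field names.
contacts = [
    {"resolved_first_contact": True,  "handle_minutes": 5.0, "csat": 5, "abandoned": False},
    {"resolved_first_contact": False, "handle_minutes": 9.5, "csat": 3, "abandoned": False},
    {"resolved_first_contact": True,  "handle_minutes": 4.5, "csat": 4, "abandoned": True},
    {"resolved_first_contact": False, "handle_minutes": 7.0, "csat": 2, "abandoned": False},
]

n = len(contacts)
fcr = sum(c["resolved_first_contact"] for c in contacts) / n   # first contact resolution
aht = sum(c["handle_minutes"] for c in contacts) / n           # average handle time
csat = sum(c["csat"] for c in contacts) / n                    # customer satisfaction
car = sum(c["abandoned"] for c in contacts) / n                # call abandon rate

print(f"FCR {fcr:.0%}, AHT {aht:.1f} min, CSAT {csat:.2f}/5, CAR {car:.0%}")
```

In practice you’d compute these per week, per channel, or per agent to spot trends rather than as one-off numbers.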

***

Data-driven contact centers track a lot of metrics, and these are just a sample. Nevertheless, they should convey a sense of what kinds of numbers a manager might want to examine.

How Can AI Help with Customer Insights?

And now, we come to the “main” event, a discussion of how artificial intelligence can help contact center managers gather and better utilize customer insights.

Natural Language Processing and Sentiment Analysis

An obvious place to begin is with natural language processing (NLP), which refers to a subfield in machine learning that uses various algorithms to parse (or generate) language.

There are many ways in which NLP can aid in finding customer insights. We’ve already mentioned sentiment analysis, which detects the overall emotional tenor of a piece of language. If you track sentiment over time, you’ll be able to see if you’re delivering more or less customer satisfaction.

You could even get slightly more sophisticated and pair sentiment analysis with something like named entity recognition, which extracts information about entities from language. This would allow you to e.g. know that a given customer is upset, and also that the name of a particular product kept coming up.
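Here’s an intentionally crude sketch of that pairing. A production system would use trained NLP models or an LLM rather than the keyword lists below, which (along with the product names) are invented for illustration:

```python
# Keyword-based stand-ins for sentiment analysis and entity recognition.
NEGATIVE = {"angry", "frustrated", "broken", "terrible", "refund"}
POSITIVE = {"great", "thanks", "love", "helpful"}
PRODUCTS = {"widgetpro", "gadgetmax"}   # hypothetical product names

def analyze(message):
    tokens = [w.strip(".,!?").lower() for w in message.split()]
    score = sum(w in POSITIVE for w in tokens) - sum(w in NEGATIVE for w in tokens)
    sentiment = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    products = sorted({w for w in tokens if w in PRODUCTS})
    return sentiment, products

print(analyze("I'm frustrated, my WidgetPro arrived broken!"))
```

Even this toy version shows the payoff: you learn not just that a customer is upset, but which product keeps coming up in upset messages.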

Classifying Different Kinds of Communication

For various reasons, contact centers keep transcripts and recordings of all the interactions they have with a customer. This means that they have access to a vast amount of textual information, but since it’s unstructured and messy it’s hard to know what to do with it.

Using any of several different ML-based classification techniques, a contact center manager could begin to tame this complexity. Suppose, for example, she wanted to have a high-level overview of why people are reaching out for support. With a good classification pipeline, she could start automating the process of sorting communications into different categories, like “help logging in” or “canceling a subscription”.

With enough of this kind of information, she could start to spot trends and make decisions on that basis.
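A keyword-based stand-in for such a sorting pipeline might look like this. The categories and rules are made up; a production system would more likely use a trained classifier or an LLM prompt:

```python
# Invented categories and keyword rules for illustration.
CATEGORIES = {
    "help logging in": ["log in", "login", "password", "locked out"],
    "canceling a subscription": ["cancel", "unsubscribe", "stop billing"],
}

def categorize(message):
    text = message.lower()
    for category, keywords in CATEGORIES.items():
        if any(kw in text for kw in keywords):
            return category
    return "other"

tickets = [
    "I forgot my password and can't log in",
    "Please cancel my subscription",
    "Where can I download an invoice?",
]

# Tally tickets per category to get the high-level overview.
counts = {}
for t in tickets:
    category = categorize(t)
    counts[category] = counts.get(category, 0) + 1
print(counts)
```

The resulting counts are exactly the kind of summary a manager can watch over time to spot trends.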

Statistical Analysis and A/B Testing

Finally, we’ll turn to statistical analysis. Above, we talked a lot about natural language processing and similar endeavors, but more than likely when people say “customer insights” they mean something like “statistical analysis”.

This is a huge field, so we’re going to illustrate its importance with an example focusing on churn. If you have a subscription-based business, you’ll have some customers who eventually leave, and this is known as “churn”. Churn analysis has sprung up to apply data science to understanding these customer decisions, in the hopes that you can resolve any underlying issues and positively impact the bottom line.

What kinds of questions would be addressed by churn analysis? Things like what kinds of customers are canceling (i.e. are they young or old, do they belong to a particular demographic, etc.), figuring out their reasons for doing so, using that to predict which similar customers might be in danger of churning soon, and thinking analytically about how to reduce churn.
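One small slice of churn analysis, churn rate broken down by customer segment, can be sketched like this, using made-up subscription records:

```python
# Made-up subscription records for illustration.
customers = [
    {"segment": "18-25", "churned": True},
    {"segment": "18-25", "churned": True},
    {"segment": "18-25", "churned": False},
    {"segment": "40-60", "churned": False},
    {"segment": "40-60", "churned": True},
    {"segment": "40-60", "churned": False},
]

def churn_by_segment(records):
    # Count totals and churners per segment, then divide.
    totals, churned = {}, {}
    for r in records:
        seg = r["segment"]
        totals[seg] = totals.get(seg, 0) + 1
        churned[seg] = churned.get(seg, 0) + r["churned"]
    return {seg: churned[seg] / totals[seg] for seg in totals}

print(churn_by_segment(customers))
```

A lopsided breakdown like this is the starting point for asking why one segment is leaving faster than another.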

And how does AI help? There now exist any number of AI tools that substantially automate the process of gathering and cleaning the relevant data, applying standard tests, making simple charts, and making your job of extracting customer insights much easier.

What AI Tools Can Be Used for Customer Insights?

By now you’re probably eager to try using AI for customer insights, but before you do that, let’s spend some time talking about what you’d look for in a customer insights tool.

Performant and Reliable

Ideally, you want something that you can depend upon and that won’t drive you crazy with performance issues. A good customer insights tool will have many optimizations under the hood that make crunching numbers easy, and shouldn’t require you to have a computer science degree to set up.

Straightforward Integration Process

Modern contact centers work across a wide variety of channels, including emails, chat, social media, phone calls, and even more. Whatever AI-powered customer insights platform you go with should be able to seamlessly integrate with all of them.

Simple to Use

Finally, your preferred solution should be relatively easy to use. Quiq Insights, for example, makes it a breeze to create customizable funnels, do advanced filtering, see the surrounding context for different conversations, and much more.

Getting the Most Out of AI-Powered Customer Insights

Data is extremely important to the success or failure of modern businesses, and it’s getting more important all the time. Contact centers have long been forward-looking and eager to adopt new technologies, and the same must be true in our brave new data-powered world.

If you’d like a demo of Quiq Insights, reach out to see how we can help you streamline your operation while boosting customer satisfaction!

Request A Demo

Security and Compliance in Next-Gen Contact Centers

Along with almost everyone else, we’ve been singing the praises of large language models like ChatGPT for a while now. We’ve noted how they can be used in retail, how they’re already supercharging contact center agents, and have even put out some content on how researchers are pushing the frontiers of what this technology is capable of.

But none of this is to say that generative AI doesn’t come with serious concerns for security and compliance. In this article, we’ll do a deep dive into these issues. We’ll first provide some context on how advanced AI is being deployed in contact centers, before turning our attention to subjects like data leaks, lack of transparency, and overreliance. Finally, we’ll close with a treatment of the best practices contact center managers can use to alleviate these problems.

What is a “Next-Gen” Contact Center?

First, what are some ways in which a next-generation contact center might actually be using AI? Understanding this will be valuable background for the rest of the discussion about security and compliance, because knowing what generative AI is doing is a crucial first step in protecting ourselves from its potential downsides.

Businesses like contact centers tend to engage in a lot of textual communication, such as when resolving customer issues or responding to inquiries. Due to their proficiency in understanding and generating natural language, LLMs are an obvious tool to reach for when trying to automate or streamline these tasks; for this reason, they have become increasingly popular in enhancing productivity within contact centers.

To give specific examples, there are several key areas where contact center managers can effectively utilize LLMs:

Responding to Customer Queries – High-quality documentation is crucial, yet there will always be customers needing assistance with specific problems. While LLMs like ChatGPT may not have all the answers, they can address many common inquiries, particularly when they’ve been fine-tuned on your company’s documentation.

Facilitating New Employee Training – Similarly, a language model can significantly streamline the onboarding process for new staff members. As they familiarize themselves with your technology and procedures, they may run into points of confusion where AI can provide quick and relevant answers.

Condensing Information – While it may be possible to keep abreast of everyone’s activities on a small team, this becomes much more challenging as the team grows. Generative AI can assist by summarizing emails, articles, support tickets, or Slack threads, allowing team members to stay informed without spending every moment of the day reading.

Sorting and Prioritizing Issues – Not all customer inquiries or issues carry the same level of urgency or importance. Efficiently categorizing and prioritizing these for contact center agents is another area where a language model can be highly beneficial. This is especially so when it’s integrated into a broader machine-learning framework, such as one that’s designed to adroitly handle classification tasks.

Language Translation – If your business has a global reach, you’re eventually going to encounter non-English-speaking users. While tools like Google Translate are effective, a well-trained language model such as ChatGPT can often provide superior translation services, enhancing communication with a diverse customer base.

What are the Security and Compliance Concerns for AI?

The preceding section provided valuable context on the ways generative AI is powering the future of contact centers. With that in mind, let’s turn to a specific treatment of the security and compliance concerns this technology brings with it.

Data Leaks and PII

First, it’s no secret that language models are trained on truly enormous amounts of data. With that comes a growing worry about potentially exposing “Personally Identifiable Information” (PII) to generative AI models. PII encompasses details like your name and residential address, as well as sensitive information like health records. It’s important to note that even if these records don’t directly mention your name, they could still be used to deduce your identity.

While our understanding of the exact data seen by language models during their training remains incomplete, it’s reasonable to assume they’ve encountered some sensitive data, considering how much of that kind of data exists on the internet. What’s more, even if a specific piece of PII hasn’t been directly shown to an LLM, there are numerous ways it might still come across such data. Someone might input customer data into an LLM to generate customized content, for instance, not recognizing that the model often permanently integrates this information into its framework.

Currently, there’s no effective method for removing data from a trained language model, and no fine-tuning technique has yet been found that guarantees the model will never reveal that data again.

Over-Reliance on Models

Are you familiar with the term “ultracrepidarianism”? It’s a fancy SAT word for the habit of confidently giving advice or expressing opinions on matters outside one’s expertise.

A similar sort of situation can arise when people rely too much on language models, or use them for tasks that they’re not well-suited for. These models, for example, are well known to hallucinate (i.e. completely invent plausible-sounding information that is false). If you were to ask ChatGPT for a list of 10 scientific publications related to a particular scientific discipline, you could well end up with nine real papers and one that’s fabricated outright.

From a compliance and security perspective, this matters because you should have qualified humans fact-checking a model’s output – especially if it’s technical or scientific.

To concretize this a bit, imagine you’ve finetuned a model on your technical documentation and used it to produce a series of steps that a customer can use to debug your software. This is precisely the sort of thing that should be fact-checked by one of your agents before being sent.

Not Enough Transparency

Large language models are essentially gigantic statistical artifacts that result from feeding an algorithm huge amounts of textual data and having it learn to predict how sentences will end based on the words that came before.

The good news is that this works much better than most of us thought it would. The bad news is that the resulting structure is almost completely inscrutable. While a machine learning engineer might be able to give you a high-level explanation of how the training process works or how a language model generates an output, no one in the world really has a good handle on the details of what these models are doing on the inside. That’s why there’s so much effort being poured into various approaches to interpretability and explainability.

As AI has become more ubiquitous, numerous industries have drawn fire for their reliance on technologies they simply don’t understand. It’s not a good look if a bank loan officer can only shrug and say “The machine told me to” when asked why one loan applicant was approved and another wasn’t.

Depending on exactly how you’re using generative AI, this may not be a huge concern for you. But it’s worth knowing that if you are using language models to make recommendations or as part of a decision process, someone, somewhere may eventually ask you to explain what’s going on. And it’d be wise for you to have an answer ready beforehand.

Compliance Standards Contact Center Managers Should be Familiar With

To wrap this section up, we’ll briefly cover some of the more common compliance standards that might impact how you run your contact center. This material is only a sketch and should not be taken as a comprehensive breakdown.

The General Data Protection Regulation (GDPR) – The famous GDPR is a set of regulations issued by the European Union that establishes guidelines for how personal data must be handled. It applies to any business that processes the data of people in the EU, not just to companies physically located on the European continent.

The California Consumer Privacy Act (CCPA) – In a bid to give individuals more sovereignty over what happens to their personal data, California created the CCPA. It stipulates that companies must be clearer about how they gather data, include privacy disclosures, and give Californians the choice to opt out of the sale of their personal data.

SOC 2 – SOC 2 is a set of standards created by the American Institute of Certified Public Accountants (AICPA) that stresses confidentiality, privacy, and security in how consumer data is handled and processed.

Consumer Duty – Contact centers operating in the U.K. should know about The Financial Conduct Authority’s new “Consumer Duty” regulations. The regulations’ key themes are that firms must act in good faith when dealing with customers, prevent any foreseeable harm to them, and do whatever they can to further the customer’s pursuit of their own financial goals. Lawmakers are still figuring out how generative AI will fit into this framework, but it’s something affected parties need to monitor.

Best Practices for Security and Compliance when Using AI

Now that we’ve discussed the myriad security and compliance concerns facing contact centers that use generative AI, we’ll close by offering advice on how you can deploy this amazing technology without running afoul of rules and regulations.

Have Consistent Policies Around Using AI

First, you should have a clear and robust framework that addresses who can use generative AI, under what circumstances, and for what purposes. This way, your agents know the rules, and your contact center managers know what they need to monitor and look out for.

As part of crafting this framework, you must carefully study the rules and regulations that apply to you and ensure your procedures reflect them.

Train Your Employees to Use AI Responsibly

Generative AI might seem like magic, but it’s not. It doesn’t function on its own; it has to be steered by a human being. And since it’s so new, you can’t treat it like something everyone already knows how to use, like a keyboard or Microsoft Word. Your employees should understand the policy you’ve created around AI use, know which situations require human fact-checking, and be aware of the basic failure modes, such as hallucination.

Be Sure to Encrypt Your Data

If you’re worried about PII and data leaks, a straightforward safeguard is to encrypt or anonymize your data before you even roll out a generative AI tool. If you anonymize data correctly, there’s little risk that a model will accidentally disclose something it shouldn’t down the line.
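To make the anonymization idea a bit more concrete, here’s a minimal sketch of a pre-processing step that masks common PII patterns before any text reaches a model. This is illustrative only – the `redact_pii` helper and the regex patterns are our own assumptions for this example, and real PII detection (names, addresses, health data) requires far more coverage than a few regexes can provide:

```python
import re

# Illustrative patterns only; production PII detection needs much broader
# coverage and is usually handled by a dedicated redaction tool.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace matched PII with placeholder tokens before the text is sent to an LLM."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

message = "Hi, I'm Jane (jane.doe@example.com, 406-555-0123), my order is late."
print(redact_pii(message))
# → Hi, I'm Jane ([EMAIL], [PHONE]), my order is late.
```

Note that “Jane” survives redaction – names and free-text identifiers are exactly the kind of PII that simple pattern matching misses, which is why this is a sketch and not a solution.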

Roll Your Own Model (Or Use a Vendor You Trust)

The best way to ensure total control over the model pipeline – including the data it’s trained on and how it’s finetuned – is simply to build your own. That said, many teams will not be able to afford the kinds of engineers who are equal to this task. In that case, you should use a model built by a third party with a sterling reputation and many examples of prior success, like the Quiq platform.

Engage in Regular Auditing

As we mentioned earlier, AI isn’t magic – it can sometimes perform in unexpected ways, and its performance can also simply degrade over time. You need to establish a practice of regularly auditing any models you have in production to make sure they’re still behaving appropriately. If they’re not, you may need to do another training run, examine the data they’re being fed, or try to finetune them.
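To make “regular auditing” slightly more concrete, here’s a minimal sketch of a scheduled regression check. The `toy_model` stand-in, the test cases, and the 0.9 threshold are all invented for this example – in practice the callable would wrap your production model and the evaluation set would be curated by your team:

```python
def audit(model, test_cases, threshold=0.9):
    """Run a fixed evaluation set through the model and flag degraded accuracy.

    model: callable taking a question string and returning an answer string.
    test_cases: list of (question, expected_substring) pairs with known-good answers.
    Returns (accuracy, passed) so the result can be logged and alerted on.
    """
    correct = sum(
        1 for question, expected in test_cases
        if expected.lower() in model(question).lower()
    )
    accuracy = correct / len(test_cases)
    return accuracy, accuracy >= threshold

# Stand-in for a production model; replace with a real API call.
def toy_model(question):
    canned = {"What is your refund window?": "You have 30 days to request a refund."}
    return canned.get(question, "I'm not sure.")

cases = [
    ("What is your refund window?", "30 days"),
    ("Do you ship to Canada?", "yes"),
]
accuracy, passed = audit(toy_model, cases, threshold=0.9)
print(accuracy, passed)  # → 0.5 False – time to investigate before customers notice
```

Running a check like this on a schedule gives you an early-warning signal that a model’s behavior has drifted, rather than finding out from a customer complaint.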

Futureproofing Your Contact Center Security

The next generation of contact centers is almost certainly going to be one that makes heavy use of generative AI. There are just too many advantages, from lower average handling time to reduced burnout and turnover, to forgo it.

But doing this correctly is a major task, and if you want to skip the engineering hassle and headache, give the Quiq conversational AI platform a try! We have the expertise required to help you integrate a robust, powerful generative AI tool into your contact center, without the need to write a hundred thousand lines of code.

Request A Demo

LLM-Powered AI Assistants for Hotels – Ultimate Guide

New technologies have always been disruptive, supercharging the firms that embrace them and requiring the rest to adapt or be left behind.

With the rise of new approaches to AI, such as large language models, we can see this dynamic playing out again. One place where AI assistants could have a major impact is in the hotel industry.

In this piece, we’ll explore the various ways AI assistants can be used in hotels, and what that means for the hoteliers that keep these establishments running.

Let’s get going!

What is an AI Assistant?

The term “AI assistant” refers to any use of an algorithmic or machine-learning system to automate a part of your workflow. A relatively simple example would be the autocomplete found in almost all text-editing software today, while a much more complex example might involve stringing together multiple chain-of-thought prompts into an agent capable of managing your calendar.

There are a few major types of AI assistants. Near and dear to our hearts, of course, are chatbots that function in places like contact centers. These can be agent-facing or customer-facing, and can help with answering common questions, helping draft replies to customer inquiries, and automatically translating between many different natural languages.

Chatbots (and large language models more generally) can also be augmented to produce speech, giving rise to so-called “voice assistants”. These tend to work like other kinds of chatbots but have the added ability to actually vocalize their text, creating a much more authentic customer experience.

In a famous 2018 demo, Google Duplex was able to complete a phone call to a local hair salon to make a reservation. One remarkable thing about the AI assistant was how human it sounded – its speech even included “uh”s and “mm-hmm”s that made it almost indistinguishable from an actual person, at least over the phone and for short interactions.

Then, there are 3D avatars. These digital entities are crafted to look as human as possible, and are perfect for basic presentations, websites, games, and similar applications. Graphics technology has gotten astonishingly good over the past few decades and, in conjunction with the emergence of technologies like virtual reality and the metaverse, means that 3D avatars could play a major role in the contact centers of the future.

One thing to think about if you’re considering using AI assistants in a hotel or hospitality service is how specialized you want them to be. Although there is a significant effort underway to build general-purpose assistants that are able to do most of what a human assistant does, it remains true that your agents will do better if they’re fine-tuned on a particular domain. For the time being, you may want to focus on building an AI assistant that is targeted at providing excellent email replies, for example, or answering detailed questions about your product or service.

That being said, we recommend you check the Quiq blog often for updates on AI assistants; when there’s a breakthrough, we’ll deliver actionable news as soon as possible.

How Will AI Assistants Change Hotels?

Though the audience we speak to largely comprises people working in or managing contact centers, the truth is that there is a great deal of overlap with the hospitality space. Since both are customer-service, customer-oriented domains, insights around AI assistants almost always transfer over.

With that in mind, let’s dive in now to talk about how AI is poised to transform the way hotels function!

AI for Hotel Operations

Like most jobs, operating a hotel involves many tasks that require innovative thinking and improvisation, and many others that are repetitive, rote, and quotidian. Booking a guest, checking them in, making small changes to their itinerary, and so forth are in the latter category, and are precisely the sorts of things that AI assistants can help with.

In an earlier example, we saw that chatbots were already able to handle appointment booking five years ago, so it requires no great leap in imagination to see how slightly more powerful systems would be able to do this on a grander scale. If it soon becomes possible to offload much of the day-to-day of getting guests into their rooms to the machines, that will free up a great deal of human time and attention to go towards more valuable work.

It’s possible, of course, that this will lead to a dramatic reduction in the workforce needed to keep hotels running, but so far, the evidence points the other way; when large language models have been used in contact centers, the result has been more productivity (especially among junior agents), less burnout, and reduced turnover. We can’t say definitively that this will apply in hotel operations, but we also don’t see any reason to think that it wouldn’t.

AI for Managing Hotel Revenues

Another place that AI assistants can change hotels is in forecasting and boosting revenues. We think this will function mainly by making it possible to do far more fine-grained analyses of consumption patterns, inventory needs, etc.

Everyone knows that there are particular times of the year when vacation bookings surge, and others in which there are a relatively small number of bookings. But with the power of big data and sophisticated AI assistants, analysts will be able to do a much better job of predicting surges and declines. This means prices for rooms or other accommodations will be more fluid and dynamic, changing in near real-time in response to changes in demand and the personal preferences of guests. The ultimate effect will be an increase in revenue for hotels.
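As a toy illustration of how a forecast can feed into dynamic pricing, here’s a minimal sketch. All of the figures, the moving-average window, and the 50% premium factor are invented for this example – real revenue-management systems are far more sophisticated than this:

```python
def forecast_demand(bookings, window=3):
    """Naive forecast: average bookings over the last `window` periods."""
    return sum(bookings[-window:]) / window

def dynamic_price(base_price, forecast, capacity):
    """Nudge the room rate upward as forecast occupancy approaches capacity."""
    occupancy = min(forecast / capacity, 1.0)
    # Up to a 50% premium at full forecast occupancy; the factor is arbitrary.
    return round(base_price * (1 + 0.5 * occupancy), 2)

recent_bookings = [70, 85, 94]  # rooms booked over the last three periods
forecast = forecast_demand(recent_bookings)  # → 83.0
price = dynamic_price(base_price=120.0, forecast=forecast, capacity=100)
print(forecast, price)  # → 83.0 169.8
```

The point is the shape of the loop – observe demand, forecast it, adjust prices in near real time – not the specific model, which in practice would account for seasonality, lead time, competitor rates, and guest preferences.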

AI in Marketing and Customer Service

A similar line of argument holds for using AI assistants in marketing and customer service. Just as both hotels and guests are better served when we can build models that allow for predicting future bookings, everyone is better served when it becomes possible to create more bespoke, targeted marketing.

By utilizing data sources like past vacations, Google searches, and browser history, AI assistants will be able to meet potential clients where they are, offering packages tailored to exactly what they want and need. This will mean not only increased revenue for the hotel, but far more satisfaction for customers (who, after all, might receive an offer they themselves didn’t realize they were looking for).

If we were trying to find a common theme between this section and the last one, we might settle on “flexibility”. AI assistants will make it possible to flexibly adjust prices (raising them during peak demand and lowering them when bookings level off), flexibly tailor advertising to serve different kinds of customers, and flexibly respond to complaints, changes, etc.

Smart Buildings in Hotels

One particularly exciting area of research in AI centers around so-called “smart buildings”. By now, most of us have seen relatively “smart” thermostats that are able to learn your daily patterns and do things like turn the temperature up when you leave to save on the cooling bill while turning it down to your preferred setting as you’re heading home from work.

These are certainly worthwhile, but they barely even scratch the surface of what will be possible in the future. Imagine a room where every device is part of an internet-of-things, all wired up over a network to communicate with each other and gather data about how to serve your needs.

Your refrigerator would know when you’re running low on a few essentials and automatically place an order, a smart stove might be able to take verbal commands (“cook this chicken to 180 degrees, then power down and wait”) to make sure dinner is ready on time, a smoothie machine might be able to take in data about your blood glucose levels and make you a pre-workout drink specifically tailored to your requirements on that day, and so on.

Pretty much all of this would carry over to the hotel industry as well. As is usually the case, there are real privacy concerns here, but assuming those challenges can be met, hotel guests may one day enjoy a level of service that is simply not possible with a staff composed only of human beings.

Virtual Tours and Guest Experience

Earlier, we mentioned virtual reality in the context of 3D avatars that will enhance customer experience, but it can also be used to provide virtual tours. We’re already seeing applications of this technology in places like real estate, but there’s no reason at all that it couldn’t also be used to entice potential guests to visit different vacation spots.

When combined with flexible and intelligent AI assistants, this too could boost hotel revenues and better meet customer needs.

Using AI Assistants in Hotels

As part of the service industry, hoteliers work constantly to best meet their customers’ needs and, for this reason, they would do well to keep an eye on emerging technologies. Though many advances will have little to do with their core mission, others, like those related to AI assistants, will absolutely help them forecast future demands, provide personalized service, and automate routine parts of their daily jobs.

If all of this sounds fascinating to you, consider checking out the Quiq conversational CX platform. Our sophisticated offering utilizes large language models to help with tasks like question answering, following up with customers, and perfecting your marketing.

Schedule a demo with us to see how we can bring your hotel into the future!

Request A Demo