Heads up! Your AI Agent Will Probably Trip Over at Least One of These Five Pitfalls

Who could forget the beloved search engine butler, Jeeves? Launched in 1997, Ask Jeeves was considered cutting edge and different from other search engines because it allowed users to ask questions and receive answers in natural, conversational language — much like the goal of today’s AI agents.

Unfortunately, Jeeves just couldn’t keep up with rapidly evolving search technology and the likes of Google and Yahoo. While the site is still in operation, it’s no longer a major player in the search engine market. Instead, it functions primarily as a question-and-answer site and web portal, combining search results from other engines with its own Q&A content.

Like Jeeves, which once boasted upwards of a million daily searches in 1999, AI agents have become very popular, very fast. So much so that 80% of companies worldwide now feature AI-powered chat on their websites. And unless companies want their AI agents to experience the same fate as poor Jeeves, it’s critical they take proactive measures to avoid the obstacles and shortcomings that arise as AI continues to advance and customer expectations evolve.

In this blog post, we’ll cover:

  • How AI agents differ from traditional chatbots
  • Five challenges your AI agent is likely to face
  • Tips to help you navigate these roadblocks
  • Resources so you can dive in and learn more

What Is an AI Agent?

Before we dig into our five AI agent pitfalls, it’s critical to understand what an AI agent is and how it’s different from the first-generation AI chatbots we’re all familiar with.

To put it simply and in the context of CX, an AI agent harnesses the reasoning and communication power of Large Language Models, or LLMs, to understand the meaning and context of a user’s inquiry and to generate an accurate, personalized, and on-brand response. AI agents can also interact with customers in a variety of other ways (more on that in a minute).

In contrast, an AI chatbot is rules-based and uses Natural Language Processing, or NLP, to try to match the intent behind a user’s inquiry to a single question and a specific, predefined answer. While some AI chatbots may use an LLM to generate a response from a knowledge base, these answers are often insufficient or irrelevant, because they still rely on the same outdated, intent-based process to determine the user’s request in the first place.

In other words, your customer experience is already behind the times if your company uses an AI chatbot rather than an AI agent! For more information about this distinction and how AI chatbots negatively impact your customer journey, check out this article.

Chatbot vs. AI Agent

AI Agent vs. AI Assistant

Another term you’re likely familiar with is “AI assistant.” AI agents offer information and services directly to customers to improve their experiences, and are also used to educate employees so they can deliver better customer service. AI assistants, meanwhile, augment human agents’ intelligence to eliminate busy work and accelerate response times. A tool that automatically corrects a human agent’s grammar and spelling before they reply to a customer is an example of an AI assistant.

AI Agent Pitfall #1: It Doesn’t Leverage Agentic AI

Because AI is advancing so rapidly, it’s easy to get confused by the latest terms and capabilities, especially when the vision and goal of each generation has been largely the same. But with the rise of agentic AI, it appears the technology is finally delivering on its ultimate promise.

While “AI agent” and “agentic AI” sound similar, they are not the same and cannot be used interchangeably. As we discussed in the previous section of this post, AI agents harness the latest and greatest AI advancements — LLMs, GenAI, and agentic AI — to do a specific job, like interacting with customers across voice, email, and digital messaging channels.

LLMs offer language understanding and generation functionality, which GenAI models can use to craft contextually-relevant, human-like content or responses. Agentic AI can use both LLMs and GenAI to reason, make decisions, and take actions to proactively achieve specific goals. It helps to think of these three types of AI as matryoshka or nesting dolls, with LLMs being the smallest doll and agentic AI being the largest.

Understanding Agentic AI

Here at Quiq, we define agentic AI as a type of AI designed to exhibit autonomous reasoning, goal-directed behavior, and a sense of self or agency, rather than simply following pre-programmed instructions or reacting to external stimuli. Agentic AI systems may interact with humans in a way that is similar to human-human interaction, such as through natural language processing or other forms of communication.

An example of agentic AI might be an advanced personal assistant that not only responds to requests, but also proactively manages your schedule. Imagine an AI agent that notices you’re running low on groceries, checks your calendar for free time, considers your dietary preferences and budget, creates a shopping list, and schedules a delivery — all without explicit instructions.

It might even adjust these plans if it notices you’ve been ordering healthier foods lately or if your schedule suddenly changes. This kind of autonomous, goal-oriented behavior with genuine understanding and adaptation is what sets agentic AI apart from other AI systems.

Listen in as four industry luminaries discuss what agentic AI is, how it works, and what sets it apart from other AI systems.

[Watch the Webinar]

AI Agent Pitfall #2: It’s Siloed from Your CX Stack

Imagine if a sales team couldn’t see customers’ purchase history — how much harder would it be for them to up-sell? Or if the only shipping detail customer service could access was the estimated delivery date — how would they help customers track their orders? Well, just like a human agent, an AI agent is only as effective as the data it has access to.

From CRM to marketing automation platforms to help desk software, customer-facing teams use many technologies to manage client engagements and information. Today, most of these tools can pass information back and forth to help humans avoid these issues and provide exceptional customer experiences. However, many AI agents remain separate from the rest of the technology stack.

This renders them unable to provide customers with anything other than general information and basic company policies that can be found on the company’s website or knowledge base. Customers enter AI agent conversations expecting personalized assistance and to get things done, so receiving the same general information already available via helpdesk articles adds little value and leaves them disappointed and frustrated. These interactions must be passed to human agents, defeating the purpose of employing an AI agent in the first place.

Shatter This AI Agent Silo

Bridging this gap and ensuring your AI agent can provide the level of personalization modern consumers expect requires connecting it to the tools already in your tech stack. Even if your AI for CX vendor provides robust out-of-the-box integrations, it should still offer the customizations you need to ensure your AI agent fits seamlessly into your existing ecosystem and has access to the same information sources as your human agents.

It’s also important that these integrations are bi-directional, or that the AI agent can also pass any actions taken, newly collected data, or updated customer information back to the appropriate system(s). This helps prevent the creation of any new silos, especially between pre- and post-sales teams.

Last but not least, bake these integrations into the business logic and conversational architecture that guides your AI agents’ interactions. This gives them the power to automatically inform customer interactions with additional personal attributes accessed from other CX systems, such as a person’s member status or most recent order, without having to explicitly ask, which drives efficiencies and accelerates resolutions.
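
To make this concrete, here’s a minimal sketch of what baking a CRM lookup into an agent’s conversational logic might look like. The `CrmClient` and `ConversationContext` names are hypothetical placeholders for illustration, not Quiq’s actual API.

```python
# Minimal sketch (not a real vendor API): enriching an AI agent's context
# with CRM attributes before the conversation logic runs.
from dataclasses import dataclass, field


@dataclass
class ConversationContext:
    customer_id: str
    attributes: dict = field(default_factory=dict)


class CrmClient:
    """Stand-in for a real CRM integration (e.g., a REST client)."""

    def get_customer(self, customer_id: str) -> dict:
        # In practice this would call the CRM's API.
        return {"member_status": "gold", "last_order_id": "A-1042"}


def enrich_context(ctx: ConversationContext, crm: CrmClient) -> ConversationContext:
    """Pull attributes from the CRM so the agent never has to ask for them."""
    record = crm.get_customer(ctx.customer_id)
    ctx.attributes.update(
        member_status=record.get("member_status"),
        last_order_id=record.get("last_order_id"),
    )
    return ctx


if __name__ == "__main__":
    ctx = enrich_context(ConversationContext(customer_id="cust-123"), CrmClient())
    print(ctx.attributes)  # {'member_status': 'gold', 'last_order_id': 'A-1042'}
```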

Learn more about this and three other major silos hurting your customers, agents, and business, plus tips for how to shatter them with agentic AI.

[Get the Guide]

AI Agent Pitfall #3: It Doesn’t Work Across Channels

Just over 70% of customers prefer to interact with companies over multiple channels, with consumers using an average of eight channels and business buyers using an average of ten. These include email, voice, web chat, mobile app, WhatsApp, Apple Messaging for Business, Facebook, and more.

This alone presents a major hurdle for IT teams that want to build versus buy AI agents. But modern consumers want more than just the ability to interact with companies using their channel of choice. They also want to engage using more than one of these channels in a single conversation — maybe even simultaneously — or over the course of multiple interactions, without having to reestablish context or repeat themselves.

Unfortunately, many AI for CX vendors still fail to support these types of multimodal and omnichannel interactions. What’s more, the capabilities they support for each channel are also limited. For example, while a chatbot may work on Instagram Direct Messaging for Business, it may not support rich messaging functionality like buttons, carousel cards, or videos. This prevents companies from meeting customers where they are and offering them the best experiences possible, even one channel at a time.

Types of channels

How Top Brands Avoid This Roadblock

A leading US-based airline saw a large percentage of its customers call to reschedule their flights versus using other channels. While cancelling their current flight was fairly straightforward, the company’s existing IVR system made it cumbersome to select a new one. Customers had to navigate through multiple menus and listen to long lists of flight options, often multiple times.

The airline decided to shift to a next-generation agentic AI solution that enabled them to easily build and manage AI agents across channels using a single platform. Their new Voice AI Agent can now automatically understand when a customer is trying to reschedule a flight, and offer them the ability to review their options and select a new flight via text. This multimodal approach provides customers with a much more seamless experience.

See how Quiq delivers seamless customer journeys across voice, email, and messaging channels.

[Watch the Video]

AI Agent Pitfall #4: It Hallucinates

AI hallucinations are often thought of as outlandish or incorrect answers, but there’s a lesser-known yet more common type of hallucination that happens when an AI agent provides accurate information that doesn’t effectively answer the user’s question. These misleading responses can be more problematic than obvious errors because they are trickier to identify — and prevent.

For example, imagine a customer asks an AI agent for help with their TV. The agent might provide perfectly valid troubleshooting steps, but for a completely different TV model than the one the customer owns. So while the information itself is technically correct, it’s irrelevant to the customer’s specific situation because the AI failed to understand the proper context.

The most thorough and reliable way to define a hallucination is as a breach of the Cooperative Principle of Conversation. Philosopher H. Paul Grice introduced this principle in 1975, along with four maxims that he believed must be followed to have a meaningful and productive conversation. Anytime an AI agent’s response fails to observe any of these four maxims, it can be classified as a hallucination:

  1. Quantity: Say no more and no less than is necessary or required.
  2. Quality: Don’t make any assumptions or unsupported claims.
  3. Manner: Communicate clearly, be brief, and stay organized.
  4. Relevance: Keep comments closely related to the topic at hand.

Protect Your Brand From Hallucinations

Preventing these hallucinations is more than a technical task. It requires sophisticated business logic that guides the flow of the conversation, much like how human agents are trained to follow specific questioning protocols.

After a user asks a question, a series of “pre-generation checks” happen in the background, requiring the LLM to answer “questions about the question.” For example, is the user asking about a particular product or service? Is their question inappropriate or sensitive in nature?

From there, a process known as Retrieval Augmented Generation (RAG) ensures that the LLM can only generate a response using information from pre-approved, trusted sources — not its general training data. Last but not least, before sending the response to the customer, the LLM runs it through a series of “post-generation checks,” or “questions about the answer,” to verify that it’s in context, on brand, accurate, etc.
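
To illustrate the flow described above, here’s a simplified sketch of a guardrail pipeline. The helper functions are illustrative stubs, not a real implementation; in a production system, each step would be backed by an LLM call and a vetted knowledge base.

```python
# Illustrative sketch of the guardrail flow: pre-generation checks, RAG
# retrieval from trusted sources, and post-generation checks. All helpers
# below are stubs standing in for LLM calls and a real knowledge base.

def pre_generation_checks(question: str) -> bool:
    """'Questions about the question': is it in scope and appropriate?"""
    in_scope = "tv" in question.lower()                # stub: topic classifier
    appropriate = "password" not in question.lower()   # stub: sensitivity filter
    return in_scope and appropriate


def retrieve_trusted_context(question: str) -> list[str]:
    """RAG step: fetch passages only from pre-approved sources."""
    return ["To reset the Model X remote, hold the pairing button for 5 seconds."]


def generate_answer(question: str, context: list[str]) -> str:
    """Stub for the LLM call, constrained to the retrieved context."""
    return f"Based on our documentation: {context[0]}"


def post_generation_checks(question: str, answer: str, context: list[str]) -> bool:
    """'Questions about the answer': is it grounded in the retrieved context?"""
    return any(passage in answer for passage in context)


def answer_customer(question: str) -> str:
    if not pre_generation_checks(question):
        return "Let me connect you with a human agent."
    context = retrieve_trusted_context(question)
    answer = generate_answer(question, context)
    if not post_generation_checks(question, answer, context):
        return "Let me connect you with a human agent."
    return answer


print(answer_customer("How do I pair my TV remote?"))
```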


Learn about the three types of AI hallucinations, how they manifest themselves in your customer experience, and the best ways to help your AI agent avoid them.

[Watch the Full Video]

AI Agent Pitfall #5: It Doesn’t Measure the Right Things

Nearly 70% of customers say they would refuse to use a company’s AI-powered chat again after a single bad experience. This makes identifying and remedying knowledge gaps and points of friction critical for maximizing a brand’s self-service investment, reputation, and customer loyalty.

Yet gaining insight into anything outside of containment and agent escalations is tedious and imprecise, even for something as high-level as contact drivers. At worst, it requires CX leaders to manually parse through and tag each individual conversation transcript. At best, they must make sense of out-of-the-box reports or .csv exports that feature hundreds of closely related, pre-defined intents.

What’s more, certain events like hotel bookings or abandoned shopping carts are tracked and managed in other systems. Getting an end-to-end view of path to conversion or resolution often requires combining data from these other tools and your AI for CX platform, which is typically impossible without robust, bi-directional integrations or data scientist intervention.

What To Do About It

Since next-generation AI agents harness LLMs for more than just answer generation, they can also leverage their reasoning power for reporting purposes. Remember the pre- and post-generation checks these LLMs run in the background to determine whether users’ questions are in scope and whether sufficient evidence backs their answers? These same prompts can be used to understand conversation topics and identify top contact drivers, which are then seamlessly rolled up into reports.

It’s also possible to build funnels, or series of actions intended to lead to a specific outcome — even if some of these events happen in other tools. For example, picture the steps that should ideally occur when an AI agent recommends a product to a customer. A product recommendation funnel would enable the team to see what percentage of customers provide the AI agent with their budget and other relevant information, click on the agent’s recommendation, and ultimately check out.
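
As a rough illustration, here’s a small sketch of how such a funnel report might be computed from raw conversation events. The event names and structure are hypothetical, not a real reporting schema.

```python
# Hedged sketch of a product-recommendation funnel computed from raw events.

FUNNEL_STEPS = [
    "provided_budget",
    "received_recommendation",
    "clicked_recommendation",
    "checked_out",
]

# Each event: (conversation_id, step)
events = [
    ("c1", "provided_budget"), ("c1", "received_recommendation"),
    ("c1", "clicked_recommendation"), ("c1", "checked_out"),
    ("c2", "provided_budget"), ("c2", "received_recommendation"),
    ("c3", "provided_budget"),
]


def funnel_report(events, steps):
    """Count how many conversations reach each step, expressed as a percentage."""
    reached = {step: set() for step in steps}
    for conversation_id, step in events:
        if step in reached:
            reached[step].add(conversation_id)
    total = len(reached[steps[0]]) or 1
    return {step: f"{len(reached[step]) / total:.0%}" for step in steps}


print(funnel_report(events, FUNNEL_STEPS))
# {'provided_budget': '100%', 'received_recommendation': '67%',
#  'clicked_recommendation': '33%', 'checked_out': '33%'}
```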

The ability to easily see where customers are dropping off or escalating to a human agent gives CX teams actionable insight into which areas of the customer journey need to be fine-tuned. From there, they can click into individual conversation transcripts at each stage for further detail. For example, do customers have questions during the checkout process? Is there insufficient knowledge regarding returns or exchanges? Custom, bi-directional integrations with other CX tools also make it possible to pass the steps happening in the AI for CX platform back to a CRM or web analytics platform, for example, for additional analysis.

Uncover all the ways your chatbot may be killing your customer journey — and the steps you can take to put a stop to it.

[Read the Guide]

Ensure Your AI Agent Is Always Cutting Edge

If you’re still thinking about Jeeves and wondering what happened to him (and we know you are), he was officially “retired” in 2006, just one year after the company re-branded to Ask.com. Staying on the forefront of technology and ensuring your company consistently offers customers a cutting-edge experience isn’t easy — especially in the rapidly evolving world of AI.

That’s why leading companies rely on Quiq’s advanced agentic AI platform to keep their AI agents ahead of the curve. It gives technical teams the flexibility, visibility, and control they crave to build secure, custom experiences that satisfy business needs and their own desire to create and manage AI agents. At the same time, it handles the maintenance, scalability, and ecosystem required for CX leaders to deliver impactful AI-powered customer interactions, saving time, money, and resources.

We would love to show you Quiq in action! Schedule a free, personalized demo today.

5 Engineering Practices For Your LLM Toolkit

Large Language Models play a pivotal role in automating conversations, enhancing customer experiences, and scaling support capabilities. However, delivering on these promises goes beyond simply deploying powerful models; it requires a comprehensive LLM (or generative AI) toolkit that enables effective integration, orchestration, and monitoring of agentic AI workflows.

In this article, I’ll touch on a few time-tested software practices that have helped me bridge the gap between traditional software development and agentic AI engineering.

1. API Discoverability and Graph-Based RESTful APIs

Data access is crucial for AI agents tasked with understanding and responding to complex customer inquiries. Modern LLM developer tools should facilitate understanding and access through APIs that are well defined with JSON-LD, GraphQL, or the OpenAPI spec. These API protocols enable AI agents to dynamically query and interpret interconnected data structures. The more discoverable your APIs, the easier it becomes for your AI to provide personalized and accurate service.

Much like human agents onboarding to your support team, AI agents need access to and an understanding of your system data to provide relevant and accurate customer service.
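
As a rough sketch of what API discoverability can buy you, the snippet below flattens a toy OpenAPI document into tool definitions an AI agent could discover and call. The spec contents and field names are illustrative only, not a real vendor API.

```python
# Sketch (assumed spec shape): turning an OpenAPI document into
# LLM-friendly tool descriptions an agent can discover.
import json

openapi_spec = {
    "paths": {
        "/orders/{orderId}": {
            "get": {
                "operationId": "getOrder",
                "summary": "Look up an order by its ID",
                "parameters": [
                    {"name": "orderId", "in": "path", "required": True,
                     "schema": {"type": "string"}}
                ],
            }
        }
    }
}


def spec_to_tools(spec: dict) -> list[dict]:
    """Flatten OpenAPI operations into tool definitions for an AI agent."""
    tools = []
    for path, methods in spec["paths"].items():
        for method, op in methods.items():
            tools.append({
                "name": op["operationId"],
                "description": op.get("summary", ""),
                "parameters": {
                    p["name"]: p["schema"]["type"] for p in op.get("parameters", [])
                },
                "endpoint": f"{method.upper()} {path}",
            })
    return tools


print(json.dumps(spec_to_tools(openapi_spec), indent=2))
```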

2. Design by Contract with AI Function Calling

Ensuring reliable AI-to-system interactions requires strict compliance with well-defined operational rules. This is where the practice of design by contract proves invaluable. The best LLM tools should establish clear contracts for AI functions, ensuring that each interaction occurs within its designated boundaries and yields the expected outcomes. This structured approach minimizes errors and enhances the reliability of AI agents by mandating validation checks when reading or writing data.

Your LLM toolkit should promote and enforce a defined data schema for your AI agents. For more insights, refer to Quiq’s exploration of this topic in their LLM Function Calling post.
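
Here’s a minimal, hypothetical sketch of design by contract applied to an LLM-callable function: explicit pre- and post-condition checks enforce the contract before and after the work is done. The function and field names are made up for illustration.

```python
# Design-by-contract sketch: the "contract" for an LLM-callable function is
# enforced with explicit pre- and post-condition checks. Names are hypothetical.

REFUND_CONTRACT = {
    "required": {"order_id": str, "amount": (int, float)},
    "max_refund": 500.00,
}


def validate_arguments(args: dict) -> None:
    """Precondition: the LLM must supply every required field with the right type."""
    for field_name, expected_type in REFUND_CONTRACT["required"].items():
        if field_name not in args:
            raise ValueError(f"Missing required argument: {field_name}")
        if not isinstance(args[field_name], expected_type):
            raise TypeError(f"{field_name} must be of type {expected_type}")
    if args["amount"] <= 0 or args["amount"] > REFUND_CONTRACT["max_refund"]:
        raise ValueError("amount is outside the allowed refund range")


def issue_refund(args: dict) -> dict:
    """LLM-callable function; executes only if the contract holds."""
    validate_arguments(args)                      # precondition
    result = {"order_id": args["order_id"], "refunded": args["amount"]}
    assert result["refunded"] == args["amount"]   # postcondition sanity check
    return result


print(issue_refund({"order_id": "A-1042", "amount": 25.00}))
```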

3. Functional and Aspect-Oriented Programming

Functional programming emphasizes pure functions and immutability, and when combined with aspect-oriented programming, which tackles cross-cutting concerns, it establishes robust and scalable frameworks ideal for AI development.

Modern LLM toolkits that embrace these paradigms offer sophisticated tools for constructing more resilient cognitive architectures. These components can be independently developed, tested, and reused, making them ideal for assembling complex AI agents, including agent swarms. Agent swarms, consisting of multiple AI agents working in concert, benefit particularly from an atomic yet cohesive approach to decision making. Your design choices will become crucial as the demands of customer interactions grow more complex over time.
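
A small sketch of what this combination can look like in practice: a pure, easily tested decision function, with a cross-cutting audit concern layered on as a decorator (a simple aspect) instead of being woven into the business logic. The function names are illustrative.

```python
# Functional + aspect-oriented sketch: a pure function for the decision,
# a decorator for the cross-cutting audit/logging concern.
import functools
import logging

logging.basicConfig(level=logging.INFO)


def audited(fn):
    """Aspect: log every call and result without touching the function body."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        logging.info("%s(%s) -> %s", fn.__name__, args, result)
        return result
    return wrapper


@audited
def choose_channel(preferred: str, available: tuple[str, ...]) -> str:
    """Pure function: output depends only on its inputs, no side effects."""
    return preferred if preferred in available else available[0]


choose_channel("sms", ("email", "sms", "voice"))  # logs and returns "sms"
```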

4. Observability: Ensuring Transparency and Performance

Your LLM toolkit should offer comprehensive monitoring capabilities that allow developers and business operators to track how AI agents make decisions. These tools should enable both high-level and deep-dive analysis that clearly shows how inputs are processed and decisions are formulated. This level of transparency is crucial for troubleshooting and optimizing performance.

By offering detailed insights into AI performance and behavior, modern LLM toolkits play a critical role in helping businesses maintain high service quality and build trust in their AI-driven solutions. The ability to trace how and why a message was delivered or an action taken has never been so important, and top LLM dev tools provide it. Traditional logging and APM software won’t cut it in the era of stochastic AI. Please see Quiq’s LLM Observability post for a deeper discussion on the topic.
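
As an illustration (not any specific vendor’s observability API), the sketch below records each step of an agent’s reasoning as a structured span so a conversation can be reconstructed after the fact.

```python
# Tracing sketch: each step of an agent's work becomes a timestamped span
# that can be exported and inspected later.
import json
import time
import uuid


class Trace:
    def __init__(self, conversation_id: str):
        self.conversation_id = conversation_id
        self.spans = []

    def record(self, step: str, **details):
        """Append a timestamped span describing one step of the agent's work."""
        self.spans.append({
            "span_id": str(uuid.uuid4()),
            "timestamp": time.time(),
            "step": step,
            **details,
        })

    def export(self) -> str:
        return json.dumps(
            {"conversation_id": self.conversation_id, "spans": self.spans},
            indent=2,
        )


trace = Trace(conversation_id="c-981")
trace.record("pre_check", in_scope=True)
trace.record("retrieval", documents=["return-policy.md"])
trace.record("generation", model="example-llm", tokens=182)
print(trace.export())
```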

5. Continuous Integration

Continuous integration (CI) systems within LLM toolkits play an important role in the development, testing, integration, and deployment of AI agents. Your toolkit should ensure agents adapt correctly to changes in models, data, logic, or your system at large. LLM toolkits that oversee the lifecycle of AI agents need to be resilient to updates and iterative improvements based on real-world scenarios and the emerging capabilities of the models.

Additionally, modern LLM toolkits, such as those highlighted in Quiq’s AI Studio Debug Workbench, should provide an environment for running a wide range of scenarios. This includes allowing developers to closely inspect, recreate, and replay AI behavior on demand or at test time. You will need to be well informed and able to react quickly and confidently across the lifecycle of your project.
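
To ground this, here’s a toy example of the kind of scenario-replay regression test a CI pipeline might run on every change. `run_agent` is a hypothetical stand-in for your real agent entry point; in CI these tests would typically run via pytest on each commit.

```python
# Scenario-replay regression sketch: recorded scenarios are replayed against
# the agent whenever models, data, or logic change.

SCENARIOS = [
    {"input": "Where is my order A-1042?", "must_mention": "A-1042"},
    {"input": "Can I change my flight by text?", "must_mention": "text"},
]


def run_agent(message: str) -> str:
    """Stub agent; replace with a call into your real agentic workflow."""
    return f"Sure, I can help with that: {message}"


def test_recorded_scenarios():
    """Replay each recorded scenario and assert the response stays on track."""
    for scenario in SCENARIOS:
        response = run_agent(scenario["input"])
        assert scenario["must_mention"] in response, scenario["input"]


if __name__ == "__main__":
    test_recorded_scenarios()
    print("All scenarios passed.")
```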

Remaining Skeptical in the Era of AI

As a software developer with 20 years of experience, I’ve found that a healthy dose of skepticism and reliance on time-tested practices have helped me remain focused on building robust solutions. Not only has this experience proven effective over the years, but it has also laid a strong foundation for my journey as an Applied AI Engineer.

However, LLMs present new challenges that traditional tools and techniques alone can’t fully address. To unlock the potential of these models, we must remain adaptable and open to integrating new tools, techniques, and tactics. While I still often use Emacs for editing, I’ve also come to fully embrace an LLM toolkit equipped with a visual pro-code interface that promotes solid engineering practices. An LLM toolkit will not erase the need for sound software engineering practices, but it does provide me, my team, and our customers with the tools necessary to unlock the power of AI in an enterprise environment.

Finally, tools like AI Studio offer a surface where we can collaborate with our counterparts across the business to help grow AI that is well understood, reliable, and impactful. Without collaboration, an AI initiative will likely grind to a halt. You will need some new tools to help you bridge the gap.

To learn more about how Quiq is helping software engineers, operational teams, and business leaders come together around AI in 2025, check out AI Studio.