The 12 Most Asked Questions About AI, Answered Plainly

Key Takeaways

  • Today’s AI is narrow, not general: deployed AI systems excel at specific tasks like fraud detection or customer queries but cannot perform broad human-like reasoning across domains.
  • Generative AI creates content while agentic AI takes autonomous actions: generative models produce text and images, whereas agentic systems execute tasks, call APIs, and make decisions independently.
  • AI model quality depends entirely on training data quality: biased, sparse, or unrepresentative data produces biased, brittle, or underperforming AI outputs.
  • Current evidence shows AI augments jobs rather than eliminates them: MIT research found generative AI in contact centers accelerated junior agent learning and reduced turnover instead of replacing workers.
  • Successful AI deployment requires defined success criteria, configurable guardrails, and human oversight loops: projects fail most often from unclear KPIs, unconstrained AI behavior, or lack of feedback mechanisms.

People have a lot of questions about AI right now — and most of the answers they find online are either too shallow or too technical to be useful. I’ve spent years working at the intersection of AI and customer experience, and the questions about AI I hear most often fall into a predictable set: What is it, really? What can it do? What should we be worried about? This article answers all twelve of the most common ones, directly and without hype.

1. Questions about AI: Where do they come from, and why do they matter?

The term “artificial intelligence” was first used at the Dartmouth Conference in 1956, organized by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon. Their ambition was to build machines that could use language, form concepts, and solve problems then reserved for human creativity. They estimated a summer’s work would get them most of the way there.

They were off by about seven decades — and counting.

The gap between that optimism and reality isn’t a failure. It’s a testament to how genuinely hard it is to replicate human intellect. What has emerged instead is something more useful than the original vision: a set of specific, powerful capabilities that are changing how businesses operate and how people work. Understanding those capabilities — and their limits — is what separates organizations that get real value from AI from those that chase demos.

2. Artificial intelligence: What actually is it?

Artificial intelligence is the ability of machines to perform tasks that normally require human intelligence — learning, problem-solving, pattern recognition, and decision making. AI systems learn from data to identify patterns and make predictions, rather than following rigid, hand-coded rules.

The most useful framework I’ve found comes from Stuart Russell and Peter Norvig’s textbook Artificial Intelligence: A Modern Approach. They describe four approaches:

  • Think like humans: Replicate human cognitive processes, including the messy, intuitive parts.
  • Act like humans: Behave in ways indistinguishable from a human — the standard behind the Turing test.
  • Think rationally: Reason according to formal logic and probability.
  • Act rationally: Choose actions that maximize outcomes, even without full deliberation.

From a practical standpoint, AI today spans several distinct branches:

  • Agentic AI: Systems that take autonomous, goal-directed actions, rather than simply responding to prompts.
  • Machine learning: Algorithms that improve performance over time by learning from existing data.
  • Natural language processing (NLP): Techniques that enable human-computer interaction through text and speech.
  • Computer vision: Systems that interpret and analyze visual data — including self-driving cars and medical imaging.
  • Robotics: Autonomous systems that perform tasks in the physical world.
  • Expert systems: Encode domain-specific knowledge to support decision making.

Each branch unlocks different AI applications. The right one depends entirely on what problem you’re trying to solve.

3. AI systems: What are narrow vs. general?

Most AI deployed today is narrow AI — also called weak AI — meaning it performs one specific task well. A spam filter is narrow. So is a fraud detection algorithm. These systems are highly capable within their domain and perform poorly outside it.

The theoretical counterpart is general AI, sometimes called strong AI or AGI. A general AI system could perform any intellectual task a human can. We don’t have this yet. What we have is an expanding set of narrow capabilities that, when combined, can handle increasingly complex workflows.

Understanding the difference matters because it shapes expectations. When a contact center deploys an AI agent to handle customer queries, that agent is narrow AI — extremely good at a defined set of tasks, not a replacement for human judgment across the board.

4. AI tools: What can they actually do today?

The most common question I get from CX leaders isn’t philosophical — it’s practical: what can these AI tools actually do for my business?

Here’s what’s working right now, with evidence behind it:

  • Contact center automation: Large language models can handle routine, repetitive tasks like answering FAQs, summarizing conversations, and drafting responses — freeing agents to focus on complex issues.
  • Drug discovery: AI is identifying molecular candidates at a pace no human research team could match.
  • Fraud detection: Machine learning models use data points to flag anomalous transactions in real time, with far fewer false positives than rule-based systems.
  • Language translation: Neural machine translation has made real-time, high-quality translation available at scale.
  • Predictive maintenance: Automated systems analyze equipment sensor data to predict failures before they happen, reducing downtime in manufacturing and logistics.
  • Virtual assistants: Consumer-facing AI handles scheduling, information retrieval, and task execution across millions of daily interactions.
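To make the fraud-detection bullet concrete, here is a deliberately tiny sketch: flag a transaction whose amount sits far outside a customer’s historical spend. Production systems learn over many features with trained models; the single z-score check, the threshold, and the numbers here are purely illustrative.

```python
# Toy anomaly flag in the spirit of the fraud-detection example above:
# flag transactions far from the customer's typical spend.
# Real systems use learned models over many features, not one z-score.

from statistics import mean, stdev

history = [42.0, 39.5, 45.0, 41.2, 38.8, 44.1, 40.6, 43.3]  # past amounts
new_amounts = [41.0, 960.0]

mu, sigma = mean(history), stdev(history)
for amount in new_amounts:
    z = (amount - mu) / sigma          # distance from typical spend
    flag = "FLAG" if abs(z) > 3 else "ok"
    print(f"${amount:.2f}: z={z:.1f} -> {flag}")
```

A rules engine would hard-code the threshold; a machine learning system effectively learns thresholds like this, per customer and per feature, from data.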

Generative AI specifically — the category that includes large language models and generative adversarial networks — has expanded what’s possible. These generative AI models don’t just analyze existing data; they produce new content. Text, code, images, audio. That’s a meaningful shift in what AI can contribute to knowledge work.

5. AI models: How do they learn and why does data quality matter?

Every AI model is only as good as the data it was trained on. This is not a caveat — it’s a fundamental constraint of how these systems work.

The learning process works roughly like this: a model is exposed to massive datasets, adjusts its internal parameters based on feedback, and gradually improves its ability to make accurate predictions or generate useful outputs. The three main approaches are:

  • Supervised learning: The model learns from labeled examples — inputs paired with correct outputs.
  • Unsupervised learning: The model finds patterns in unlabeled data without explicit guidance.
  • Reinforcement learning: The model learns by receiving rewards or penalties based on the outcomes of its actions.
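As a toy illustration of the supervised case, the sketch below fits a two-parameter model to labeled examples by gradient descent. The data and learning rate are invented for illustration; real models adjust billions of parameters through the same basic adjust-on-feedback loop.

```python
# Minimal supervised learning: fit y = w*x + b from labeled examples
# using gradient descent. A toy version of "adjusting internal
# parameters based on feedback" -- real models have billions of parameters.

examples = [(1.0, 3.1), (2.0, 4.9), (3.0, 7.2), (4.0, 8.8)]  # (input, label)

w, b = 0.0, 0.0
lr = 0.01  # learning rate: how big each adjustment is

for epoch in range(2000):
    for x, y in examples:
        pred = w * x + b
        error = pred - y       # feedback: how wrong was the prediction?
        w -= lr * error * x    # nudge parameters to reduce the error
        b -= lr * error

print(f"learned w={w:.2f}, b={b:.2f}")  # ends up near w=2, b=1
```

Unsupervised learning drops the labels and looks for structure on its own; reinforcement learning replaces the labeled error with a reward signal from the environment.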

Deep learning models — the kind that power most modern AI — use neural networks with many layers to extract increasingly abstract features from data. This layered architecture is what enables capabilities like natural language understanding and image recognition.

The implication for businesses is direct: poor data produces poor AI. Biased data produces biased outputs. Sparse data produces brittle models. AI adoption that skips the data preparation step tends to produce AI that underperforms or fails in production rather than streamlining operations.

More data, structured correctly, generally means better results — but only up to a point. The composition and representativeness of the data matters as much as the volume.

6. AI technologies: What’s the difference between generative and agentic AI?

I want to be precise here, because these two terms get conflated constantly.

Generative AI creates new content — text, images, code, audio — by learning patterns from training data. ChatGPT is generative AI. Midjourney is generative AI. These systems are extraordinarily useful for content creation, summarization, and drafting.

Agentic AI goes further. It takes autonomous, goal-directed actions in the world. It doesn’t just generate a response — it executes tasks, calls APIs, makes decisions, and adapts based on outcomes. An agentic AI system handling a customer complaint doesn’t just draft a reply; it looks up the order, checks the return policy, initiates the refund, and sends the confirmation.
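The refund workflow described above might be sketched like this. Every helper function here is a hypothetical stand-in for a real integration, not an actual API; the point is the shape of the system, which acts and adapts rather than only drafting text.

```python
# Hypothetical sketch of an agentic refund workflow. Each helper is a
# stand-in for a real API call (order system, policy engine, payments,
# email). Names and logic are invented for illustration.

def look_up_order(order_id):
    return {"id": order_id, "item": "headphones", "days_since_delivery": 10}

def within_return_policy(order, window_days=30):
    return order["days_since_delivery"] <= window_days

def initiate_refund(order):
    return f"refund issued for order {order['id']}"

def send_confirmation(message):
    return f"email sent: {message}"

def handle_complaint(order_id):
    order = look_up_order(order_id)        # step 1: gather context
    if not within_return_policy(order):    # step 2: check the policy
        return "escalate to human agent"   # guardrail: defer when unsure
    result = initiate_refund(order)        # step 3: take the action
    return send_confirmation(result)       # step 4: close the loop

print(handle_complaint("A-1042"))
```

A purely generative system would stop after drafting the reply text; the agentic version owns the whole sequence, including the decision to escalate.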

The distinction matters for deployment. Generative AI is a powerful tool. Agentic AI is a capable collaborator. The AI technologies underlying both — deep learning, natural language processing (NLP), reinforcement learning, and more — are often the same. What differs is the architecture and the degree of autonomy granted to the system.

For a deeper dive into how agentic AI works in practice, our overview of agentic AI covers the mechanics in detail.

7. AI ethics: What should you know about bias, accountability, and the black box problem?

AI ethics is not a soft topic. It has hard, measurable consequences.

When AI systems are trained on biased or unrepresentative data, they replicate and amplify those biases at scale. In hiring, lending, law enforcement, and healthcare, that means real harm to real people. In contact centers, it can mean systematically worse service for certain customer segments — a problem that’s easy to miss and hard to fix after deployment.

The “black box” problem compounds ethical considerations. Many deep learning models make decisions through processes that are difficult to interpret, even for the engineers who built them. This lack of transparency creates accountability gaps: if a model denies a loan or misclassifies a medical image, who is responsible?

The answer today is: the organization that deployed it. AI is a tool, not a legal entity. That means companies bear full responsibility for what their AI does. Responsible deployment requires:

  • Diverse, representative training data that reflects the populations the system will serve.
  • Regular bias audits that test model outputs across demographic groups.
  • Human review in high-stakes decisions — AI assists, humans decide.
  • Audit trails that document how outputs were produced.
  • Explainability tools like SHAP and LIME that help teams understand model behavior.
  • Adherence to frameworks like NIST’s AI Risk Management Framework or ISO/IEC 42001.

Bias prevention requires ongoing vigilance as models are updated, data drifts, and deployment contexts change.
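A minimal version of the bias audit listed above can be as simple as comparing outcome rates across demographic groups. The decision data and the 80% (“four-fifths”) disparity threshold below are illustrative; a real audit would use production logs and statistically rigorous tests.

```python
# Illustrative bias audit: compare model approval rates across groups
# and flag any group whose rate falls below 80% of the best-served
# group's rate. The decisions here are made-up toy data.

decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

rates = {}
for group, approved in decisions:
    totals = rates.setdefault(group, [0, 0])  # [approved, total]
    totals[0] += approved
    totals[1] += 1

approval = {g: a / n for g, (a, n) in rates.items()}
best = max(approval.values())
for group, rate in approval.items():
    ratio = rate / best
    status = "OK" if ratio >= 0.8 else "FLAG for review"
    print(f"{group}: approval {rate:.0%}, ratio {ratio:.2f} -> {status}")
```

The value of even a crude check like this is that it runs continuously: rerun it after every model update and data refresh, and drift-driven bias surfaces before customers feel it.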

8. Data security: What are the risks no one talks about enough?

AI systems require access to large volumes of data to function. That creates data security exposure that many organizations underestimate at the start of an AI project.

The primary concerns are:

  • Training data protection: The data used to train models often contains sensitive customer, employee, or business information. If that data is mishandled or exposed, the consequences extend far beyond the AI system itself.
  • Inference-time privacy: When users interact with AI systems, those interactions may contain personal information. How that data is stored, used, and protected matters.
  • Adversarial attacks: Bad actors can craft inputs designed to manipulate AI outputs — a real concern for systems that handle financial transactions or customer authentication.
  • Regulatory compliance: GDPR, CCPA, HIPAA, and other regulations impose specific obligations on how AI systems handle personal data.

At Quiq, we treat data security as a foundational requirement, not an afterthought. Our platform is SOC 2 Type II certified, HIPAA-compliant, and GDPR-ready. All customer data is encrypted in transit and at rest. Your data in Quiq belongs to you — we never use it for any purpose other than serving your business.

9. AI impact: What happens to jobs?

The concern that AI will eliminate human labor is not new. It was raised when mechanized looms appeared, when computers arrived, and when the internet changed how work was organized. Each time, the technology shifted the composition of work, rather than eliminating it.

The evidence so far on large language models is consistent with that pattern. MIT economists Erik Brynjolfsson, Danielle Li, and Lindsey R. Raymond studied generative AI use in a large contact center and found it accelerated the learning process for junior agents — helping them reach senior-level performance faster.

The result was lower stress, reduced turnover, and higher output. Not job displacement.

That doesn’t mean job displacement is impossible. It means the current evidence points toward AI changing what people do, not whether they work. Simple tasks get automated; agents focus on judgment, empathy, and complex problem solving, and productivity rises. Manufacturing jobs that involve purely repetitive physical tasks face the most direct pressure. Knowledge work is more likely to be augmented than replaced.

Common sense says that broader adoption of AI will require workers to develop new skills and organizations to redesign workflows. That’s real disruption. But it’s different from the apocalyptic scenario that dominates headlines.

10. AI solutions: What makes deployment succeed or fail?

I’ve seen AI projects succeed and fail, and the pattern is consistent. The ones that fail usually share one of three problems:

  1. Unclear success criteria. Teams deploy AI without agreeing on what “working” looks like. Without defined KPIs, there’s no way to know whether the system is performing or not.
  2. Weak guardrails. AI systems that can say anything, do anything, or access anything tend to go wrong in ways that are hard to predict. Enterprise-grade AI solutions need configurable guardrails that constrain AI behavior to what the business actually wants.
  3. No human oversight loop. AI that operates without any human review — especially early in deployment — accumulates errors without correction. The process requires feedback.

The deployments that work share a different set of characteristics: a specific, high-value use case, clean and well-structured data, rigorously tested prompt engineering, configurable guardrails, and a clear escalation path to humans when the AI reaches its limits.
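To show what “configurable guardrails” and “a clear escalation path” can mean mechanically, here is a schematic guardrail layer. The topics, discount rule, and function names are invented for illustration, and the checks are far simpler than what an enterprise platform enforces.

```python
# Toy guardrail layer: constrain an AI agent's draft reply before it
# ships. Business rules and names are illustrative, not a real platform.

BLOCKED_TOPICS = {"medical advice", "legal advice"}
MAX_DISCOUNT_PCT = 15  # business rule the agent must never exceed

def apply_guardrails(draft_reply, offered_discount_pct, topic):
    """Return (reply, action): either send the reply or escalate."""
    if topic in BLOCKED_TOPICS:
        return None, "escalate: restricted topic"
    if offered_discount_pct > MAX_DISCOUNT_PCT:
        return None, "escalate: discount exceeds policy"
    return draft_reply, "send"

reply, action = apply_guardrails("Here's 10% off your next order.", 10, "billing")
print(action)  # "send"
```

The key design property is that the rules live outside the model: the business changes `MAX_DISCOUNT_PCT`, not the AI’s weights, and every escalation lands with a human.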

At Quiq, our AI Studio is built around this model. You bring your content as-is, guide the agent with process guides, set guardrails, run simulations, and get step-by-step visibility into every decision. That’s how you maintain control while deploying AI at enterprise scale.

11. AI impact on society: What are the risks worth taking seriously?

I want to address impact at a broader level, because some of the risks are real and deserve honest treatment.

Near-term social risks are already visible. Generative AI makes it dramatically cheaper to produce disinformation at scale, including deepfakes that are increasingly difficult to detect. Political and commercial actors are already using these capabilities. This is not speculative — it’s happening.

Longer-term risks involve the trajectory of AI capabilities themselves. AI research has produced systems that improve rapidly and in ways that are difficult to predict. The leap from GPT-2 to GPT-3 was large. The leap from GPT-3 to GPT-4 was larger. The architecture of these systems — neural networks trained on massive datasets — produces capabilities that emerge from the training process, rather than being explicitly programmed.

The concern that a superintelligent AI system could pursue goals misaligned with human values is not science fiction. It’s a recognized research problem in computer science. The “specification gaming” failure mode — where a system maximizes a proxy objective in ways its designers didn’t intend — is well documented in reinforcement learning.

A famous example: OpenAI’s boat-racing agent (trained on the game CoastRunners) discovered it could maximize its reward by spinning in circles to collect bonus points, rather than actually racing.


The same dynamic at the scale of a truly capable general AI system is what concerns researchers working on AI alignment. Whether that risk is near-term or distant is genuinely uncertain. What’s not uncertain is that it’s worth taking seriously now, while the field is still developing the tools to address it.

Does this mean current AI systems pose existential risks? No. Today’s systems — including the most capable large language models — are narrow AI. They don’t have goals in the sense that creates alignment risk. But the pace of progress in AI research makes it worth building governance frameworks now rather than later.

12. What does the future of AI look like?

Honestly, I don’t think anyone can answer this with confidence. The trajectory of AI capabilities has consistently surprised even the researchers closest to the work. What I can say with confidence:

  • AI will continue to get better at specific tasks, particularly those involving language, pattern recognition, and decision making under uncertainty.
  • Adoption will accelerate as deployment costs fall and the tooling matures.
  • The organizations that build governance and oversight into their AI programs now will be better positioned than those that treat it as an afterthought.
  • Key questions remain genuinely open — around alignment with human values, accountability, and the long-term direction of general AI.

The near-term picture for contact centers is clearer. AI is already helping human agents resolve queries faster, handle more volume, and improve customer satisfaction. Quiq customers see 67% reductions in cost per interaction, 89% CSAT scores matching human agents, and resolution rates that continue to improve as more integrations come online.

Those are the results that matter right now. The deeper questions about AI’s long-term trajectory deserve attention, too — but they shouldn’t distract from the practical work of deploying AI responsibly and effectively today.

The bottom line

The questions about AI that matter most are practical. What can it do, what are the real risks, and how do you deploy it responsibly? The answers are clearer than the noise around AI suggests. Current AI systems are powerful, specific, and genuinely useful. They’re also limited, data-dependent, and require real governance to deploy well.

If you’re evaluating AI for your contact center or customer experience operation, the gap between a well-deployed system and a poorly deployed one is significant. The right platform gives you transparency into every AI decision, guardrails you control, and the ability to maintain your brand voice at scale.

Book a demo to see how Quiq approaches AI deployment for enterprise CX — and what it looks like when it’s working.

Frequently Asked Questions (FAQs)

What is artificial intelligence in simple terms?

Artificial intelligence is the ability of machines to perform tasks that normally require human intelligence — including learning, reasoning, and problem solving. AI systems learn from data to identify patterns, then use those patterns to make predictions or take actions, rather than following hand-coded rules.

What are the main types of AI?

AI is commonly divided by scope, narrow (designed for specific tasks) versus general (theoretical, not yet achieved), and by branch, such as machine learning, natural language processing, computer vision, and agentic systems. Virtually all AI deployed in production today is narrow — highly capable within a defined domain and unable to generalize beyond it.

How does AI actually learn?

AI models learn by processing large volumes of data and adjusting their internal parameters to improve prediction accuracy over time. The three primary learning approaches are supervised learning (labeled examples), unsupervised learning (pattern discovery without labels), and reinforcement learning (behavior shaped by rewards and penalties). Deep learning models apply layered neural networks to extract increasingly complex patterns from that data.

Will AI take my job?

Current evidence indicates AI changes the nature of work, rather than eliminating jobs. An MIT study of generative AI in a large contact center found it accelerated junior agent performance and reduced turnover — it did not replace workers. Routine tasks are the most likely to be automated; roles requiring judgment, empathy, and complex problem solving are more likely to be augmented.

Is AI dangerous?

AI poses real, documented near-term risks — including large-scale disinformation, deepfakes, and algorithmic bias — that require active governance and human oversight to manage. Long-term risks from advanced AI systems, including misalignment with human values, are taken seriously by researchers, but remain speculative and do not apply to today’s narrow systems. Responsible deployment, bias auditing, and ongoing human oversight are the appropriate response to both categories of risk.

How do I address AI bias in my organization?

Addressing bias requires using diverse, representative training data, conducting regular bias audits across demographic groups, applying explainability tools such as SHAP and LIME, and maintaining human review in high-stakes decision loops. Bias prevention is an ongoing operational discipline — not a one-time setup task — because model updates, data drift, and changing deployment contexts can reintroduce bias over time.

What should enterprises prioritize in AI adoption?

Enterprises should begin AI adoption with a specific, high-value use case and define measurable success criteria before deployment. Clean, well-structured data, configurable guardrails that constrain AI behavior, and a clear escalation path to human agents are the operational foundations that separate successful deployments from failed ones.

Exploring Cutting-Edge Research in Large Language Models and Generative AI

By the calendar, ChatGPT was released just a few months ago. But subjectively, it feels as though 600 years have passed since we all read “as a large language model…” for the first time.

The pace of new innovations is staggering, but we at Quiq like to help our audience in the customer experience and contact center industries stay ahead of the curve (even when that requires faster-than-light travel).

Today, we will look at what’s new in generative AI, and what will be coming down the line in the months ahead.

Where will Generative AI be applied?

First, let’s start with industries that will be strongly impacted by generative AI. As we noted in an earlier article, training a large language model (LLM) like ChatGPT mostly boils down to showing it tons of examples of text until it learns a statistical representation of human language well enough to generate sonnets, email copy, and many other linguistic artifacts.

There’s no reason the same basic process (have it learn from many examples and then create its own) couldn’t be used elsewhere, and in the next few sections, we’re going to look at how generative AI is being used in a variety of different industries to brainstorm structures, new materials, and a billion other things.

Generative AI in Building and Product Design

If you’ve had a chance to play around with DALL-E, Midjourney, or Stable Diffusion, you know that the results can be simply remarkable.

It’s not a far leap to imagine that it might be useful for quickly generating ideas for buildings and products.

The emerging field of AI-generated product design is doing exactly this. With generative image models, designers can use text prompts to rough out ideas and see them brought to life. This allows for faster iteration and quicker turnaround, especially given that creating a proof of concept is one of the slower, more tedious parts of product design.

Image source: Board of Innovation


For the same reason, these tools are finding use among architects who are able to quickly transpose between different periods and styles, see how better lighting impacts a room’s aesthetic, and plan around themes like building with eco-friendly materials.

There are two things worth pointing out about this process. First, there’s often a learning curve because it can take a while to figure out prompt engineering well enough to get a compelling image. Second, there’s a hearty dose of serendipity. Often the resulting image will not be quite what the designer had in mind, but it’ll be different in new and productive ways, pushing the artist along fresh trajectories that might never have occurred to them otherwise.

Generative AI in Discovering New Materials

To quote one of America’s most renowned philosophers (Madonna), we’re living in a material world. Humans have been augmenting their surroundings since we first started chipping flint axes back in the Stone Age; today, the field of materials science continues the long tradition of finding new stuff that expands our capabilities and makes our lives better.

This can take the form of something (relatively) simple like researching a better steel alloy, or something incredibly novel like designing a programmable nanomaterial.

There’s just one issue: it’s really, really difficult to do this. It takes a great deal of time, energy, and effort to even identify plausible new materials, to say nothing of the extensive testing and experimenting that must then follow.

Materials scientists have been using machine learning (ML) in their process for some time, but the recent boom in generative AI is driving renewed interest. There are now a number of projects using techniques such as variational autoencoders, recurrent neural networks, and generative adversarial networks to learn a mapping between a material’s underlying structure and its final properties, then using that mapping to propose plausible new materials.

It would be hard to overstate how important the use of generative AI in materials science could be. If you imagine the space of possible molecules as being like its own universe, we’ve explored basically none of it. What new fabrics, medicines, fuels, fertilizers, conductors, insulators, and chemicals are waiting out there? With generative AI, we’ve got a better chance than ever of finding out.

Generative AI in Gaming

Gaming is often an obvious place to use new technology, and that’s true for generative AI as well. The principles of generative design we discussed two sections ago could be used in this context to flesh out worlds, costumes, weapons, and more, but it can also be used to make character interactions more dynamic.

From Navi trying to get our attention in Ocarina of Time to GlaDOS’s continual reminders that “the cake is a lie” in Portal, non-playable characters (NPCs) have always added texture and context to our favorite games.

Powered by LLMs, these characters may soon be able to have open-ended conversations with players, adding more immersive realism to the gameplay. Rather than pulling from a limited set of responses, they’d be able to query LLMs to provide advice, answer questions, and shoot the breeze.

What’s Next in Generative AI?

As impressive as technologies like ChatGPT are, people are already looking for ways to extend their capabilities. Now that we’ve covered some of the major applications of generative AI, let’s look at some of the exciting applications people are building on top of it.

What is AutoGPT and how Does it Work?

ChatGPT can already do things like generate API calls and build simple apps, but as long as a human has to actually copy and paste the code somewhere useful, its capacities are limited.

But what if that weren’t an issue? What if it were possible to spin ChatGPT up into something more like an agent, capable of semi-autonomously interacting with software or online services to complete strings of tasks?

This is exactly what Auto-GPT is intended to accomplish. Auto-GPT is an application built by developer Toran Bruce Richards, composed of two parts: an LLM (either GPT-3.5 or GPT-4) and a separate “bot” that works with the LLM.

By repeatedly querying the LLM, the bot is able to take a relatively high-level task like “help me set up an online business with a blog and a website” or “find me all the latest research on quantum computing”, decompose it into discrete, achievable steps, then iteratively execute them until the overall objective is achieved.

At present, Auto-GPT remains fairly primitive. Just as ChatGPT can get stuck in repetitive and unhelpful loops, so too can Auto-GPT. Still, it’s a remarkable advance, and it’s spawned a series of other projects attempting to do the same thing in a more consistent way.
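The plan-and-execute loop described above can be sketched schematically. Here `query_llm` and `run_tool` are hypothetical stand-ins with a canned plan, not the real Auto-GPT internals; what matters is the loop structure, including the iteration cap that guards against exactly the unproductive looping mentioned above.

```python
# Schematic of an Auto-GPT-style loop: repeatedly ask an LLM for the
# next step toward a goal, execute it, and feed the result back in.
# query_llm and run_tool are invented stand-ins for this illustration.

def query_llm(goal, history):
    # Pretend LLM: returns the next step, or "DONE" when satisfied.
    steps = ["search for sources", "summarize findings", "DONE"]
    return steps[len(history)]

def run_tool(step):
    return f"result of '{step}'"

def run_agent(goal, max_iterations=10):
    history = []
    for _ in range(max_iterations):   # cap iterations to avoid loops
        step = query_llm(goal, history)
        if step == "DONE":
            break
        history.append(run_tool(step))
    return history

print(run_agent("find the latest research on quantum computing"))
```

Real agent frameworks add memory, tool selection, and error handling around this core, but the decompose-execute-observe cycle is the heart of all of them.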

The creators of AssistGPT bill it as a “General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn”. It handles multi-modal tasks (i.e. tasks that rely on vision or sound and not just text) better than Auto-GPT, and by integrating with a suite of tools it is able to achieve objectives that involve many intermediate steps and sub-tasks.

SuperAGI, in turn, is just as ambitious. It’s a platform that offers a way to quickly create, deploy, manage, and update autonomous agents. You can integrate them into applications like Slack or vector databases, and it’ll even ping you if an agent gets stuck somewhere and starts looping unproductively.

Finally, there’s LangChain, which is a similar idea. LangChain is a framework that is geared towards making it easier to build on top of LLMs. It features a set of primitives that can be stitched into more robust functionality (not unlike “for” and “while” loops in programming languages), and it’s even possible to build your own version of AutoGPT using LangChain.

What is Chain-of-Thought Prompting and How Does it Work?

In the misty, forgotten past (i.e. 5 months ago), LLMs were famously bad at simple arithmetic. They might be able to construct elegant mathematical proofs, but if you asked them what 7 + 4 is, there was a decent chance they’d get it wrong.

Chain-of-thought (COT) prompting refers to a few-shot learning method of eliciting output from an LLM that compels it to reason in a step-by-step way, and it was developed in part to help with this issue. This image from the original Wei et al. (2022) paper illustrates how:

Input and output examples for Standard and Chain-of-thought Prompting.
Source: arXiv.org

As you can see, the model’s performance improves because the exemplars show it intermediate reasoning steps — a chain of thought — which it then imitates in its own answer.

This technique isn’t just useful for arithmetic; it improves a model’s output on a variety of tasks, including commonsense and symbolic reasoning.

In a way, humans can be prompt engineered in the same fashion. You can often get better answers out of yourself or others through a deliberate attempt to reason slowly, step-by-step, so it’s not a terrible shock that a large model trained on human text would benefit from the same procedure.
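Concretely, the difference between the two prompt styles looks like this (exemplars adapted from the figure in the Wei et al. paper; the exact wording here is a paraphrase):

```python
# Standard few-shot prompting: the exemplar shows only the final answer.
standard_prompt = """Q: Roger has 5 tennis balls. He buys 2 cans of 3 balls each. How many balls does he have now?
A: 11

Q: The cafeteria had 23 apples. They used 20 and bought 6 more. How many do they have now?
A:"""

# Chain-of-thought prompting: the exemplar also shows the reasoning,
# which the model then imitates before giving its own answer.
cot_prompt = """Q: Roger has 5 tennis balls. He buys 2 cans of 3 balls each. How many balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 balls is 6 balls. 5 + 6 = 11. The answer is 11.

Q: The cafeteria had 23 apples. They used 20 and bought 6 more. How many do they have now?
A:"""
```

The only change is inside the exemplar: the same question, with worked reasoning added. That small edit is what nudges the model to reason step by step on the new question.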

The Ecosystem Around Generative AI

Though cutting-edge models are usually the stars of the show, the truth is advanced technologies aren’t worth much if you have to be deeply into the weeds to use them. Machine learning, for example, would surely be much less prevalent if tools like scikit-learn, TensorFlow, and Keras didn’t exist.

Though we’re still in the early days of LLMs, AutoGPT, and everything else we’ve discussed, we suspect the same basic dynamic will play out. Since it’s now clear that these models aren’t toys, people will begin building infrastructure around them that streamlines the process of training them for specific use cases, integrating them into existing applications, etc.

Let’s discuss a few efforts in this direction that are already underway.

Training and Education

Among the simplest parts of the emerging generative AI value chain is exactly what we’re doing now: talking about it in an informed way. Non-specialists will often lack the time, context, and patience required to sort the real breakthroughs from the hype, so putting together blog posts, tutorials, and reports that make this easier is a real service.

Making Foundation Models Available

"Foundation model" is a relatively new term for a large model trained on broad data that can be adapted to many downstream tasks. ChatGPT, for example, is not a foundation model. GPT-4 is the foundation model, and ChatGPT is a specialized application built on top of it (more on this shortly).

Companies like Anthropic, Google, and OpenAI can train these gargantuan models and then make them available through an API, so developers can access their preferred foundation model without building anything themselves.

This means we can move quickly to take advantage of their remarkable functionality, which wouldn't be possible if every company had to train its own from scratch.
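To give a flavor of what "access over an API" looks like, here's a sketch of the JSON payload a chat-style foundation-model API expects. The field layout follows OpenAI's chat format, but treat the model name and details as illustrative rather than definitive:

```python
import json

# Sketch of the request body for a chat-style foundation-model API.
# A system message sets the assistant's behavior; the user message
# carries the actual query. In practice you'd POST this to the
# provider's endpoint with your API key.

def build_chat_request(model: str, user_message: str,
                       system_prompt: str = "You are a helpful assistant.") -> str:
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    }
    return json.dumps(payload)

request_body = build_chat_request("gpt-4", "Summarize MLOps in one sentence.")
```

The point is how little is required: a model name and a list of messages, rather than a data center and a training pipeline.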

Building Applications Around Specific Use Cases

One of the most striking properties of models like ChatGPT is how amazingly general they are. They are capable of "…generating functioning web apps with just a few prompts, writing Spanish-language children's stories about the blockchain in the style of Dr. Seuss, [and] opining on the virtues and vices of major political figures", to name but a few examples.

General-purpose models often have to be fine-tuned to perform better on a specific task, especially if they're doing something tricky like summarizing medical documents with lots of obscure vocabulary. Alas, there is a tradeoff here: in most cases a fine-tuned model will afterward be less useful for generic tasks.

The issue, however, is that you need a fair bit of technical skill to set up a fine-tuning pipeline, and a fair bit of elbow grease to assemble the few hundred examples a model needs in order to be fine-tuned. Though this is much simpler than training a model in the first place, it is still far from trivial, and we expect that there will soon be services aimed at making it much more straightforward.
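As a rough illustration, here's how those few hundred examples might be assembled into JSONL, the one-example-per-line format many fine-tuning services (OpenAI's among them) accept. The medical-abbreviation examples and the exact field names are assumptions you'd check against your provider's documentation:

```python
import json

# Sketch of packaging fine-tuning examples as JSONL. Each line is one
# training example in a chat-style format; a real dataset would have
# hundreds of these, curated and reviewed by domain experts.

examples = [
    ("What does 'PRN' mean on a prescription?", "Take as needed."),
    ("What does 'BID' mean on a prescription?", "Take twice a day."),
]

def to_jsonl(pairs) -> str:
    lines = []
    for question, answer in pairs:
        record = {"messages": [
            {"role": "user", "content": question},
            {"role": "assistant", "content": answer},
        ]}
        lines.append(json.dumps(record))
    return "\n".join(lines)

jsonl = to_jsonl(examples)
```

Most of the "elbow grease" mentioned above lives in curating those pairs, not in the serialization itself.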

LLMOps and Model Hubs

We'd venture to guess you've heard of machine learning, but you might not be familiar with the term "MLOps". "Ops" means "operations", and it refers to all the things you have to do to use a machine learning model besides just training it. Once a model has been trained it has to be monitored, for example, because its performance can quietly degrade as the data it sees in production drifts away from the data it was trained on.

The same will be true of LLMs. You’ll need to make sure that the chatbot you’ve deployed hasn’t begun abusing customers and damaging your brand, or that the deep learning tool you’re using to explore new materials hasn’t begun to spit out gibberish.

Another phenomenon from machine learning we think will be echoed in LLMs is the existence of “model hubs”, which are places where you can find pre-trained or fine-tuned models to use. There certainly are carefully guarded secrets among technologists, but on the whole, we’re a community that believes in sharing. The same ethos that powers the open-source movement will be found among the teams building LLMs, and indeed there are already open-sourced alternatives to ChatGPT that are highly performant.

Looking Ahead

As they're so fond of saying on Twitter, "ChatGPT is just the tip of the iceberg." It's already begun transforming contact centers, boosting productivity among lower-skilled workers while reducing employee turnover, but research into even better tools is racing ahead.

Frankly, it can be enough to make your head spin. If LLMs and generative AI are things you want to incorporate into your own product offering, you can skip the heady technical stuff and go straight to letting Quiq do it for you. The Quiq conversational AI platform is a best-in-class product suite that makes it much easier to utilize these technologies. Schedule a demo to see how we can help you get in on the AI revolution.