4 Benefits of Using AI Assistants in the Retail Industry

Artificial intelligence (AI) has been making remarkable strides in recent months. Owing to the release of ChatGPT in November of 2022, a huge amount of attention has been on large language models, but the truth is, there have been similar breakthroughs in computer vision, reinforcement learning, robotics, and many other fields.

In this piece, we’re going to focus on how these advances might contribute specifically to the retail sector.

We’ll start with a broader overview of AI, then turn to how AI-based tools are making targeted advertisements, personalized offers, hiring decisions, and other parts of retail substantially easier.

What are AI Assistants in Retail?

Artificial intelligence is famously difficult to define precisely, but for our purposes, you can think of it as any attempt to get intelligent behavior from a machine. This could involve something relatively straightforward, like building a linear regression model to predict future demand for a product line, or something far more complex, like creating neural networks able to quickly spit out multiple ideas for a logo design based on a verbal description.
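To make the first of those examples concrete, here’s a minimal sketch of demand forecasting with a hand-rolled linear regression. The monthly sales figures are invented for illustration; a real pipeline would pull them from your sales database and likely use a proper library, but the core idea fits in a few lines of plain Python:

```python
# Hypothetical monthly unit sales for a product line (months 1-6).
months = [1, 2, 3, 4, 5, 6]
units_sold = [120, 135, 150, 160, 178, 190]

# Ordinary least squares by hand: fit demand ~ slope * month + intercept.
n = len(months)
mx = sum(months) / n
my = sum(units_sold) / n
slope = sum((x - mx) * (y - my) for x, y in zip(months, units_sold)) / sum(
    (x - mx) ** 2 for x in months
)
intercept = my - slope * mx

def forecast(month):
    """Project demand for a future month from the fitted trend line."""
    return slope * month + intercept

print(round(forecast(7)))  # projected demand for month 7
```

If the trend holds, the model projects continued growth for month 7; with more history you’d add seasonality, promotions, and other features.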

AI assistants are a little different and specifically require building agents capable of carrying out sequences of actions in the service of a goal. The field of AI is some 70 years old now and has been making remarkable strides over the past decade, but building robust agents remains a major challenge.

It’s anyone’s guess as to when we’ll have the kinds of agents that could successfully execute an order like “run this e-commerce store for me”, but there’s nevertheless been enough work for us to make a few comments about the state of the art.

What are the Ways of Building AI Assistants?

On its own, a model like ChatGPT can (sometimes) generate working code and (often) generate correct API calls. But as things stand, a human being still needs to utilize this code for it to do anything useful.

Efforts are underway to remedy this situation by making models able to use external tools. Auto-GPT, for example, combines an LLM and a separate bot that repeatedly queries it. Together, they can take high-level tasks and break them down into smaller, achievable steps, checking off each as it works toward achieving the overall objective.
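The control flow behind tools like Auto-GPT can be caricatured in a few lines. Everything below is a stand-in (the `plan` function fakes what a real LLM call would return); it only illustrates the plan-then-execute loop, not any actual Auto-GPT internals:

```python
def plan(goal):
    # Stand-in for an LLM call that decomposes the goal into sub-steps.
    return [f"step {i} of '{goal}'" for i in range(1, 4)]

def run_agent(goal):
    completed = []
    for step in plan(goal):       # a real loop would re-plan after each result
        completed.append(step)    # stand-in for executing the step via a tool call
    return completed

print(run_agent("re-order low-stock items"))
```

Real agent frameworks add the hard parts: tool APIs, memory, error recovery, and re-planning when a step fails.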

AssistGPT and SuperAGI are similar endeavors, but they’re better able to handle “multimodal” tasks, i.e. those that also involve manipulating images or sounds rather than just text.

The above is a fairly cursory examination of building AI agents, but it’s not difficult to see how the retail establishments of the future might use them. You can imagine agents that track inventory and re-order crucial items when they get low, that keep an eye on sales figures and create reports based on their findings (perhaps even using voice synthesis to actually deliver those reports), or that create customized marketing campaigns, generating their own text, images, and A/B tests to find the highest-performing strategies.

What are the Advantages of Using AI in Retail Business?

Now that we’ve talked a little bit about how AI and AI assistants can be used in retail, let’s spend some time talking about why you might want to do this in the first place. What, in other words, are the big advantages of using AI in retail?

1. Personalized Marketing with AI

People can’t buy your products if they don’t know what you’re selling, which is why marketing is such a big part of retail. For its part, marketing has long been a future-oriented business, interested in leveraging the latest research from psychology or economics on how people make buying decisions.

A kind of holy grail for marketing is making ultra-precise, bespoke marketing efforts that target specific individuals. The kind of messaging that would speak to a childless lawyer in a big city won’t resonate the same way with a suburban mother of five, and vice versa.

The problem, of course, is that there’s just no good way at present to do this at scale. Even if you had everything you needed to craft the ideal copy for both the lawyer and the mother, it’s exceedingly difficult to have human beings do this work and make sure it ends up in front of the appropriate audience.

AI could, in theory, remedy this situation. With the rise of social media, it has become possible to gather stupendous amounts of information about people, grouping them into precise and fine-grained market segments–and, with platforms like Facebook Ads, you can make really targeted advertisements for each of these segments.

AI can help with the initial analysis of this data, i.e. looking at how people in different occupations or parts of the country differ in their buying patterns. But with advanced prompt engineering and better LLMs, it could also help in actually writing the copy that induces people to buy your products or services.

And it doesn’t require much imagination to see how AI assistants could take over quite a lot of this process. Much of the required information is already available, meaning that an agent would “just” need to be able to build simple models of different customer segments, and then put together a prompt that generates text that speaks to each segment.
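As a rough sketch of what that “just” hides, segment-aware prompt construction can be as simple as filling a template from a small customer-segment model. The segments, traits, and wording below are all invented for illustration; a real system would derive them from analytics data:

```python
# Invented customer segments; real traits would come from your own data.
segments = {
    "urban_professional": {"tone": "concise", "hook": "saving time"},
    "suburban_parent": {"tone": "warm", "hook": "family value"},
}

def build_prompt(segment_name, product):
    """Assemble an LLM prompt tailored to one customer segment."""
    traits = segments[segment_name]
    return (
        f"Write ad copy for {product} in a {traits['tone']} tone, "
        f"emphasizing {traits['hook']}."
    )

print(build_prompt("urban_professional", "a meal-kit subscription"))
```

The resulting prompt would then be sent to an LLM, with one variant generated per segment.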

2. Personalized Offerings with AI

A related but distinct possibility is using AI assistants to create bespoke offerings. As with messaging, people will respond to different package deals; if you know how to put one together for each potential customer, there could be billions in profits waiting for you. Companies like Starbucks have been moving towards personalized offerings for a while, but AI will make it much easier for other retailers to jump on this trend.

We’ll illustrate how this might work with a fictional example. Let’s say you’re running a toy company, and you’re looking at data for Angela and Bob. Angela is an occasional customer, mostly making purchases around the holidays. When she created her account she indicated that she doesn’t have children, so you figure she’s probably buying toys for a niece or nephew. She’s not a great target for a personalized offer, unless perhaps it’s a generic 35% discount around Christmas time.

Bob, on the other hand, buys fresh trainsets from you on an almost weekly basis. He more than likely has a son or daughter who’s fascinated by toy machines, and you have customer-recommendation algorithms trained on many purchases indicating that parents who buy the trains also tend to buy certain Lego sets. So, next time Bob visits your site, your AI assistant can offer him a personalized discount on Lego sets.
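The “customers who buy X also tend to buy Y” logic underneath this is often just pair-counting over order histories. Here’s a toy version with made-up orders; production recommenders use far more sophisticated models, but the intuition is the same:

```python
from collections import Counter
from itertools import combinations

# Invented order histories (each order is a set of purchased items).
orders = [
    {"train set", "lego castle"},
    {"train set", "lego castle", "plush bear"},
    {"train set", "puzzle"},
]

# Count how often each pair of items is bought together.
pair_counts = Counter()
for order in orders:
    for pair in combinations(sorted(order), 2):
        pair_counts[pair] += 1

def co_purchased_with(item):
    """Items most often bought alongside `item`, with their counts."""
    return [
        (a if b == item else b, n)
        for (a, b), n in pair_counts.most_common()
        if item in (a, b)
    ]

print(co_purchased_with("train set"))
```

For Bob, the top co-purchase with train sets would surface as the natural candidate for a personalized discount.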

Maybe he bites this time, maybe he doesn’t, but you can see how being able to dynamically create offerings like this would help you move inventory and boost individual customer satisfaction a great deal. AI can’t yet totally replace humans in this kind of process, but it can go a long way toward reducing the friction involved.

3. Smarter Pricing

The scenario we just walked through is part of a broader phenomenon of smart pricing. In economics, there’s a concept known as “price discrimination”, which involves charging a person roughly what they’re willing to pay for an item. There may be people who are interested in buying your book for $20, for example, but others who are only willing to pay $15 for it. If you had a way of changing the price to match what a potential buyer was willing to pay for it, you could make a lot more money (assuming that you’re always charging a price that at least covers printing and shipping costs).
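A quick back-of-the-envelope calculation shows why price discrimination is attractive. With hypothetical willingness-to-pay figures for that $20 book, charging each buyer individually beats any single flat price:

```python
# Hypothetical willingness-to-pay for a book across ten shoppers.
willingness = [20, 20, 18, 15, 15, 14, 12, 22, 25, 16]
unit_cost = 8  # printing + shipping

def profit_at(price):
    """Profit from a single flat price: only shoppers willing to pay it buy."""
    buyers = sum(1 for w in willingness if w >= price)
    return buyers * (price - unit_cost)

# Best achievable flat price vs. perfect price discrimination.
best_flat = max(profit_at(p) for p in range(unit_cost, 26))
discriminated = sum(w - unit_cost for w in willingness)
print(best_flat, discriminated)
```

In this toy example the discriminated profit is nearly double the best flat-price profit, which is exactly the gap smarter pricing tools are chasing.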

The issue, of course, is that it’s very difficult to know what people will pay for something–but with more data and smarter AI tools, we can get closer. This will have the effect of simultaneously increasing your market (by bringing in people who weren’t quite willing to make a purchase at a higher price) and increasing your earnings (by facilitating many sales that otherwise wouldn’t have taken place).

More or less the same abilities will also help with inventory more generally. If you sell clothing, you probably have a clearance rack for items that are out of season, but how much should you discount these items? Some people might be fine paying almost full price, while others might need to see a “60% off” sticker before moving forward. With AI, it’ll soon be possible to adjust such discounts in real-time to make sure you’re always doing brisk business.

4. AI and Smart Hiring

One place where AI has been making real inroads is in hiring. It seems like we can’t listen to any major podcast today without hearing about some hiring company that makes extensive use of natural language processing and similar tools to find the best employees for a given position.

Our prediction is that this trend will only continue. As AI becomes increasingly capable, eventually it will be better than any but the best hiring managers at picking out talent; retail establishments, therefore, will rely on it more and more to put together their sales force, design and engineering teams, etc.

Is it Worth Using AI in Retail?

Throughout this piece, we’ve sung the praises of AI in retail. But the truth is, there are still questions about how much sense it makes to leverage AI at the moment, given its expense and risks.

In this section, we’ll briefly go over some of the challenges of using AI in retail so you can have a fuller picture of how its advantages compare to its disadvantages, and thereby make a better decision for your situation.

The one that’s on everyone’s minds these days is the tendency of even powerful systems like ChatGPT to hallucinate incorrect information or to generate output that is biased or harmful. Finetuning and techniques like retrieval augmented generation can mitigate this somewhat, but you’ll still have to spend a lot of time monitoring and tinkering with the models to make sure that you don’t end up with a PR disaster on your hands.

Another major factor is the expense involved. Training a model on your own can cost millions of dollars, but even just hiring a team to manage an open-source model will likely set you back a fair bit (engineers aren’t cheap).

By far the safest and easiest way of testing out AI for retail is by using a white glove solution like the Quiq conversational CX platform. You can test out our customer-facing and agent-facing AI tools while leaving the technical details to us, and at far less expense than would be involved in hiring engineering talent.

Set up a demo with us to see what we can do for you.

AI is Changing Retail

From computer-generated imagery to futuristic AI-based marketing plans, retail won’t be the same with the advent of AI. This will be especially true once we have robust AI assistants able to answer customer questions, help them find clothes that fit, and offer precision discounts and offerings tailored to each individual shopper.

If you don’t want to get left behind, you’ll need to begin exploring AI as soon as possible, and we can help you do that. Check out our product or find a time to talk with us, today!

AI in Retail: 5 Ways Retailers Are Using AI Assistants

Businesses have always turned to the latest and greatest technology to better serve their customers, and retail is no different. From early credit card payment systems to the latest in online advertising, retailers know that they need to take advantage of new tools to boost their profits and keep shoppers happy.

These days, the thing that’s on everyone’s mind is artificial intelligence (AI). AI has had many, many definitions over the years, but in this article, we’ll mainly focus on the machine-learning and deep-learning systems that have captured the popular imagination. These include large language models, recommendation engines, basic AI assistants, etc.

In the world of AI in retail, you can broadly think of these systems as falling into one of two categories: “the ones that customers see”, and “the ones that customers don’t see.” In the former category, you’ll find innovations such as customer-facing chatbots and algorithms that offer hyper-personalized options based on shopping history. In the latter, you’ll find precision fraud detection systems and finely-tuned inventory management platforms, among other things.

We’ll cover each of these categories, in order. By the end of this piece, you’ll have a much better understanding of the ways retailers are using AI assistants and will be better able to think about how you want to use this technology in your retail establishment.

Let’s get going!

Using AI Assistants for Better Customer Experience

First, let’s start with AI that interacts directly with customers. The major ways in which AI is transforming the customer experience are through extreme levels of personalization, more “humanized” algorithms, and shopping assistants.

Personalization in Shopping and Recommendations

One of the most obvious ways of improving the customer experience is by tailoring that experience to each individual shopper. There’s just one problem: this is really difficult to do.

On the one hand, most of your customers will be new to you, people about whom you have very little information and whose preferences you have no good way of discovering. On the other, there are the basic limitations of your inventory. If you’re a brick-and-mortar establishment you have a set number of items you can display, and it’s going to be pretty difficult for you to choose them in a way that speaks to each new customer on a personal level.

For a number of reasons, AI has been changing this state of affairs for a while now, and holds the potential to change it much more in the years ahead.

A key part of this trend is recommendation engines, which have gotten very good over the past decade or so. If you’ve ever been surprised by YouTube’s ability to auto-generate a playlist that you really enjoyed, you’ve seen this in action.

Recommendation engines can only work well when there is a great deal of customer data for them to draw on. As more and more of our interactions, shopping, and general existence have begun to take place online, there has arisen a vast treasure trove of data to be analyzed. In some situations, recommendation engines can utilize decades of shopping experience, public comments, reviews, etc. in making their recommendations, which means a far more personalized shopping experience and an overall better customer experience.
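Under the hood, many recommenders start from something as simple as user-to-user similarity over purchase or rating vectors. A minimal cosine-similarity sketch, with invented shoppers and ratings, looks like this:

```python
import math

# Invented user -> genre rating vectors derived from purchase history.
ratings = {
    "ana":  {"sci-fi": 5, "fantasy": 4, "cooking": 1},
    "ben":  {"sci-fi": 4, "fantasy": 5, "cooking": 0},
    "cara": {"sci-fi": 0, "fantasy": 1, "cooking": 5},
}

def cosine(u, v):
    """Cosine similarity between two sparse rating vectors."""
    keys = set(u) | set(v)
    dot = sum(u.get(k, 0) * v.get(k, 0) for k in keys)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv)

def most_similar(user):
    """The other user whose tastes most resemble `user`'s."""
    others = [(cosine(ratings[user], ratings[o]), o) for o in ratings if o != user]
    return max(others)[1]

print(most_similar("ana"))
```

Once the most similar users are found, items they bought (that the target user hasn’t) become recommendation candidates.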

What’s more, advances in AR and VR are making it possible to personalize even more of these experiences. There are platforms now that allow you to upload images of your home to see how different pieces of furniture will look, or to see how clothes fit you without the need to try them on first.

We expect that this will continue, especially when combined with smarter printing technology. Imagine getting a 3D-printed sofa made especially to fit in that tricky corner of your living room, or flipping through a physical magazine with advertisements that are tailored to each individual reader.

Humanizing the Machines

Next, we’ll talk about various techniques for making the algorithms and AI assistants we interact with more convincingly human. Admittedly, this isn’t terribly important at the present moment. But as more of our shopping and online activity comes to be mediated by AI, it’ll be important for them to sound empathic, supportive, and attuned to our emotions.

The two big ways this is being pursued at the moment are chatbots and voice AI.

Chatbots, of course, will probably be familiar to you already. ChatGPT is inarguably the most famous example, but you’ve no doubt interacted with many (much simpler) chatbots via online retailers or contact centers.

In the ancient past, chatbots were largely “rule-based”, meaning they were far less flexible and far less capable of passing as human. With the ascendancy of the deep learning paradigm, however, we now have chatbots that are able to tutor you in chemistry, translate between dozens of languages, help you write code, answer questions about company policies, and even file simple tickets for contact center agents.

Naturally, this same flexibility also means that retail managers must tread lightly. Chatbots are known to confidently hallucinate incorrect information, to become abusive, or to “help” people with malicious projects, like building weapons or computer viruses.

Even leaving aside the technical challenges of implementation, you have to carefully monitor your chatbots to make sure they’re performing as expected.

Then, there’s voice-based AI. Computers have been synthesizing speech for many years, but it hasn’t been until recently that they’ve become really good at it. Though you can usually tell that a computer is speaking if you listen very carefully, it’s getting harder and harder all the time. We predict that, in the not-too-distant future, you’ll simply have no idea whether it’s a human or a machine on the other end of the line when you call to return an item or get store hours.

But computers have also gotten much better at the other side of voice-based AI, speech recognition. Software like otter.ai, for example, is astonishingly accurate when generating transcriptions of podcast episodes or conversations, even when unusual words are used.

Taken together, advances in both speech synthesis and speech recognition paint a very compelling picture of how the future of retail might unfold. You can imagine walking into a Barnes & Noble in the year 2035 and having a direct conversation with a smart speaker or AI assistant. You’ll tell it what books you’ve enjoyed in the past, it’ll query its recommendation system to find other books you might like, and it’ll speak to you in a voice that sounds just like a human’s.

You’ll be able to ask detailed questions about the different books’ content, and it’ll be able to provide summaries, discuss details with you, and engage in an unscripted, open-ended conversation. It’ll also learn more about you over time, so that eventually it’ll be as though you have a friend that you go shopping with whenever you buy new books, clothing, etc.

Shopping Assistants and AI Agents

So far, we’ve confined our conversation specifically to technologies like large language models and conversational AI. But one thing we haven’t spent much time on yet is the possibility of creating agents in the future.

An agent is a goal-directed entity, one able to take an instruction like “Make me a reservation at an Italian restaurant” and decompose the goal into discrete steps, performing each one until the task is completed.

With clever enough prompt engineering, you can sort of get agent-y behavior out of ChatGPT, but the truth is, the work of building advanced AI agents has only just begun. Tools like AutoGPT and LangChain have made a lot of progress, but we’re still a ways away from having agents able to reliably do complex tasks.

It’s not hard to see how different retail will be when that day arrives, however. Eventually, you may be outsourcing a lot of your shopping to AI assistants, who will make sure the office has all the pens it needs, you’ve got new science fiction to read, and you’re wearing the latest fashion. Your assistant might generate new patterns for t-shirts and have them custom-printed; if LLMs get good enough, they’ll be able to generate whole books and movies tuned to your specific tastes.

Using AI Assistants to Run A Safer, Leaner Operation

Now that we’ve covered the ways AI assistants will impact the things customers can see, let’s talk about how they’ll change the things customers don’t see.

There are lots of moving parts in running a retail establishment. If you’ve got ~1,000 items on display in the front, there are probably several thousand more items in a warehouse somewhere, and all of that has to be tracked. What’s more, there’s a constant process of replenishing your supply, staying on top of new trends, etc.

All of this will also be transformed by AI, and in the following sections, we’ll talk about a few ways in which this could happen.

Fraud Detection and Prevention

Fraud, unfortunately, is a huge part of modern life. There’s an entire industry of people buying and selling personal information for nefarious purposes, and it’s the responsibility of anyone trafficking in that information to put safeguards in place.

That includes a large number of retail establishments, which might keep data related to a customer’s purchases, their preferences, and (of course) their actual account and credit card numbers.

This isn’t the place to get into a protracted discussion of cybersecurity, but much of fraud detection relies on AI, so it’s fair game. Fraud detection techniques range from the fairly basic (flagging transactions that are much larger than usual or happen in an unusual geographic area) to the incredibly complex (training powerful reinforcement learning agents that constantly monitor network traffic).
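The “fairly basic” end of that spectrum is easy to sketch: flag any transaction that sits several standard deviations above a customer’s historical spend. The transaction history below is invented, and real systems combine many more signals, but this is the core of a simple anomaly check:

```python
from statistics import mean, stdev

# Invented history of one customer's recent transaction amounts.
history = [42.0, 55.5, 38.0, 61.0, 47.5, 52.0, 44.0, 58.5]

def is_suspicious(amount, threshold=3.0):
    """Flag transactions more than `threshold` std devs above the norm."""
    mu, sigma = mean(history), stdev(history)
    return (amount - mu) / sigma > threshold

print(is_suspicious(500.0), is_suspicious(50.0))
```

A flagged transaction wouldn’t necessarily be blocked outright; typically it would trigger a second factor, like a confirmation text to the cardholder.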

As AI becomes more advanced, so will fraud detection. It’ll become progressively more difficult for criminals to steal data, and the world will be safer as a result. Of course, some of these techniques are also ones that can be used by the bad guys to defraud people, but that’s why so much effort is going into putting guardrails on new AI models.

Streamlining Inventory

Inventory management is an obvious place for optimization. Correctly forecasting what you’ll need and thereby reducing waste can have a huge impact on your bottom line, which is why there are complex branches of mathematics aimed at modeling these domains.

And – as you may have guessed – AI can help. With machine learning, extremely accurate forecasts can be made of future inventory requirements, and once better AI agents have been built, they may even be able to automate the process of ordering replacement materials.

Forward-looking retail managers will need to keep an eye on this space to fully utilize its potential.

AI Assistants and the Future of Retail

AI is changing a great many things. It’s already making contact center agents more effective and is being utilized by a wide variety of professionals, ranging from copywriters to computer programmers.

But the space is daunting, and there’s so much to learn about implementing, monitoring, and finetuning AI assistants that it’s hard to know where to start. One way to easily dip your toe in these deep waters is with the Quiq Conversational CX platform.

Our technology makes it easy to create customer-facing AI bots and similar tooling, which will allow you to see how AI can figure into your retail enterprise without hiring engineers and worrying about the technical details.

Schedule a demo with us today to get started!


How Scoped AI Ensures Safety in Customer Service

AI chat applications powered by Large Language Models (LLMs) have helped us reimagine what is possible in a new generation of AI computing.

Along with this excitement, there is also a fair share of concern and fear about the potential risks. Recent media coverage, such as this article from the New York Times, highlights how the safety measures of ChatGPT can be circumvented to produce harmful information.

To better understand the security risks of LLMs in customer service, it’s important we add some context and differentiate between “Broad AI” versus “Scoped AI”. In this article, we’ll discuss some of the tactics used to safely deploy scoped AI assistants in a customer service context.

Broad AI vs. Scoped AI: Understanding the Distinction

Scoped AI is designed to excel in a well-defined domain, guided and limited by a software layer that maintains its behavior within pre-set boundaries. This is in contrast to broad AI, which is designed to perform a wide range of tasks across virtually all domains.

Scoped AI and Broad AI answer questions fundamentally differently. With Scoped AI, the LLM is not used to determine the answer; it is used to compose a response from the resources given to it. Conversely, answers to questions in Broad AI are determined by the LLM and cannot be verified.

Broad AI simply takes a user message and generates a response from the LLM; there is no control layer outside of the LLM itself. Scoped AI is a software layer that applies many steps to control the interaction and enforce safety measures applicable to your company.

In the following sections, we’ll dig into a more detailed explanation of the steps.

Ensuring the Safety of Scoped AI in Customer Service

1. Inbound Message Filtering

Your AI should perform a semantic similarity search to recognize in-scope vs out-of-scope messages from a customer. Malicious characters and known prompt injections should be identified and rejected with a static response. Inbound message filtering is an important step in limiting the surface area to the messages expected from your customers.
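In production, this step would use embedding-based semantic similarity; the keyword version below is only a hedged sketch of the same decision structure, with invented patterns and topic lists:

```python
# Invented examples; real filters match against embeddings of known
# prompt-injection attempts and your expected customer intents.
BLOCKED_PATTERNS = ["ignore previous instructions", "system prompt"]
IN_SCOPE_TOPICS = {"order", "return", "shipping", "product", "account"}

def filter_inbound(message):
    """Route a message: reject attacks, deflect off-topic, pass the rest."""
    lowered = message.lower()
    if any(p in lowered for p in BLOCKED_PATTERNS):
        return "rejected"
    if not any(t in lowered for t in IN_SCOPE_TOPICS):
        return "out_of_scope"
    return "in_scope"

print(filter_inbound("Where is my order?"))
```

Rejected and out-of-scope messages would get a static response rather than ever reaching the LLM.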

2. Classifying Scope

LLMs possess strong Natural Language Understanding and Reasoning skills (NLU & NLR). An AI assistant should perform a number of classifications. Common classifications include the topic, user type, sentiment, and sensitivity of the message. These classifications should be specific to your company and the jobs of your AI assistant. A data model and rules engine should be used to apply your safety controls.
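The classify-then-apply-rules pattern can be sketched as follows. The classifier here is a trivial keyword stand-in for what would really be an LLM call, and the rule names are invented for illustration:

```python
def classify(message):
    # Stand-in for LLM-based classification of topic and sensitivity.
    lowered = message.lower()
    return {
        "topic": "billing" if "refund" in lowered else "general",
        "sensitive": any(w in lowered for w in ("password", "ssn")),
    }

def apply_rules(labels):
    """A tiny rules engine mapping classifications to safety controls."""
    if labels["sensitive"]:
        return "escalate_to_human"
    if labels["topic"] == "billing":
        return "billing_flow"
    return "default_flow"

print(apply_rules(classify("I want a refund")))
```

The key design point is that the routing decision lives in deterministic software, not in the LLM itself.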

3. Resource Integration

Once an inbound message is determined to be in-scope, company-approved resources should be retrieved for the LLM to consult. Common resources include knowledge articles, brand facts, product catalogs, buying guides, user-specific data, or defined conversational flows and steps.

Your AI assistant should support non-LLM-based interactions to securely authenticate the end user or access sensitive resources. Authenticating users and validating data are important safety measures in many conversational flows.

4. Verifying Responses

With a response in hand, the AI should verify the answer is in scope and on brand. Fact-checking and corroboration techniques should be used to ensure the information is derived from the resource material. An outbound message should never be delivered to a customer if it cannot be verified by the context your AI has on hand.
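One crude way to approximate this check is word overlap between the draft response and the retrieved resources; real systems use semantic fact-checking, but the gate works the same way. The resource text and threshold below are invented:

```python
# Invented retrieved resource for this turn of the conversation.
resources = ["Returns are accepted within 30 days of purchase."]

def is_grounded(response, threshold=0.7):
    """Crude corroboration: share of response words found in the resources."""
    source_words = set(" ".join(resources).lower().split())
    words = response.lower().split()
    hits = sum(1 for w in words if w in source_words)
    return hits / max(len(words), 1) >= threshold

print(is_grounded("returns are accepted within 30 days"))
```

A response that fails the check would be regenerated or replaced with a safe fallback rather than sent to the customer.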

5. Outbound Message Filtering

Outbound message filtering tactics include: conducting prompt leakage analysis, semantic similarity checks, consulting keyword blacklists, and ensuring all links and contact information are in-scope of your company.
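Two of those tactics, keyword blacklists and link allowlisting, can be sketched in a few lines. The domains and blocked terms below are placeholders for whatever your company actually approves:

```python
import re

# Placeholder allowlist/blocklist; real deployments maintain these centrally.
ALLOWED_DOMAINS = {"example-store.com", "help.example-store.com"}
BLOCKLIST = {"guaranteed cure", "wire transfer"}

def filter_outbound(message):
    """Return True only if the message passes keyword and link checks."""
    lowered = message.lower()
    if any(term in lowered for term in BLOCKLIST):
        return False
    for domain in re.findall(r"https?://([^/\s]+)", lowered):
        if domain not in ALLOWED_DOMAINS:
            return False
    return True

print(filter_outbound("See https://example-store.com/returns for details"))
```

Messages that fail any check would be suppressed or rewritten before delivery.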

6. Safety Monitoring and Analysis

Deploying AI safely also requires that you have mechanisms to capture and review completed conversations. Collecting user feedback, tracking resource usage, reviewing state changes, and clustering conversations should all be available to help you identify and reinforce the safety measures of your AI.

In addition, performing full conversation classifications will also allow you to identify emerging topics, confirm resolution rates, produce safety reports, and understand the knowledge gaps of your AI.

Other Resources

At Quiq, we actively monitor and endorse the OWASP Top 10 for Large Language Model Applications. This guide is provided to help promote secure and reliable AI practices when working with LLMs. We recommend companies exploring LLMs and evaluating AI safety consult this list to help navigate their projects.

Final Thoughts

By safely leveraging LLM technology through a Scoped AI software layer, CX leaders can:

1. Elevate Customer Experience
2. Boost Operational Efficiency
3. Enhance Decision Making
4. Ensure Consistency and Compliance

Reach out to sales@quiq.com to learn how Quiq is helping companies improve customer satisfaction and drive efficiency at the same time.

What is an AI Assistant for Retail?

Over the past few months, we’ve had a lot to say about artificial intelligence, its new frontiers, and the ways in which it is changing the customer service industry.

A natural extension of this analysis is looking at the use of AI in retail. That is our mission today. We’ll look at how techniques like natural language processing and computer vision will impact retail, along with some of the benefits and challenges of this approach.

Let’s get going!

How is AI Used in Retail?

AI is poised to change retail, as it is changing many other industries. In the sections that follow, we’ll talk through three primary AI technologies that are driving these changes, namely natural language processing, computer vision, and machine learning more broadly.

Natural Language Processing

Natural language processing (NLP) refers to a branch of machine learning that attempts to work with spoken or written language algorithmically. Together with computer vision, it is one of the best-researched and most successful attempts to advance AI since the field was founded some seven decades ago.

Of course, these days the main NLP applications everyone has heard of are large language models like ChatGPT. This is not the only way AI assistants will change retail, but it is a big one, so that’s where we’ll start.

An obvious place to use LLMs in retail is with chatbots. Some customer interactions involve very specific questions that need to be handled by a human customer service agent, but many are fairly banal, consisting of things like “How do I return this item?” or “Can you help me unlock my account?” For these sorts of issues, today’s chatbots are already powerful enough to help in most situations.

A related use case for AI in retail is asking questions about specific items. A customer might want to know what fabric an article of clothing is made out of or how it should be cleaned, for example. An out-of-the-box model like ChatGPT won’t be able to help much. But if you’ve used a service like Quiq’s conversational CX platform, it’s possible to finetune an LLM on your specific documentation. Such a model will be able to help customers find the answers they need.
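Whatever the production mechanism (finetuning, retrieval, or both), the core idea of answering from your own documentation can be sketched as a retrieval step. The word-overlap scoring below is a deliberately crude stand-in for the embedding-based search real platforms use, and the documentation snippets are invented:

```python
# Toy documentation snippets; real deployments index far more content.
docs = {
    "fabric": "This jacket is 100% organic cotton.",
    "care": "Machine wash cold and tumble dry low.",
}

def retrieve(question):
    """Pick the doc snippet sharing the most words with the question."""
    q_words = set(question.lower().split())
    scores = {
        key: len(q_words & set(text.lower().split()))
        for key, text in docs.items()
    }
    return docs[max(scores, key=scores.get)]

print(retrieve("what is this jacket made of"))
```

The retrieved snippet would then be handed to the LLM as context, so its answer is grounded in your documentation rather than its training data.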

These use cases are all centered around text-based interactions, but algorithms are getting better and better at both speech recognition and speech synthesis. You’ve no doubt had the distinct (dis)pleasure of interacting with an automated system that sounded very artificial and lacked the flexibility to actually help you very much; but someday soon, you may not be able to tell from a short conversation whether you were talking to a human or a machine.

This may cause a certain amount of concern over technological unemployment. If chatbots and similar AI assistants are doing all this, what will be left for flesh-and-blood human workers? Frankly, it’s too early to say, but the evidence so far suggests that not only is AI not making us obsolete, it’s actually making workers more productive and less prone to burnout.

Computer Vision

Computer vision is the other major triumph of machine learning. CV algorithms have been created that can recognize faces, recognize thousands of different types of objects, and even help with steering autonomous vehicles.

How does any of this help with retail?

We already hinted at one use case in the previous paragraph, i.e. automatically identifying different items. This has major implications for inventory management, but when paired with technologies like virtual reality and augmented reality, it could completely transform the ways in which people shop.

Many platforms already offer the ability to see furniture and similar items in a customer’s actual living space, and there are efforts underway to build tools that automatically size customers so they know exactly which clothes to try on.

CV is also making it easier to gather and analyze different metrics crucial to a retail enterprise’s success. Algorithms can watch customer foot traffic to identify potential hotspots, meaning that these businesses can figure out which items to offer more of and which to cut altogether.

Machine Learning

As we stated earlier, both natural language processing and computer vision are types of machine learning. We gave them their own sections because they’re so big and important, but they’re not the only ways in which machine learning will impact retail.

Another way is with increasingly personalized recommendations. If you’ve ever taken Netflix’s or Spotify’s advice as to what entertainment you should consume next, then you’ve already interacted with a recommendation engine. But with more data and smarter algorithms, personalization will become much more, well, personalized.

In concrete terms, this means it will become easier and easier to analyze a customer’s past buying history to offer them tailor-made solutions to their problems. Retail is all about consumer satisfaction, so this is poised to be a major development.
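To make this less abstract, here’s a minimal sketch of how a recommender built on past buying history might work. The data, function names, and similarity measure are all hypothetical illustrations; production recommendation engines use far richer signals and far more sophisticated models.

```python
import math

# Toy purchase history: customer -> set of items bought (hypothetical data)
purchases = {
    "alice": {"concert_ticket", "t_shirt", "poster"},
    "bob": {"concert_ticket", "t_shirt", "hat"},
    "carol": {"concert_ticket", "hat"},
}

def cosine_similarity(item_a, item_b, history):
    """Score two items by how often the same customers buy both."""
    buyers_a = {c for c, items in history.items() if item_a in items}
    buyers_b = {c for c, items in history.items() if item_b in items}
    if not buyers_a or not buyers_b:
        return 0.0
    return len(buyers_a & buyers_b) / math.sqrt(len(buyers_a) * len(buyers_b))

def recommend(customer, history, top_n=2):
    """Rank items the customer hasn't bought by similarity to items they have."""
    owned = history[customer]
    all_items = set().union(*history.values())
    scores = {
        candidate: sum(cosine_similarity(candidate, item, history) for item in owned)
        for candidate in all_items - owned
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("carol", purchases))  # -> ['t_shirt', 'poster']
```

Here the engine notices that people who bought concert tickets and hats also tend to buy t-shirts, so that’s the first thing it suggests to Carol.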

Machine learning has long been used for inventory management, demand forecasting, etc., and the role it plays in these efforts will only grow with time. Having more data will mean being able to make more fine-grained predictions. You’ll be able to start printing Taylor Swift t-shirts and setting up targeted ads as soon as people in your area begin buying tickets to her show next month, for example.
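As a toy illustration of the kind of demand forecasting described above, here’s a sketch that fits a simple linear trend to hypothetical weekly sales figures and extrapolates one week ahead. Real forecasting systems use far more sophisticated models and many more signals (seasonality, promotions, local events, and so on).

```python
def fit_trend(sales):
    """Ordinary least squares fit of y = a + b * t over t = 0..n-1."""
    n = len(sales)
    t_mean = (n - 1) / 2
    y_mean = sum(sales) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(sales))
    den = sum((t - t_mean) ** 2 for t in range(n))
    slope = num / den
    intercept = y_mean - slope * t_mean
    return intercept, slope

# Hypothetical weekly t-shirt sales: demand ramps up after tickets go on sale
weekly_tshirt_sales = [40, 55, 65, 80]

a, b = fit_trend(weekly_tshirt_sales)
next_week = a + b * len(weekly_tshirt_sales)  # extrapolate to week 5
print(next_week)  # -> 92.5
```

Even this crude trend line tells you to print more shirts than you sold last week; better data and better models just make that call earlier and more precisely.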

Where are AI Assistants Used in Retail?

So far, we’ve spoken in broad terms about the ways in which AI assistants will be used in retail. In these sections, we’ll get more specific and discuss some of the particular locations where these assistants can be deployed.

In Kiosks

Many retail establishments already have kiosks in place that let you swap change for dollars or skip the trip to the DMV. With AI, these will become far more adaptable and useful, able to help customers with a greater variety of transactions.

In Retail Apps

Mobile applications are an obvious place to use recommendations or LLM-based chatbots to help make a sale or get customers what they need.

In Smart Speakers

You’ve probably heard of Amazon’s Alexa, a voice assistant that lives in smart speakers and can play music for you or automate certain household tasks. It isn’t hard to imagine smart speakers being used in retail, especially as they get better. They’ll be able to help customers choose clothing, handle returns, or do any of a number of related tasks.

In Smart Mirrors

For more or less the same reason, AI-powered smart mirrors could have a major impact on retail. As computer vision improves, for example, these mirrors will get better at suggesting clothing that flatters different heights and builds.

What are the Benefits of Using AI in Retail?

The main reason that AI is being used more frequently in retail is that there are so many advantages to this approach. In the next few sections, we’ll talk about some of the specific benefits retail establishments can expect to enjoy from their use of AI.

Better Customer Experience and Engagement

These days, there are tons of ways to get access to the goods and services you need. What tends to separate one retail establishment from another is customer experience and customer engagement. AI can help with both.

We’ve already mentioned how much more personalized AI can make the customer experience, but you might also consider the impact of round-the-clock availability that AI makes possible.

Customer service agents will need to eat and sleep sometimes, but AI never will, which means that it’ll always be available to help a customer solve their problems.

More Selling Opportunities

Cross-selling and upselling are both terms that are probably familiar to you, and they represent substantial opportunities for retail outfits to boost their revenue.

With personalized recommendations, sentiment analysis, and similar machine-learning techniques, it will become much faster and easier to identify additional items that a customer might be interested in.

If a customer has already bought Taylor Swift tickets and a t-shirt, for example, perhaps they’d also like a fetching hat that goes along with their outfit. And if you’ve installed the smart mirrors we talked about earlier, AI will even be able to help them find the right size.

Leaner, More Efficient Operations

Inventory management is a never-ending concern in retail. It’s also one place where algorithmic solutions have been used for a long time. We think this trend will only continue, with operations becoming leaner and more responsive to changing market conditions.

All of this ultimately hinges on the use of AI. Better algorithms and more comprehensive data will make it possible to predict what people will want and when, meaning you don’t have to sit on inventory you don’t need and are less likely to run out of anything that’s selling well.

What are the Challenges of Using AI in Retail?

That being said, there are many challenges to using artificial intelligence in retail. We’ll cover a few of these now so you can decide how much effort you want to put into adopting AI.

AI Can Still Be Difficult to Use

To be sure, firing up ChatGPT and asking it to recommend an outfit for a concert doesn’t take very long. But this is a far cry from integrating a full-bore AI solution into your website or mobile applications. Serious technical expertise is required to train, finetune, deploy, and monitor advanced AI – whether that’s an LLM, a computer-vision system, or anything else – and you’ll need to decide whether you think you’ll get enough return to justify the investment.

Expense

And speaking of investment, it remains pretty expensive to utilize AI at any non-trivial scale. If you decide you want to hire an in-house engineering team to build a bespoke model, you’ll need a substantial budget to pay for both the training runs and the engineers’ salaries. Those salaries are still something you’ll have to account for even if you choose to build on top of an existing solution, because finetuning a model is far from easy.

One solution is to utilize an offering like Quiq. We have already created the custom infrastructure required to utilize AI in a retail setting, meaning you wouldn’t need a serious engineering force to get going with AI.

Bias, Abuse, and Toxicity

A perennial concern with using AI is that a model will generate output that is insulting, harmful, or biased in some way. For obvious reasons this is bad for retail establishments, so you’ll want to make sure that you both carefully finetune this behavior out of your models and continually monitor them in case their behavior changes in the future. Quiq’s platform is also designed to mitigate this risk.

AI and the Future of Retail

Artificial intelligence has long been expected to change many aspects of our lives, and in the past few years, it has begun delivering on that promise. From ultra-precise recommendations to full-fledged chatbots that help resolve complex issues, retail stands to benefit greatly from this ongoing revolution.

If you want to get in on the action but don’t know where to start, set up a time to check out the Quiq platform. We make it easy to utilize both customer-facing and agent-facing solutions, so you can build an AI-positive business without worrying about the engineering.


Top 7 AI Trends For 2024

The end of the year is generally a time that prompts reflection about the past. But as a forward-thinking organization, we’re going to use this period instead to think about the future.

Specifically, the future of artificial intelligence (AI). We’ve written a great deal over the past few months about all the ways in which AI is changing contact centers, customer service, and more. But the pioneers of this field do not stand still, and there will no doubt be even larger changes ahead.

This piece presents our research into the seven main AI advancements for 2024, and how we think they’ll matter for you.

Let’s dive in!

What are the 2024 Technology Trends in AI?

In the next seven sections, we’ll discuss what we believe are the major AI trends to look out for in 2024.

Bigger (and Better) Generative AI Models

Probably the simplest trend is that generative models will continue getting bigger. At billions of internal parameters, we already know that large language models are pretty big (it’s in the name, after all). But there’s no reason to believe that the research groups training such models won’t be able to continue scaling them up.

If you’re not familiar with the development of AI, it would be easy to dismiss this out of hand. We don’t get excited when Microsoft releases some new OS with more lines of code than we’ve ever seen before, so why should we care about bigger language models?

For reasons that remain poorly understood, bigger language models tend to mean better performance, in a way that doesn’t hold for traditional programming. Writing 10 times more Python doesn’t guarantee that an application will be better – it’s more likely to be the opposite, in fact – but training a model that’s 10 times bigger probably will get you better performance.

This is more profound than it might seem at first. If you’d shown me ChatGPT 15 years ago, I would’ve assumed that we’d made foundational progress in epistemology, natural language processing, and cognitive psychology. But, it turns out that you can just build gargantuan models and feed them inconceivable amounts of textual data, and out pops an artifact that’s able to translate between languages, answer questions, write excellent code, and do all the other things that have stunned the world since OpenAI released ChatGPT.

As things stand, we have no reason to think that this trend will stop next year. To be sure, we’ll eventually start running into the limits of the “just make it bigger” approach, but it seems to be working quite well so far.

This will impact the way people search for information, build software, run contact centers, handle customer service issues, and so much more.

More Kinds of Generative Models

The basic approach to building a generative model fits well with producing text, but it is not limited to that domain.

DALL-E, Midjourney, and Stable Diffusion are three well-known examples of image-generation models. Though these models sometimes still struggle with details like perspective, faces, and the number of fingers on a human hand, they’re nevertheless capable of jaw-dropping output.

Here’s an example created in ~5 minutes of tinkering with DALL-E 3:
[DALL-E 3 generated image]

As these image-generation models improve, we expect they’ll come to be used everywhere images are used – which, as you probably know, is a lot of places. YouTube thumbnails, murals in office buildings, dynamically created images in games or music videos, illustrations in scientific papers or books, initial design drafts for cars, consumer products, etc., are all fair game.

Now, text and images are the two major generative AI use cases with which everyone is familiar. But what about music? What about novel protein structures? What about computer chips? We may soon have models that design the chips used to train their successors, with different models synthesizing the music that plays in the chip fabrication plant.

Open Source vs. Closed Source Models

Concerns around automation and AI-specific existential risk aren’t new, but one major front that’s opened in that war concerns whether models should be closed source or open source.

“Closed source” refers to a paradigm in which a code base (or the weights of a generative model) is kept under lock and key, available only to the small team of engineers working on it. “Open source”, by contrast, is the antipodal philosophy that the best way to create safe, high-quality software is to disseminate the code far and wide, giving legions of people the opportunity to find and fix flaws in its design.

There are many ways in which this interfaces with the broader debate around generative AI. If emerging AI technologies truly present an existential threat, as the “doomers” claim, then releasing model weights is spectacularly dangerous. If you’ve built a model that can output the correct steps for synthesizing weaponized smallpox, for example, open-sourcing it would mean that any terrorist anywhere in the world could download and use it for that purpose.

The “accelerationists”, on the other hand, retort by saying that the basic dynamics of open-source systems hold for AI as they do for every other kind of software. Yes, making AI widely available means that some people will use it to harm others, but it also means that you’ll have far more brains working to create safeguards, guardrails, and sentinel systems able to thwart the aims of the wicked.

It’s still far too early to tell whether AI researchers will converge on the open- or closed-source approach, but we predict that this will continue to be a hotly-contested issue. Though it seems unlikely that OpenAI will soon release the weights for its best models, there will be competitor systems that are almost as good and that anyone can download, modify, and deploy. We also think there’ll be more leaks of weights, such as what happened with Meta’s LLaMa model in early 2023.

AI Regulation

For decades, debates around AI safety occurred in academic papers and obscure forums. But with the rise of LLMs, all that changed. It was immediately clear that they would be incredibly powerful, amoral tools, suitable for doing enormous good and enormous harm.

A consequence of this has been that regulators in the United States and abroad are taking notice of AI, and thinking about the kind of legal frameworks that should be established in response.

One manifestation of this trend was the parade of Congressional hearings that took place throughout 2023, with luminaries like Gary Marcus, Sam Altman, and others appearing before the federal government to weigh in on this technology’s future and likely impact.

On October 30th, 2023, the Biden White House issued an executive order meant to set the stage for new policies concerning dual-use foundation models. It gives the executive branch around a year to conduct a sweeping series of reports, with the ultimate aim being to create guidelines for industry as it continues developing powerful AI models.

The gears of government turn slowly, and we expect it will be some time before anything concrete comes out of this effort. Even when it does, questions about its long-term efficacy remain. How will it help to stop dangerous research in the U.S., for example, if China charges ahead without restraint? And what are we to do if some renegade group creates a huge compute cluster in international waters, usable by anyone, anywhere in the world wanting to train a model bigger than GPT-4?

These and other questions will have to be answered by lawmakers and could impact the way AI unfolds for the next century.

The Rise of AI Agents

We’ve written elsewhere about the many ongoing attempts to build AI systems – agents – capable of pursuing long-range goals in complex environments. For all that it can do, ChatGPT is unable to take a high-level instruction like “run this e-commerce store for me” and get it done successfully.

But that may change soon. Systems like Auto-GPT, AssistGPT, and SuperAGI are all attempts to augment existing generative AI models to make them better able to accomplish larger goals. As things stand, agents have a notable tendency to get stuck in unproductive loops or to otherwise arrive at a state they can’t get out of on their own. But we may only be a few breakthroughs away from having much more robust agents, at which point they’ll begin radically changing the economy and the way we live our lives.

New Approaches to AI

When people think about “AI”, they’re usually thinking of a machine learning or deep learning system. But these approaches, though they’ve been very successful, are but a small sample of the many ways in which intelligent machines could be built.

Neurosymbolic AI is another. It usually combines a neural network (such as the ones that power LLMs) with symbolic reasoning systems able to make arguments, weigh evidence, and do many of the other things we associate with thinking. Given the notable tendency of LLMs to hallucinate false information, neurosymbolic scaffolding could make them far more reliable and useful.

Causal AI is yet another. These AI systems are built to learn causal relationships in the world, such as the fact that dropping a glass on a hard surface will cause it to break. This, too, is a crucial part of what is missing from current AI systems.

Quantum Computing and AI

Quantum computing represents the emergence of the next great computational substrate. Whereas today’s “classical” computers exploit lightning-fast transistor operations, quantum computers are able to utilize quantum phenomena, such as entanglement and superposition, to solve problems that not even the grandest supercomputers can handle in less than a million years.

Naturally, researchers started thinking about applying quantum computing to artificial intelligence very early on, but it remains to be seen how useful it’ll be. Quantum computers excel at certain kinds of tasks, especially those involving combinatorics, optimization, and linear algebra. This last undergirds huge amounts of AI work, so it stands to reason that quantum computers will speed up at least some of it.

AI and the Future

It would appear as though the Pandora’s box of AI has been opened for good. Large language models are already changing many fields, from copywriting and marketing to customer service and hospitality – and they’ll likely change many more in the years ahead.

This piece has discussed a number of the most important AI industry trends to look out for in 2024, and it should help anyone interfacing with these technologies prepare for what may come.

Generative AI Privacy Concerns – Your Guide to the Current Landscape

Generative AI models, such as the large language model (LLM) ChatGPT and the image-generation tool DALL-E, are already having a major impact in places like marketing firms and contact centers. With their ability to create compelling blog posts, email blasts, YouTube thumbnails, and more, we believe they’re only going to become an increasingly integral part of the workflows of the future.

But for all their potential, there remain serious questions about the short- and long-term safety of generative AI. In this piece, we’re going to zero in on one particular constellation of dangers: those related to privacy.

We’ll begin with a brief overview of how generative AI works, then turn to various privacy concerns, and finish with a discussion of how these problems are being addressed.

Let’s dive in!

What is Generative AI (and How is it Trained)?

In the past, we’ve had plenty to say about how generative AI works under the hood. But many of the privacy implications of generative AI are tied directly to how these models are trained and how they generate output, so it’s worth briefly reviewing all of this theoretical material, for the sake of completeness and to furnish some much-needed context.

When an LLM is trained, it’s effectively fed huge amounts of text data from the internet, books, and similar sources of human-generated language. What it tries to do is predict how a sentence or paragraph will end based on the preceding words.

Let’s concretize this a bit. You probably already know some of these famous quotes:

  • “You must be the change you wish to see in the world.” (Mahatma Gandhi)
  • “You may say I’m a dreamer, but I’m not the only one.” (John Lennon)
  • “The only thing we have to fear is fear itself.” (Franklin D. Roosevelt)

What ChatGPT does is try to predict how each of these quotes ends based on everything that comes before. It’ll read “You must be the change you”, for example, and then try to predict “wish to see in the world.”

When the training process begins, the model will basically generate nonsense, but as it develops a better and better grasp of English (and other languages), it gradually becomes the remarkable artifact we know today.
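To see the spirit of this next-word objective in miniature, here’s a toy bigram model built on those same quotes. Real LLMs use neural networks conditioned on much longer contexts and vastly more data, but the prediction task is analogous.

```python
from collections import Counter, defaultdict

# Tiny corpus standing in for the web-scale text an LLM actually sees.
corpus = [
    "you must be the change you wish to see in the world",
    "you may say i am a dreamer but i am not the only one",
    "the only thing we have to fear is fear itself",
]

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often in training."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "only"
```

Having seen “the only” twice but “the change” and “the world” only once each, the model predicts “only” after “the”. Scaled up by many orders of magnitude, this same predict-the-continuation pressure is what teaches LLMs grammar, facts, and style.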

Generative AI Privacy Concerns

From a privacy perspective, two things about this process might concern us:

The first is what data are fed into the model, and the second is what kinds of output the models might generate.

We’ll have more to say about each of these in the next section, then cover some broader concerns about copyright law.

Generative AI and Sensitive Data

First, there’s real concern over the possibility that generative AI models have been shown what is usually known as “Personally Identifiable Information” (PII). This is data such as your real name, your address, etc., and can also include things like health records that might not have your name but which can be used to figure out who you are.

The truth is, we only have limited visibility into the data that LLMs are shown during training. Given how much of the internet they’ve ingested, it’s a safe bet that at least some sensitive information has been included. And even if a model hasn’t seen a particular piece of PII during training, there are myriad ways in which it can be exposed to it. You can imagine, for example, someone feeding customer data into an LLM to produce tailored content, not realizing that the data may be retained and folded into the model’s internal structure during future training.

There isn’t a great way at present to remove data from an LLM, and finetuning it in such a way that it never exposes that data in the future is something no one knows how to do yet.

The other major concern around sensitive data in the context of generative AI is that models will simply hallucinate allegations about people that damage their reputations and compromise their privacy. We’ve written before about the now-infamous case of law professor Jonathan Turley, who was falsely accused by ChatGPT of sexually harassing several of his students. We imagine that in the future there will be many more such fictitious scandals, potentially ones that are very damaging to the reputations of the accused.

Generative AI, Intellectual Property, and Copyright Law

There have also been questions about whether some of the data fed into ChatGPT and similar models might be in violation of copyright law. Earlier this year, in fact, a number of well-known writers leveled a suit against both OpenAI (the creators of ChatGPT) and Meta (the creators of LLaMa).

The suit claims that these teams trained their models on proprietary data contained in the works of authors like Michael Chabon, “without consent, without credit, and without compensation.” Similar charges have been made against Midjourney and Stability AI, both of whom have created AI-based image generation models.

These are rather thorny questions of jurisprudence. Though copyright law is a fairly sophisticated tool for dealing with various kinds of human conflicts, no one has ever had to deal with the implications of enormous AI models training on this much data. Only time will tell how the courts will ultimately decide, but if you’re using customer-facing or agent-facing AI tools in a place like a contact center, it’s at least worth being aware of the controversy.


Mitigating Privacy Risks from Generative AI

Now that we’ve elucidated the dimensions of the privacy concerns around generative AI, let’s spend some time talking about various efforts to address these concerns. We’ll focus primarily on data privacy laws, better norms around how data is collected and used, and the ways in which training can help.

Data Privacy Laws

First, and biggest, are attempts by different regulatory bodies to address data privacy issues with legislation. You’re probably already familiar with the European Union’s General Data Protection Regulation (GDPR), which puts numerous rules in place regarding how data can be gathered and used, including in advanced AI systems like LLMs.

Canada’s lesser-known Artificial Intelligence and Data Act (AIDA) mandates that anyone building a potentially disruptive AI system, like ChatGPT, must create guardrails to minimize the likelihood that their system will create biased or harmful output.

It’s not clear yet the extent to which laws like these will be able to achieve their objectives, but we expect that they’ll be just the opening salvo in a long string of legislative attempts to ameliorate the potential downsides of AI.

Robust Data Collection and Use Policies

There are also many things that private companies can do to address privacy concerns around data, without waiting for bureaucracies to catch up.

There’s too much to say about this topic to do it justice here, but we can make a few brief comments to guide you in your research.

One thing many companies are investing in is better anonymization techniques. Differential privacy, for example, is emerging as a promising way of allowing the collection of private data while anonymizing it enough to guard against an LLM accidentally exposing it at some point in the future.
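As a rough illustration of the idea behind differential privacy, here’s a sketch of the classic Laplace mechanism applied to a counting query over hypothetical transaction data. The function names and numbers are our own illustration, not any particular library’s API.

```python
import math
import random

def dp_count(values, predicate, epsilon=1.0):
    """Return an epsilon-differentially-private count.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so adding Laplace noise with scale
    1/epsilon satisfies epsilon-differential privacy.
    """
    true_count = sum(1 for v in values if predicate(v))
    # Sample Laplace(0, 1/epsilon) noise via inverse transform sampling.
    u = random.random() - 0.5
    noise = -(1 / epsilon) * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

# Hypothetical transaction amounts; release a noisy count of big spenders.
amounts = [12.5, 90.0, 45.0, 150.0, 8.0]
print(dp_count(amounts, lambda x: x > 50))  # a noisy estimate near the true count of 2
```

The released number is close enough to the truth to be useful in aggregate, but the injected noise means no one can confidently infer whether any single customer’s record was in the dataset; smaller values of epsilon trade accuracy for stronger privacy.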

Then, of course, there are myriad ways of securely storing data once you have it. This mostly boils down to keeping a tight lid on who is able to access private data – through encryption and a strict permissioning system, for example – and carefully monitoring what they do with it once they access it.

Finally, it helps to be as public as possible about your data collection and use policies. Make sure they’re published somewhere that anyone can read them. Whenever possible, give users the ability to opt out of data collection, if that’s what they want to do.

Better Training for Those Building and Using Generative AI

The last piece of the puzzle is simply to train your workforce about data collection, data privacy, and data management. Sound laws and policies won’t do much good if the actual people who are interacting with private data don’t have a solid grasp of your expectations and protocols.

Because there are so many different ways in which companies collect and use data, there is no one-size-fits-all solution we can offer. But you might begin by sending your employees this article, as a way of opening up a broader conversation about your future data-privacy practices.

Data Privacy in the Age of Generative AI

In all its forms, generative AI is a remarkable technology that will change the world in many ways. Like the printing press, gunpowder, fire, and the wheel, these changes will be both good and bad.

The world will need to think carefully about how to get as many of the advantages out of generative AI as possible while minimizing its risks and dangers.

A good place to start with this is by focusing on data privacy. Because this is a relatively new problem, there’s a lot of work to be done in establishing legal frameworks, company policies, and best practices. But that also means there’s an enormous opportunity as well, to positively shape the long-term trajectory of AI technologies.