Aakriti Bhargava of Revionics discusses demand-based optimisation at scale using AI
Retail businesses operate in increasingly competitive environments. In these dynamic markets, the ability to accurately predict how retail strategies around pricing, advertising, procurement, and other operational factors affect demand can help enterprises meet and maximise their financial and marketing goals.
New artificial intelligence (AI) and machine learning (ML) tools are changing the prediction game, but the pathway to achieving accurate results can be fraught with challenges: from training and deploying millions of demand models simultaneously, to dealing with sparsity and collinearity in the data, to the need to constantly adapt to an ever-evolving market.
Developing and implementing effective AI tools to achieve demand-based optimisation at scale requires solving for these problems, as well as a focus on accuracy, scalability, and extensibility.
Asking and answering the right questions
In the retail industry, category managers, pricing analysts, and people in related functional roles are continuously trying to answer questions such as:
· How will the new promotion I’m envisioning affect my sales?
· How effective is a particular kind of advertisement (e.g. a front-page newspaper ad, an end-cap store display, a digital leaderboard)?
· What prices should I set for a particular set of items to maximise profit?
· Where should I put this item in a store (assortment planning)?
· Who is my audience for this advertisement?
· How many items should I procure to meet demand without being overstocked?
Answering these questions will require an understanding of how different factors affect demand (and therefore business metrics such as revenue, profit, etc.), plus the ability to turn that understanding into decisions that meet your objectives.
While that may sound simple, getting to the right answers can be complicated, because retailers need to be able to adapt to market changes quickly. Retailers want the ability to customise for their business’s nuances, and they want to make sure that they are hitting their overall company revenue and profit goals.
Additionally, retailers want a tool that meets the above requirements for every product and every store, and want these outputs to be updated frequently. Finally, they also want explainability – to be able to understand and communicate all of this information themselves.
Doing this at scale for an entire assortment is a hard problem, particularly because merchants generally have a lot more products than they have specialists to manage decisions around those products optimally and accurately.
Imagine the thousands of products being sold in thousands of stores every day to millions of customers, the diversity of ways items could be promoted, the ever-changing external factors that affect demand, and all the new items being sold for the first time.
In this context, the need for AI and a robust framework to support optimisation becomes apparent.
How AI helps with retail decision-making
AI is available today in various forms. For instance, generative AI (GenAI) generates images or writes articles, while decision-making AI can learn to solve problems such as driving a car or optimising a supply chain.
This latter decision-making AI involves two parts: modeling the environment and optimising action decisions. In modeling, we define the different factors that explain our target, forming a mathematical representation of the real-world problem, and use historical data to simulate and predict the target under various conditions (via open-source libraries like TensorFlow).
In optimisation, we try to find the best solution, using the modeled environment’s predictions to minimise or maximise a given objective function (via Python libraries such as SciPy). To do this well, the AI framework needs to meet the objectives of accuracy, scalability, and extensibility.
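As an illustration of the two parts working together, the sketch below pairs a hypothetical constant-elasticity demand model (the modeling step) with a simple candidate search standing in for a full SciPy solver (the optimisation step). All parameter values here are made up for illustration:

```python
def predict_demand(price, ref_price=2.0, base_demand=100.0, elasticity=-1.8):
    """Modeling step: predict units sold at a given price using an
    illustrative constant-elasticity demand curve."""
    return base_demand * (price / ref_price) ** elasticity

def optimise_price(unit_cost, candidates):
    """Optimisation step: pick the candidate price that maximises predicted
    profit. (In practice a solver such as scipy.optimize would search a
    continuous price space instead of a fixed grid.)"""
    def profit(price):
        return (price - unit_cost) * predict_demand(price)
    return max(candidates, key=profit)

# Evaluate candidate prices from 1.00 to 4.90 in steps of 0.10
candidates = [round(1.0 + 0.1 * i, 2) for i in range(40)]
best = optimise_price(unit_cost=1.0, candidates=candidates)
```

With these made-up parameters the profit-maximising price lands near 2.25, between the 2.2 and 2.3 grid points; a continuous solver would recover it exactly.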
Targeting for accuracy
When a merchant predicts how many units they will sell, the closer this forecast is to the actual number (Figure 1), the better the business results (profit, revenue, etc.) that will be achieved.
For example, if they reduce the price on bananas expecting an increase in sales, but the increase is smaller than predicted, there will be a financial impact: the item may now be underpriced and overstocked.
Similarly, if demand for chocolate exceeds the prediction for Valentine’s Day, the merchant will miss out on potential sales.
Figure 1: Forecast vs Actuals; Mean absolute percentage error (MAPE). (Source: Revionics)
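The MAPE metric referenced in Figure 1 is straightforward to compute; a minimal sketch:

```python
def mape(actuals, forecasts):
    """Mean absolute percentage error: the average of |actual - forecast| / actual,
    expressed as a percentage. Zero-actual periods are skipped to avoid
    division by zero (other conventions exist)."""
    errors = [abs(a - f) / a for a, f in zip(actuals, forecasts) if a != 0]
    return 100.0 * sum(errors) / len(errors)

# e.g. mape([100, 200], [90, 220]) -> 10.0 (a 10% error in each period)
```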
How do we achieve accuracy?
The retail space has several nuances that set it apart from standard AI applications, where “off the shelf” AI tools can be used effectively.
Given the numerous product-store combinations, retail generally involves training millions of models at a granular level to capture the nuances in the data, which leads to sparsity and collinearity. AI/ML teams can overcome some of these challenges by:
● Using domain knowledge to fall back on when there is very little or no data, as may be the case for new product introductions. In this scenario, we could use something like a Bayesian hierarchical modeling framework that enables borrowing “information” from a less granular level, or rule-based heuristics that provide transparent guardrails around the AI logic.
● Identifying the different factors (seasonality, holiday, product availability, product lifecycle, cannibalisation, affinity, etc.) that could potentially affect demand, and jointly modeling these factors (Figure 2).
Note that a common alternative approach is to model these effects sequentially (“deseasonalise” before fitting elasticity) because it is simpler, but this introduces biases into the parameter estimates, which can lead to incorrect “optimal” decisions.
A common example is highly seasonal products, or products whose price changes tend to concentrate around holidays. Deseasonalising before computing elasticity would effectively attribute all of the lift to the holiday before letting elasticity explain the residuals. Joint modeling lets the models consider both effects together and find the range of likely explanations.
● Explainability: once the right modeling framework is in place, being able to explain its predictions builds user confidence and is critical for diagnosing cases where actuals differ substantially from forecasts.
Figure 2: A few different factors that could affect demand. (Source: Alexander Braylan, Revionics)
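The idea of borrowing “information” from a less granular level can be illustrated with a simple precision-weighted blend: a toy stand-in for a full Bayesian hierarchical model, not the actual Revionics method. The `prior_strength` parameter and all values are assumptions for illustration:

```python
def shrink_elasticity(item_estimate, item_n, category_estimate, prior_strength=50):
    """Blend a sparse item-level elasticity estimate toward the category-level
    estimate. With few observations the category value dominates; with many,
    the item's own data takes over (partial pooling)."""
    weight = item_n / (item_n + prior_strength)
    return weight * item_estimate + (1 - weight) * category_estimate

# A brand-new item with zero sales history falls back entirely to the
# category-level estimate:
new_item = shrink_elasticity(item_estimate=0.0, item_n=0, category_estimate=-1.5)
```

The same mechanism gives a smooth transition as data accumulates: at 50 observations (equal to `prior_strength`), the blend weights the item-level and category-level estimates equally.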
Solving for scalability
A scalable AI system in this context refers to the ability of the system to adapt to higher computational demands, whether through scaling up (enhancing the capacity of existing hardware or software) or scaling out (adding more hardware or distributed software instances), while processing larger amounts of data quickly and at reasonable cost.
Some of the obstacles often associated with scalability include the need to retrain all models frequently (e.g. weekly) to adapt quickly to dynamic market conditions, dealing with complex company-wide objectives, and simultaneously needing to make granular decisions for individual items.
For example, even if a merchant thinks they have done a good job pricing an item, they usually do not have the perspective or analytics to understand how it impacts their broader corporate objectives.
How do we achieve scalability?
Figure 3: An example of the architecture of a scalable AI system. (Source: Aakriti Bhargava.)
The architecture for a potentially scalable AI system is illustrated in Figure 3. This can be achieved with the following (and other) guidelines:
● Architect a system that can parallelise efficiently, built on infrastructure that can autoscale both vertically and horizontally.
○ Using open-source container orchestration systems (e.g. Kubernetes) and cloud-native scalable services, such as an enterprise data warehouse, synchronous and scalable messaging services, and other services for logging.
○ Defining and identifying smart auto scaling metrics
● Use lower-cost computing resources (e.g. Spot Virtual Machines) to reduce spend, while maintaining reliability through fallback options to on-demand instances when the need arises.
● To address the special nuances of retail AI discussed above, it is also important to layer proprietary techniques on top of common infrastructure patterns. For example:
○ Reducing the number of models that you run simultaneously by inheriting model parameter values from a less granular level.
○ Clustering product data into groups to parallelise uniform-sized jobs, so work runs efficiently and avoids long-tail jobs that drag on for long periods.
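One way to sketch the uniform-sized job idea is the standard longest-processing-time heuristic: sort workloads from largest to smallest and repeatedly assign the next one to the least-loaded batch. This is a generic load-balancing sketch, not the actual Revionics implementation, and the workload numbers are hypothetical:

```python
import heapq

def balance_jobs(workloads, n_batches):
    """Assign product workloads (e.g. row counts per product group) to batches
    so batch totals stay roughly uniform, avoiding a long-tail straggler job.
    Uses the longest-processing-time (LPT) greedy heuristic."""
    # Each heap entry is (total_load, batch_id, assigned_workloads); the unique
    # batch_id breaks ties so the lists are never compared.
    heap = [(0, i, []) for i in range(n_batches)]
    heapq.heapify(heap)
    for load in sorted(workloads, reverse=True):
        total, batch_id, items = heapq.heappop(heap)  # least-loaded batch
        items.append(load)
        heapq.heappush(heap, (total + load, batch_id, items))
    return [items for _, _, items in sorted(heap, key=lambda entry: entry[1])]

batches = balance_jobs([9, 7, 6, 5, 4, 3, 2], n_batches=3)
```

With these hypothetical workloads the three batch totals come out as 11, 12, and 13 units of work, versus a worst case of one batch receiving most of the load.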
Innovating for extensibility
Given the frequently changing market and every retailer’s unique needs and challenges, there is a constant need to innovate: you want to keep building new features, quickly, to help explain demand and make optimal decisions.
A good example is how retailers needed to quickly adapt to changes in consumer behavior during the Covid-19 pandemic.
How do we achieve extensibility?
Figure 4: An AI-generated image of various products made with building blocks, a metaphor for extensibility. (Source: Aakriti Bhargava using GenAI.)
Much like how children’s building blocks can be assembled into an array of imaginative structures, the foundational elements of AI, such as optimisation algorithms and tensor auto-differentiation, serve as versatile building blocks.
From these core components, we can construct a diverse spectrum of models, ranging from statistical time series models like autoregressive integrated moving average (ARIMA), to the neural networks that generated Figure 4.
Extensibility can be best achieved by:
● Architecting and building a modular system using robust design principles like separation of concerns, with minimal overlap between the functions of the individual units.
○ For example: separating the optimisation algorithms and financial logic from the demand models.
● Relying on well-maintained, open-source, third-party platforms such as TensorFlow.
● Employing good software development practices such as test-driven development (TDD) and continuous integration. Additionally, keeping principles in mind like DRY (Don’t Repeat Yourself), to reduce repetition of software patterns, and YAGNI (You Aren’t Gonna Need It), to avoid implementing features before they are needed, will help achieve better results.
Having a demand forecasting and optimisation AI platform that can achieve all three pillars is crucial to helping retail businesses operate nimbly and profitably.
However, due to the unique challenges of the retail sector, these AI systems have a whole set of requirements that are distinct from those of better-known AI systems such as Large Language Models (LLMs) or autonomous vehicles.
Therefore, it is important to invest in a solution that is built the right way. The value realized not only boosts the bottom line, but also provides the intelligence to navigate business decisions that drive consumer perception and increase market share.
About the author
Aakriti Bhargava is Senior Director and Head of AI, Data, and Engineering at Revionics, an Aptos company, specialising in AI powered lifecycle price optimisation software for US and global retailers.
Since earning her Masters in Information Systems Management (MISM) from Carnegie Mellon University, Bhargava has specialised in retail data science, engineering, and analytics, leveraging her extensive experience and subject matter expertise in e-commerce, consumer behaviour, demand modeling, and compute infrastructure to develop cutting-edge applications that help retailers solve their unique business challenges.
She now drives overall technical strategy, architecture, and innovation across the Revionics engineering organisation.