GenAI-powered Recommender Systems for Revenue Growth

What happens when customers have too many choices?

As people ponder what to buy, many brands offer recommendations. The recommender systems that enable this leverage data on what users have already liked or bought and are particularly useful when brands offer such a huge array of products that the choices can overwhelm the customer, whether that’s music, movies, accessories, or other products.

Recommender systems are integral to modern digital ecosystems. Well-executed recommendations can increase the customer’s affinity for a brand and can drive revenue for brands by increasing cross-sell and upsell. But when recommender systems don’t deliver truly personalized experiences, customers become disillusioned with the underlying brand, with disappointing results for the bottom line.

Traditional Recommender System Techniques

Core recommendation engine functions like handling user interactions and maintaining scalability still rely on traditional techniques such as collaborative filtering, content-based filtering, and matrix factorization.

GenAI can’t replace recommenders.

But it can help brands fill data gaps and understand nuanced user behavior by generating synthetic data, creating personalized content, and providing context-aware insights that enhance the results that recommenders provide.

Overcoming Data Scarcity and Cold Start Problems

These traditional methods rely heavily on historical user-item interaction data. But when that data is limited or missing, they can struggle, especially with the “cold start” problem: making recommendations to new users or introducing new items to existing users.

The Cold Start Problem

To address this, brands can use GenAI to create synthetic data that mimics real user interactions, then use it to enrich existing datasets, leading to more robust and nuanced recommendations. It most often does this with two generative techniques: variational autoencoders (VAEs) and generative adversarial networks (GANs).

Hybrid models can leverage both traditional methods and GenAI. For instance, they might use collaborative filtering to identify similar users and recommend items based on their preferences while simultaneously using a VAE to generate synthetic interactions for users with sparse data, enhancing the performance of the collaborative filtering model.
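As a minimal sketch of this hybrid pattern (assuming PyTorch and NumPy; the dimensions, thresholds, and toy data are illustrative, not a production recipe), one might train a small VAE on user-item interaction vectors, sample synthetic interactions to densify sparse data, and run item-based collaborative filtering on the augmented matrix:

```python
import numpy as np
import torch
import torch.nn as nn

N_ITEMS = 50

class VAE(nn.Module):
    def __init__(self, n_items=N_ITEMS, latent=8):
        super().__init__()
        self.enc = nn.Linear(n_items, 32)
        self.mu = nn.Linear(32, latent)
        self.logvar = nn.Linear(32, latent)
        self.dec = nn.Sequential(nn.Linear(latent, 32), nn.ReLU(),
                                 nn.Linear(32, n_items))

    def forward(self, x):
        h = torch.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.dec(z), mu, logvar

def train_vae(x, epochs=200):
    model = VAE()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        recon, mu, logvar = model(x)
        bce = nn.functional.binary_cross_entropy_with_logits(recon, x)
        kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        loss = bce + 0.1 * kld
        opt.zero_grad(); loss.backward(); opt.step()
    return model

# Toy implicit-feedback matrix: rows = users, columns = items (1 = interacted).
rng = np.random.default_rng(0)
real = (rng.random((100, N_ITEMS)) < 0.1).astype(np.float32)

model = train_vae(torch.tensor(real))
with torch.no_grad():
    # Sample synthetic interaction vectors from the prior to densify sparse data.
    synthetic = (torch.sigmoid(model.dec(torch.randn(20, 8))) > 0.5).float().numpy()

augmented = np.vstack([real, synthetic])
# Item-based collaborative filtering: cosine similarity between item columns.
unit = augmented / (np.linalg.norm(augmented, axis=0, keepdims=True) + 1e-9)
item_sim = unit.T @ unit
scores = augmented[0] @ item_sim        # score every item for user 0
scores[real[0] > 0] = -np.inf           # drop items the user already has
print("Top-5 items for user 0:", np.argsort(-scores)[:5])
```

In practice, the synthetic rows would be generated selectively for users with sparse histories and validated against held-out data before being trusted.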

Unified Propensity Models

Recommender systems traditionally use propensity models to drive upsell and cross-sell, predict churn, and perform other functions. These models are accurate but can be complex and resource intensive. GenAI offers an alternative, a “decent proxy” for these models that can analyze trends and user behavior. Although not necessarily as precise, GenAI can achieve good results with significantly less effort, so long as we remember that GenAI models themselves can inherit biases from the data used to train them.

While GenAI can be more efficient, individual propensity models might still provide slightly more accurate results in some scenarios. The key lies in finding the right balance: when a brand has a high-value customer making a key business purchase and wants to target them with a premium product offering, a dedicated upsell model may be the better choice. For broader recommendations where speed and efficiency are paramount, GenAI’s unified approach can be highly beneficial.
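As a minimal sketch of that balance (all names, interfaces, and thresholds here are hypothetical), the routing logic can be as simple as sending high-value, premium decisions to the dedicated model and everything else to the cheaper GenAI proxy:

```python
# Hypothetical router between a dedicated propensity model and a GenAI proxy.
def score_upsell(customer, offer, propensity_model, genai_proxy,
                 value_threshold=10_000):
    if customer["lifetime_value"] >= value_threshold and offer["premium"]:
        # High-value customer, premium offer: use the dedicated upsell model.
        return propensity_model.predict(customer, offer)
    # Broad, low-stakes recommendation: a faster GenAI-based estimate is enough.
    return genai_proxy.estimate(customer, offer)
```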

Do you need a recommender system at all?

The null model—simply recommending popular items—can work surprisingly well. It’s easy to implement, requires minimal data, and guarantees users will see well-regarded products. It can be particularly useful for new businesses or those with limited data on user preferences. Focusing on popular items can also create a sense of social proof, encouraging users to choose products others have enjoyed.
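A null model of this kind takes only a few lines of code. The sketch below (with toy data) simply counts interactions and recommends the most popular items:

```python
from collections import Counter

def popularity_recommendations(interactions, k=5):
    # interactions: list of (user_id, item_id) purchase or like events
    counts = Counter(item for _, item in interactions)
    return [item for item, _ in counts.most_common(k)]

events = [("u1", "songA"), ("u2", "songA"), ("u2", "songB"),
          ("u3", "songC"), ("u3", "songA")]
print(popularity_recommendations(events, k=2))  # ['songA', 'songB']
```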

But again, GenAI can play a role. It can be crucial in determining when to choose a null model over a recommender system (assuming the brand already has one). It can also analyze user behavior, interaction patterns, data availability, and other factors to predict the most effective approach for each scenario. We’re not at the point where GenAI can (or should) dictate when to switch between null models and complex recommender systems, but soon it will be able to leverage intelligent routing or agentic frameworks to make this recommendation.

Ever onward

As GenAI matures, recommender systems will transform into even more intricate tools, weaving themselves seamlessly into the fabric of digital experiences. Recommendations will not only anticipate our desires but surprise us with hidden gems. GenAI can unlock this potential, but further exploration and development are needed to bring this vision to life.


Every day, more enterprises leverage GenAI to enable self-service, streamline call-center operations, and otherwise improve the experiences they offer customers and employees. But questions arise about how to optimize the relationships between retrieval-augmented generation (RAG), knowledge graphs (KGs), and large language models (LLMs). Combining these three components in the smartest ways enables enterprise tools to generate more accurate and useful responses that can improve the experience for all users.

Augmenting RAG with KGs leverages the structured nature of KGs to enhance traditional vector search retrieval methods, improving the depth and contextuality of the retrieved information. KGs organize data as nodes and relationships representing entities and their connections, which can be queried to infer factual information. Combining KGs with RAG enables LLMs to generate more accurate and contextually enriched responses.
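As a minimal sketch of the pattern (not a production implementation), the snippet below retrieves facts from a toy in-memory graph and injects them into an LLM prompt; the llm callable is a placeholder for whatever model API you use:

```python
# Toy knowledge graph: (subject, relation) -> object triples.
KG = {
    ("Eiffel Tower", "located_in"): "Paris",
    ("Paris", "capital_of"): "France",
}

def retrieve_facts(entity):
    """Collect every triple in which the entity appears as subject or object."""
    facts = []
    for (subj, rel), obj in KG.items():
        if entity in (subj, obj):
            facts.append(f"{subj} {rel.replace('_', ' ')} {obj}.")
    return facts

def kg_rag_answer(question, entity, llm):
    context = " ".join(retrieve_facts(entity))
    prompt = f"Context: {context}\nQuestion: {question}\nAnswer using only the context."
    return llm(prompt)  # placeholder for the actual LLM call

# Demo with a stand-in "LLM" that just echoes its prompt.
print(kg_rag_answer("Where is the Eiffel Tower?", "Eiffel Tower", llm=lambda p: p))
```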

Implementing a KG-RAG approach involves several steps: knowledge graph curation, RAG integration, and system optimization.

[Flow diagram: Implementing a KG-RAG approach]

When should you choose KG over RAG?

When your queries are typically complex. KGs support more diverse and complex queries than vector databases, handling logical operators and intricate relationships. This capability helps LLMs generate more diverse and interesting text. Examples of complex queries where KG outperforms RAG include multi-hop questions and queries that combine multiple constraints or logical conditions.

When you need to enhance reasoning and inference. KGs enable reasoning and inference, providing indirect information through entity relationships. This enhances the logical and consistent nature of the text generated by the LLM.

When you need to reduce hallucinations. KGs provide more precise and specific information than vector databases, which reduces hallucinations in LLMs. Vector databases indicate the similarity or relatedness between two entities or concepts, whereas KGs enable a better understanding of the relationship between them. For instance, a KG can tell you that “Eiffel Tower” is a landmark in “Paris”, while a vector database can only indicate how similar the two concepts are.
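A toy contrast makes the difference concrete (the vectors and the triple below are illustrative, not real embeddings): a vector store can only score how related two concepts are, while a KG states what the relationship is:

```python
import numpy as np

# A vector database can only say HOW related two concepts are...
eiffel, paris = np.array([0.9, 0.2, 0.4]), np.array([0.8, 0.3, 0.5])
cosine = eiffel @ paris / (np.linalg.norm(eiffel) * np.linalg.norm(paris))
print(f"similarity(Eiffel Tower, Paris) = {cosine:.2f}")  # a score, nothing more

# ...while a knowledge graph states WHAT the relationship is.
triples = {("Eiffel Tower", "Paris"): "is a landmark in"}
print("Eiffel Tower", triples[("Eiffel Tower", "Paris")], "Paris")
```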

The KG approach effectively retrieves and synthesizes information dispersed across multiple PDFs. For example, it can handle cases where an objective answer (e.g., yes/no) is in one document and the reasoning is in another. This reduces the wrong answers (false positives) that are common with vector databases.

Vector databases return matches based purely on text similarity, which increases false negatives and hallucinations. Knowledge graphs capture contextual and causal relationships, so they avoid producing answers that merely appear similar on the surface.

Metric               RAG    KG
Correct Response     153    167
Incorrect Response    46     31
Partially Correct      1      2

*Comparison of RAG and KG on a sample dataset of 200 questions

When you need scalability and flexibility. KGs can scale more easily as new data is added while maintaining the structure and integrity of the knowledge base. RAG pipelines often require significant re-embedding and re-indexing of documents.

The best way to leverage the combined power of RAG, KGs, and LLMs always varies with each enterprise and its specific needs. Finding and maintaining the ideal balance enables the enterprise to provide more useful responses that improve both the customer experience and the employee experience.

Business Benefits

Combining these three powerful technologies can improve the customer experience immensely, even if just by providing more accurate, precise, and relevant answers to questions.

More relevant responses

A KG could model relationships between product features (nodes) and customer reviews (nodes). When a customer asks about a specific product, the system can retrieve relevant data points like reviews, specifications, or compatibility (edges) based on the structured graph. This reduces misinformation and enhances the relevance of responses, improving the customer experience.

KGs are also better at resolving multi-hop queries—those that require information from multiple sources. For example, a question like “Which laptops with Intel i7 processors are recommended for gaming under $1,500?” would require a system to traverse various nodes—products, price ranges, and processor specifications—and combine the results to deliver a complete and accurate response.
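As a minimal sketch (a real KG engine would use a query language such as Cypher or SPARQL; this toy graph is illustrative), the query resolves by combining three hops over nodes and edges:

```python
# Toy product graph as (subject, relation, object) edges.
edges = [
    ("Laptop A", "has_cpu", "Intel i7"), ("Laptop A", "priced_at", 1399),
    ("Laptop A", "suited_for", "gaming"),
    ("Laptop B", "has_cpu", "Intel i5"), ("Laptop B", "priced_at", 999),
    ("Laptop B", "suited_for", "office"),
    ("Laptop C", "has_cpu", "Intel i7"), ("Laptop C", "priced_at", 1799),
    ("Laptop C", "suited_for", "gaming"),
]

def neighbors(node, relation):
    return {o for s, r, o in edges if s == node and r == relation}

laptops = {s for s, _, _ in edges}
answer = [l for l in sorted(laptops)
          if "Intel i7" in neighbors(l, "has_cpu")        # hop 1: processor
          and "gaming" in neighbors(l, "suited_for")      # hop 2: use case
          and min(neighbors(l, "priced_at")) <= 1500]     # hop 3: price range
print(answer)  # ['Laptop A']
```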

Better personalization

KGs enable businesses to create a detailed, interconnected view of customer preferences, behavior, and history. For instance, nodes representing a customer’s purchase history, browsing behavior, and preferences can be linked to product recommendations and marketing content through specific edges like purchased, viewed, or interested in.

This enables the seamless multichannel experiences that are the goal of many marketing initiatives. With KGs, businesses can maintain a consistent understanding of customers across multiple platforms. For example, a support query initiated on social media can be connected to the customer’s order history and past issues using the graph’s interconnected relationships. This provides a unified experience across channels, such as online chat and in-store support, ensuring consistency and enhancing customer loyalty.
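A toy graph shows the idea (the entity names are illustrative): starting from a social-media support ticket, the system walks the edges back to the orders and products it concerns:

```python
# Toy customer graph: node -> {relation: [linked nodes]}.
graph = {
    "customer:42": {
        "purchased": ["order:1001"],
        "viewed": ["product:headphones"],
        "raised": ["ticket:social-77"],
    },
    "ticket:social-77": {"about": ["order:1001"]},
    "order:1001": {"contains": ["product:speaker"]},
}

def context_for(ticket):
    """Walk from a support ticket back to the orders and products it touches."""
    orders = graph.get(ticket, {}).get("about", [])
    products = [p for o in orders for p in graph.get(o, {}).get("contains", [])]
    return {"orders": orders, "products": products}

print(context_for("ticket:social-77"))
# {'orders': ['order:1001'], 'products': ['product:speaker']}
```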

Faster, more consistent responses

KGs can link various pieces of data (nodes) such as FAQs, product documents, and troubleshooting guides. When a customer asks a question, the system can traverse these linked nodes to deliver a response faster than traditional methods, as the relationships between nodes are pre-established and easy to query. For instance, an issue with a smartphone’s battery can be connected to battery FAQs, known issues, and repair services, providing faster resolution times.

KG benefits also extend to the employee experience

Smarter knowledge management

KGs help employees, particularly those in customer support and sales, access the right information quickly. For example, nodes representing common issues, product details, and past customer queries can be linked together. When a customer asks a question, employees can easily navigate through these interconnected nodes, finding precise answers without searching multiple databases.

Shorter Training Times

Because KGs organize information in an intuitive, node-based structure, employees can quickly learn to navigate relationships such as product features, customer preferences, and support histories. For instance, a new hire in tech support can easily find how a specific device model is connected to its known issues and solutions, reducing training complexity and time.

Smarter decisions and better collaboration

KGs can centralize and structure knowledge across departments. For instance, marketing can link nodes such as campaign effectiveness and customer feedback, while the product team may link user feedback with feature improvements. These interconnected graphs allow employees to share insights easily and align strategies across departments, breaking down silos.

Reduced Workload on Repetitive Tasks

KGs can automate repetitive queries by linking commonly asked questions to their answers. For example, support tickets (nodes) could be linked to solutions (nodes), and if a similar query arises, the system can automatically retrieve the relevant information without human intervention.

Use cases for RAG, KGs, and LLMs will continue to expand as more enterprises leverage GenAI. Each is powerful in its own right, but combining them creates a suite of tools that is far greater than the sum of its parts.


The telecom industry is in the midst of a technology-driven transformation. With rising numbers of devices and increasing broadband and mobile connection speeds, telecom’s technology challenges include implementing IoT, embracing AI, taking advantage of 5G, and helping customers gain a competitive advantage. Telcos are focused on addressing these challenges.

We start with a roadmap built on strategies that measure market effectiveness for better conversion rates. Our research into products, segments, and behaviors helps us understand customer profitability and increase margins, followed by an action plan to understand churn and predict CLV.

In this playbook, discover how we’ve helped our clients increase ARPU, drive operational efficiencies, and improve NPS scores, conversion rates, revenues, and more.

On a global scale, the telecom industry provides crucial services that billions of consumers and businesses rely on every day. Like other industries, telecom has undergone a technology-driven transformation. We’ve helped global telecoms increase profits and accelerate business growth.

As your partner for success, we bring you state-of-the-art data science and analytics methodologies and exclusive partnerships.

This playbook, curated especially for the telecom industry, demonstrates how we’ve helped our clients increase conversion rates and revenues, reduce churn rates, increase response rates, and more.

Our client’s team started work on a new service application, which required a CI/CD process. The existing process contained repeatable manual steps for each new service and during maintenance, and took up to three days to complete. Infogain’s team saw an opportunity for improvement.

Better code quality and cost optimization are top priorities for software developers. Finding the right tools to streamline projects can be a game-changer when using C++. Conan, a package manager, enhances the development, build, and deployment process while helping teams achieve optimal results and save time.

Let’s discuss how this tool can be an asset for software developers.

The package manager as an important tool

In recent years, C++ has experienced significant growth and dynamic development. Beyond periodic language standard updates and improvements to the Standard Template Library (STL), the C++ ecosystem has seen the emergence of powerful development tools dedicated to enhancing code quality. These tools include static code analyzers, code sanitizers, and performance analyzers.

One such important tool is the package manager. While package managers have long been a staple in other programming languages, they have been underutilized in C++. Notable options, such as NuGet, Vcpkg, and Conan, offer solutions for managing dependencies and simplifying library preparation during deployment.

Conan has gained recognition as a standard choice among developers for its efficiency and versatility. Its availability on multiple platforms, compatibility with any build system, and large catalog of external libraries make it a compelling option. To learn more, visit https://conan.io.

Benefits of using Conan in the CI/CD process

When establishing the CI/CD process for a new service, there are inevitably inefficiencies and areas for improvement. Creating a new Docker image for each new service, with developers manually installing service-specific dependencies, was the norm. These images also required ongoing maintenance as new dependencies emerged.


Instead of maintaining individual Docker images for each service, a unified Docker image for C++ projects, complete with the Conan application, is more advantageous. This image, when employed during the Jenkins job execution, would fetch the project’s specific dependencies from a conanfile.py located within the project repository.

This solution fosters standardization across different projects’ CI processes, enhancing consistency and efficiency.
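As an illustrative example (the library names and versions are placeholders, not the client’s actual dependencies), the conanfile.py each project carries might look like this:

```python
from conan import ConanFile
from conan.tools.cmake import cmake_layout

class ServiceRecipe(ConanFile):
    name = "my-service"
    version = "1.0"
    settings = "os", "compiler", "build_type", "arch"
    generators = "CMakeToolchain", "CMakeDeps"

    def requirements(self):
        self.requires("boost/1.83.0")      # example dependency
        self.requires("openssl/3.2.0")     # example dependency

    def layout(self):
        cmake_layout(self)
```

During the Jenkins job, running conan install . --build=missing inside the shared image then fetches or builds exactly these dependencies.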

Developers using Conan in the CI/CD process can count on a range of benefits, from standardized, repeatable builds to automatic dependency resolution driven by the project’s conanfile.py.


How Conan can improve your development environment preparation

The same base Docker image with Conan used in the CI/CD process can also be used by a Python script that automatically prepares a local development environment on a developer’s PC. The script installs and starts the Docker service on the local machine, downloads the base Docker image, installs an SSH key for access to the git repository, installs some tools that help with development, and runs a local development environment container.

The developer then only needs to download a C++ project from the git repository and run the command conan build. Thanks to the specification file in the repository, Conan automatically provides all required dependencies for the environment and triggers compilation of the project.

Now we have one common development environment in which we can work on any C++ project that uses Conan, instead of preparing a new environment for each project.
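As a hedged sketch of what such a setup script might look like (the image name, key path, and exact commands are illustrative assumptions, not the actual implementation):

```python
import subprocess

IMAGE = "registry.example.com/cpp-conan-base:latest"  # hypothetical image name

def run(cmd):
    print("+", cmd)
    subprocess.run(cmd, shell=True, check=True)

def prepare_dev_environment():
    run("sudo systemctl enable --now docker")   # start the Docker service
    run(f"docker pull {IMAGE}")                 # fetch the shared base image
    run("ssh-add ~/.ssh/id_ed25519")            # register key for git access
    # Start a long-lived container mounting the current project directory.
    run(f"docker run -d --name devenv -v $PWD:/work {IMAGE} sleep infinity")

if __name__ == "__main__":
    prepare_dev_environment()
```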


Thanks to this approach, we gain the following:

Cost savings

Developers can work without a virtual machine from a cloud such as Google Cloud Platform (GCP). While this may not be feasible for big projects due to computing limitations, it is invaluable for small and mid-sized projects. Implementing this solution can translate into cost savings for the company.

Effortless environment setup

Just run the Python script.

Saving time

The Python script takes care of the main tasks of setting up the work environment, allowing developers to focus on coding and development rather than dealing with environment setup intricacies.

Environment consistency and ease of use

The development environment can be employed across various C++ projects using the same build system and compiler. New images with a newer version of the compiler can be provided as required. This consistency minimizes the risk of compatibility issues.

Local success translates to pipeline success

When developers build a project locally, they operate in the same environment used during the CI/CD process. This alignment ensures that if a build succeeds locally, it is highly likely to succeed in the CI pipeline, providing an extra layer of confidence in the code.

Collaborative environment

All team members work in an identical environment, eliminating discrepancies and promoting better collaboration. This reduces troubleshooting, enabling the team to focus on the task.

Future opportunities

The versatility of this environment opens the door to many improvements. Adding static code analyzers like Clang-Tidy, preparing the environment for fuzz testing, enabling code debugging through IDEs, or staying current with the latest compiler versions and language standards are just some of the possible enhancements, which, in turn, can lead to higher software quality.

Integrating a package manager into your software development process offers an array of advantages, from time and cost savings to enhanced consistency and collaboration. It simplifies the development environment setup and paves the way for continuous improvements, ensuring the development process remains agile, efficient, and resilient.

Databricks continues to innovate ahead of the market, and customers are buying. At the recently concluded Databricks Data & AI Summit in San Francisco, we met industry stalwarts, decision-makers, and influencers across industries. The energy and excitement were palpable among customers and partners alike. We attended electrifying sessions, interacted with industry leaders and niche players, and explored how Databricks keeps innovating ahead of the market and why customers keep buying.

Here are the key highlights from the event.

  1. Enterprise AI
  2. Becoming an End-to-End Platform
  3. “AI-first” visualization
    AI/BI, a new ‘AI-first’ visualization tool that will likely compete with Power BI, Tableau, and similar products, was announced at the event.
  4. Open source
    There is an emphasis on open formats and standards over closed ecosystems. Unity Catalog was open-sourced live at the keynote, which will ensure greater connectivity with Delta Sharing.
  5. Evolving Partner Landscape
    Databricks’ partner community has doubled in the last year. The partner ecosystem continues to thrive, with remarkable growth in partner-certified consultants as large IT companies and smaller niche players alike join the fold.

In conclusion

With 60% YoY sales growth, Databricks is projected to hit $2.4B in revenue, making it one of the largest private unicorns in the world. This clearly shows that modern clients are laser-focused on making the most of their data through intelligent solutions driven by advanced AI capabilities.

At Infogain, we have been helping our clients transform their businesses with our industry-recognized Data and AI practices. Connect with us today and let’s discuss how we can help you leapfrog to the bold new world of data and AI.

With the recent advancements in Artificial Intelligence (AI), and especially generative AI, machines are being trained to think, learn, and even communicate, just like humans. Enterprises are now looking to leverage this to create generative AI use cases that let them streamline, speed up, and transform their businesses.

This is where the importance of prompt engineering comes in, evolving and scaling up as it sits in the crucial space between what the human user wants and what the machine understands.

So, what exactly is prompt engineering?

Prompt engineering is all about having a conversation with a machine based on a ‘prompt’, to which it responds with relevant information and requested actions. The essence is crafting the right questions to guide Large Language Models (LLMs) toward the desired outcomes. Prompt engineering is becoming crucial because it harnesses the power of computing to produce detailed answers almost instantly.

Prompt Engineering: Is it an art or science? Or both?

Let’s understand prompt engineering a little better.

It is the practice of designing and refining prompts, questions or instructions, to elicit specific responses from AI models. Put simply, it is the interface between human intent and machine output.

We already leverage a lot of prompt engineering in our day-to-day lives.

Useful Approaches for better output

While there are multiple ways to articulate a prompt, I came across a simple yet interesting model for fine-tuning prompts from a user perspective.

It is based on the “5W+1H” approach: crafting the prompt with all the Ws and the H in mind in order to give the LLM the right context to work toward a meaningful outcome.

Image 1: The 5W+1H Approach
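As a minimal sketch of how the approach can be operationalized (the fields and example values are illustrative), the checklist maps naturally onto a prompt template:

```python
# Illustrative 5W+1H fields for an airline customer-service prompt.
FIVE_W_ONE_H = {
    "who":   "a frequent business traveler",
    "what":  "summarize the airline's rebooking policy",
    "when":  "for flights cancelled within 24 hours of departure",
    "where": "on international routes",
    "why":   "so the customer can decide whether to rebook or refund",
    "how":   "in three short bullet points, plain language",
}

def build_prompt(fields):
    lines = [f"{k.capitalize()}: {v}" for k, v in fields.items()]
    return "Use the following context to answer:\n" + "\n".join(lines)

print(build_prompt(FIVE_W_ONE_H))
```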

What Skills are needed for Prompt Engineering?

Prompt engineering is an amalgamation of both technical and non-technical skills.

Image 2: Skills of a Prompt Engineer

Relevance in today’s tech-driven world

As AI expands its footprint across all sectors, one area where prompts have already made an immediate impact is customer service bots.

Some popular airlines have already deployed chatbots for handling customer queries. Having recently had my own queries answered through chatbot interactions, I’ve noticed their accuracy improve as the models ‘learn’ from more inputs and training over time. Notably, these bots are already helping optimize support costs by around 30%. The key, however, is a well-crafted prompt that helps the bot respond with the right, relevant information, reducing cycle time in the process.

Prompt engineering is indeed crucial in an AI-powered world. Its importance will only continue to amplify.

Towards a bold new future

As AI models become more complex and are leveraged by the applications around us, the need for effective communication with them grows ever more important. With the advent of tools that simplify the technology, the role of prompt engineers will gain importance in building, refining, and defining the right interfaces to augment human capabilities and democratize the use of AI.

As Stephen Hawking said, “Success in creating AI would be the biggest event in human history.” As AI, and generative AI in particular, becomes more advanced, prompt engineering will keep playing a critically important role: augmenting human productivity and creativity by establishing an ever more seamless connection between human and machine.

At Infogain, we look forward to seeing how this bold new world shapes up, and to helping you get there.