
25.03.2025

Insights from building an AI advisor for product discovery and comparison


How can customers choose the right product when they don’t fully understand the technical specifications? In many industries, traditional product comparison tools rely on complex filters and jargon-heavy data, creating a frustrating experience for users—especially those without deep technical expertise.

Key takeaways

  • Our AI PoC advisor delivered faster, more focused results than manual filtering when comparing multiple parameters.
  • Granular indexing improved response quality but didn’t require overcomplicated structure.
  • Clear separation of specs and descriptions was key to accurate, relevant answers.
  • Strict context control minimized hallucinations and kept responses grounded in real data.
  • Users expect AI to be both human-like and error-free—a contradiction worth rethinking.

When customers search for the right product in a highly technical industry, they often face a frustrating challenge – traditional product comparison tools require users to understand complex specifications. Many customers lack the technical knowledge to navigate such information, and even those who do often find these tools time-consuming and cumbersome. Traditional filters force users to manually sift through a vast number of parameters, making the selection process lengthy and exhausting. 

In my recent AI advisor Proof of Concept (PoC), I set out to solve this problem by building a product discovery and comparison tool that allows users to simply describe their needs in natural language and receive precise, well-structured answers. 

About the AI advisor PoC

The goal was to create an AI-powered assistant that not only understands product details but also provides accurate comparisons and tailored recommendations – without requiring in-depth technical knowledge. The insights gained from this project are valuable for anyone looking to build AI solutions that truly enhance customer interactions. 

The project involved extracting data from PDFs and transforming it into a format suitable for multi-level indexing in Azure AI Search, as well as leveraging OpenAI models via Azure AI Studio. I also experimented with LangChain to evaluate how different context management approaches affected the quality of AI-generated responses. A crucial role was played by Azure AI Search, which vectorized the data and supplied it to the model in an organized manner. Thanks to this, the Large Language Model (LLM) could operate with a well-prepared context, enabling it to generate relevant and accurate responses. To further guide the model toward the desired response style, I also applied the Few-Shot Prompting technique, but I will discuss this in a separate article. 
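The overall retrieval-augmented flow can be sketched as follows. This is a minimal, illustrative assembly step, not the PoC's actual code: the function names and document fields are assumptions, and the real project used Azure AI Search to retrieve and vectorize the documents before this stage.

```python
def build_prompt(question: str, retrieved_docs: list[dict]) -> str:
    """Combine retrieved product documents into a single grounded prompt.

    The retrieved_docs shape (model name + content text) is a hypothetical
    stand-in for results returned by Azure AI Search.
    """
    context = "\n\n".join(
        f"[{doc['model']}]\n{doc['content']}" for doc in retrieved_docs
    )
    return (
        "Answer using ONLY the context below. If the answer is not in the "
        "context, say you do not know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

docs = [
    {"model": "Lift X22", "content": "Boom height: 22 m. Engine: electric."},
    {"model": "Lift D18", "content": "Boom height: 18 m. Engine: diesel."},
]
prompt = build_prompt("Which model has the tallest boom?", docs)
```

The resulting prompt is what the LLM actually sees: a well-prepared context plus the user's question, which is what lets the model stay grounded in the documentation.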

Two approaches to documentation indexing 

To evaluate how data structure affects response quality, two different approaches to documentation indexing were tested during the project.

  1. Chunk-based indexing

In this approach, documentation was divided into fragments of 500 characters (chunks) and indexed collectively in Azure AI Search. Search queries were performed at the fragment level – if a fragment matched a user's query, it was returned as context for the large language model. The model received only that specific fragment. 
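The splitting step itself is simple fixed-size chunking. A minimal sketch, assuming plain character-based splits with no overlap (the PoC may have handled boundaries differently):

```python
def chunk_text(text: str, size: int = 500) -> list[str]:
    """Split documentation into fixed-size character chunks for indexing."""
    return [text[i:i + size] for i in range(0, len(text), size)]

# A 1200-character document yields two full chunks and one remainder chunk.
chunks = chunk_text("A" * 1200)
```

Each chunk is indexed independently, so a matching chunk is returned to the model on its own, without the rest of the document.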

  2. Granular data indexing

The second approach focused on indexing data in a highly detailed manner rather than treating documentation as a collection of text fragments. Azure AI Search stored individual technical parameters and key product details separately: 

  • Technical specifications (e.g., boom height: 22 m, engine type: electric) 
  • General descriptions (e.g., performance characteristics, primary applications) 
  • Additional features (e.g., drive type, machine category) 

Search queries were performed at the level of precisely indexed parameters, and the large language model received a complete document related to a specific machine rather than isolated text fragments. 
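A granular index entry might look like the following. This is a hypothetical shape, not the actual Azure AI Search index definition: the field names and the toy filter function are illustrative assumptions meant to show why field-level queries are more precise than chunk matching.

```python
# Hypothetical granular index entry: specifications, descriptions, and
# features live in separate fields instead of mixed free text.
machine_doc = {
    "id": "lift-x22",
    "specs": {"boom_height_m": 22, "engine_type": "electric"},
    "description": "Compact lift for indoor maintenance work.",
    "features": {"drive_type": "4WD", "category": "boom lift"},
}

def matches(doc: dict, **criteria) -> bool:
    """Filter on indexed parameters, mimicking a field-level search query."""
    return all(doc["specs"].get(key) == value for key, value in criteria.items())
```

Because parameters are addressable individually, a query like "electric machines with a 22 m boom" resolves to exact field comparisons, and the whole machine document can then be handed to the model as context.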

At a later stage, I also provided separate descriptions of the technologies used in the machines. This allowed the AI advisor to not only answer questions about the technologies applied in a specific model but also enable users to inquire about particular technologies and their applications. 

With a better data segmentation strategy in Azure AI Search, the LLM could operate with a more precisely tailored context, making it easier to compare machines and generate responses that were better aligned with user queries. 

Results 

Both approaches were analyzed in terms of response quality.

The second approach yielded better results – the model resembled an experienced sales consultant, providing more detailed and precise answers. It was able to compare machines more accurately, highlight key differences, and recognize machine categories more efficiently. Because Azure AI Search supplied structured data, the AI model could generate responses that were more relevant to user needs. 

However, testing revealed that such extreme data segmentation was unnecessary. It was sufficient to separate technical specifications from descriptive texts – technical details such as size, engine type, and other key parameters should be kept distinct from general and marketing descriptions.

By preparing the context appropriately through Azure AI Search, the AI model could achieve the same benefits without unnecessary complexity in the indexing structure. 

Keeping the AI model within a controlled context 

The LLM was restricted from going beyond the provided context, meaning it could not independently compare machines from competing brands or retrieve external data. Interestingly, I included official client documents in Azure AI Search, which already contained comparisons with competing models. As a result, the AI model had access only to these pre-existing comparisons, ensuring full control over the scope of responses. 

Furthermore, by providing separate descriptions of the client’s applied technologies, the AI advisor was not only able to indicate which technologies were used in a given model but also answer questions about those specific solutions. This approach underscored the importance of data filtering and proper categorization – limiting the model to a specific context minimized the risk of hallucinations and ensured that all responses were strictly based on available documentation. 
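In practice, this kind of context control starts with the system prompt. The wording below is an illustrative sketch of the idea, not the PoC's actual prompt, and the message format follows the standard chat-completion shape used by the OpenAI models in Azure:

```python
# Illustrative system prompt restricting the model to the supplied
# documentation; the exact wording is an assumption, not the PoC's prompt.
SYSTEM_PROMPT = (
    "You are a product advisor. Base every answer strictly on the provided "
    "documentation. Do not compare against competitor machines unless the "
    "comparison already appears in the documentation. If the documentation "
    "does not cover a question, say so instead of guessing."
)

def make_messages(context: str, question: str) -> list[dict]:
    """Build a chat-completion message list with the restricted system role."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {
            "role": "user",
            "content": f"Documentation:\n{context}\n\nQuestion: {question}",
        },
    ]

messages = make_messages("Boom height: 22 m. Engine: electric.", "How tall is the boom?")
```

Combined with supplying only curated documents at retrieval time, instructions like these keep the model's answers inside the approved scope.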

AI performance in search and comparison 

When searching for a single machine model, AI shows no significant speed advantage over a skilled person querying the database with filters. In many cases, a human who understands the database structure and search criteria can find the information faster and more precisely.

However, the situation changes dramatically when a user needs to find a machine based on three different parameters. In such cases, AI can deliver results up to two or three times faster than an experienced user manually filtering the database. At the same time, AI focuses on and presents only the parameters that matter most to the user, even if they appear at opposite ends of a comparison table. This also significantly improves the presentation of data. 

An interesting observation about AI expectations 

I noticed that we tend to hold AI to much higher standards than humans. We expect AI models to be accurate, consistent, and error-free, while at the same time, we demand that they communicate naturally, like humans. However, what we consider "human-like communication" often involves simplifications, contextual shortcuts, or even omissions – elements that, in AI-generated content, might appear as hallucinations. 

We want AI to behave like a human but at the same time avoid typical human mistakes. This contradiction makes it challenging to strike a balance between precision and conversational fluidity. 

This is a topic worth a separate post – it’s a crucial issue that affects not only AI expectations but also the practical possibilities of communication within language and verbal interactions. 


About the author

Piotr Ludwig
Frontend Developer, Siili Solutions
LinkedIn

Piotr Ludwig is a software developer and innovator with a deep passion for problem-solving and building cutting-edge technological solutions. He doesn’t confine himself to specific tools or frameworks—instead, he takes a broad, strategic view, always looking for the most effective way to approach any challenge. His wide range of interests fuels a creative, outside-the-box mindset, enabling him to craft truly innovative solutions. Currently, he focuses on AI engineering, particularly large language models, prompt engineering, and the development of intelligent tools.
