
Recently, I built a web application called Deal Finder Pro. When I started building, the goal was simple: let someone type in a product and quickly see the best prices both online and at nearby stores.
In practice, it turned into a much more interesting AI engineering problem.
It is one thing to generate a shopping price summary. It is another thing entirely to build something that produces quality results, can pull in nearby store information, stays reasonably fast, and does not quietly burn through API credits every time someone runs a search.
That balance became the real project.
The most important early decision was choosing the right model. For this app, Gemini 2.5 ended up being a very good fit because it could use Google Search grounding and Google Maps grounding in the same workflow. For context, grounding means the AI is not answering from memory alone. It is allowed to look at live information from Google Search while generating its response, so it can base the answer on current web results instead of just its training data. That mattered a lot. I did not just need a model that could “sound smart.” I needed one that could reason across national retailer results while also understanding location context for nearby stores. That combination made it much more useful for a deal-finding experience than a model that only excelled at text generation. Surprisingly, the Gemini 3 model cannot do this yet.
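Conceptually, a single grounded request carries both tools at once. The sketch below builds a plain request dict whose shape mirrors the general idea of the Gemini API's tool configuration; the model id and field names are illustrative, not an exact SDK call:

```python
# Illustrative sketch: one request that enables both Search and Maps
# grounding. This is a plain dict, not a real SDK invocation.

def build_grounded_request(product: str, lat: float, lng: float) -> dict:
    """Assemble a single grounded price-search request (hypothetical shape)."""
    return {
        "model": "gemini-2.5-flash",  # assumed model id for illustration
        "contents": (
            f"Find current prices for '{product}' from national retailers "
            f"and stores near ({lat}, {lng}). Summarize the best deals."
        ),
        "tools": [
            {"google_search": {}},  # ground on live web results
            {"google_maps": {}},    # ground on nearby-store context
        ],
    }

req = build_grounded_request("cordless drill", 40.71, -74.00)
```

The key point is that both grounding tools ride along on the same call, so one response can mix national web prices with local store context.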
The basic workflow became a layered system rather than a single AI call. Gemini handles the synthesis: organizing results into the top deals, nearby stores, national retailers, and a quick takeaway. But I found pretty quickly that local pricing is the hardest part of the whole problem. Large online retailers are relatively easy to compare. However, many local stores do not expose prices cleanly, some rely heavily on JavaScript, some hide data behind store-specific context, and others simply block direct scraping.
That is where Bright Data became useful. Bright Data is a platform that lets businesses collect public web data at scale, including from sites that resist ordinary fetching.
I did not want to use Bright Data as the default path for every search because the cost can add up quickly. Instead, I used it more selectively. The first step is always the cheaper path: direct fetching, public product pages, retailer APIs where available, and structured data already exposed by the site. Only when a store blocks direct access or hides pricing behind a more protected layer does Bright Data become the fallback. That let me improve coverage without turning every user search into an expensive scraping event.
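The cheap-first ordering can be sketched roughly like this. The fetcher names are hypothetical stand-ins for the app's real code; each one returns a price on success or `None` when the source is blocked or unavailable:

```python
from typing import Callable, Optional

# Hypothetical sketch of the tiered fetch order: cheapest paths first,
# Bright Data only as the fallback when everything else fails.

Fetcher = Callable[[str], Optional[str]]

def fetch_price(url: str,
                direct: Fetcher,        # 1. direct fetch of the public page
                retailer_api: Fetcher,  # 2. retailer API, where one exists
                bright_data: Fetcher) -> Optional[str]:  # 3. last resort
    for fetcher in (direct, retailer_api, bright_data):
        price = fetcher(url)
        if price is not None:
            return price  # stop at the first path that works
    return None
```

Because the loop stops at the first success, a store whose public page answers cleanly never triggers the paid path at all.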
That tradeoff became one of the biggest architectural lessons in the project: do not pay premium costs for work you can do with a lighter, cheaper path first.
Over time, I ended up with a hybrid approach. Gemini handles the reasoning and summary. Store-specific logic handles price verification where possible. Bright Data helps when a retailer is difficult to access. Cached results help avoid repeating expensive work. That combination turned out to be much more practical than trying to solve everything with a single model prompt.
Another big lesson was that speed matters almost as much as accuracy. Even if the results are good, people will abandon a search if it feels stuck. Some of these searches can take 20 seconds or more because they involve grounded search, maps, local store checks, and formatting. So part of the product work became managing expectations and optimizing the flow.
That meant doing things like:
- short-lived caching for repeated searches
- stopping local store lookups once enough strong results are found
- using cached shared result pages instead of rerunning the whole search when someone shares a result
- showing users a clear message that the first search may take a little longer
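Two of those optimizations, short-lived caching and stopping local lookups early, can be sketched as follows. The TTL, threshold, and function names here are illustrative, not the app's actual values:

```python
import time
from typing import Callable, Optional

_cache: dict[str, tuple[float, object]] = {}
CACHE_TTL_SECONDS = 300  # illustrative: repeat searches stay fresh for 5 minutes

def cached_search(query: str, run_search: Callable[[str], object]) -> object:
    """Return a recent cached result for `query`, or run the search and cache it."""
    now = time.time()
    hit = _cache.get(query)
    if hit and now - hit[0] < CACHE_TTL_SECONDS:
        return hit[1]  # cache hit: skip the expensive grounded search
    result = run_search(query)
    _cache[query] = (now, result)
    return result

def check_local_stores(stores: list,
                       check: Callable[[object], Optional[str]],
                       enough: int = 5) -> list:
    """Stop querying local stores once enough strong results are found."""
    results = []
    for store in stores:
        price = check(store)
        if price is not None:
            results.append((store, price))
        if len(results) >= enough:
            break  # skip the remaining, slower lookups
    return results
```

Neither piece is clever on its own, but together they keep the slow path (grounded search plus per-store checks) off the critical path for repeat and shared searches.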
Those are not flashy AI features, but they matter. A useful AI product is not just about model quality. It is about the total experience.
The cost side took just as much thought. I wanted people to be able to try the app without friction, but I also needed guardrails so I was not subsidizing unlimited searches. The solution I landed on was a tiered access model. Guests get a limited number of free searches. Logged-in users get a monthly free allotment. Paid credits cover heavier use. That creates room for curiosity and sharing, while still protecting the economics of the app.
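The tier logic itself is simple to express. A rough sketch, where the limits are illustrative placeholders rather than the app's real numbers:

```python
from dataclasses import dataclass

GUEST_FREE_SEARCHES = 3       # illustrative limits, not the real numbers
MONTHLY_FREE_ALLOTMENT = 20

@dataclass
class User:
    logged_in: bool = False
    guest_searches: int = 0
    searches_this_month: int = 0
    paid_credits: int = 0

def can_search(user: User) -> bool:
    """Guests get a small free quota; members get a monthly allotment, then credits."""
    if not user.logged_in:
        return user.guest_searches < GUEST_FREE_SEARCHES
    if user.searches_this_month < MONTHLY_FREE_ALLOTMENT:
        return True
    return user.paid_credits > 0

def record_search(user: User) -> None:
    """Charge the search to the cheapest available bucket."""
    if not user.logged_in:
        user.guest_searches += 1
    elif user.searches_this_month < MONTHLY_FREE_ALLOTMENT:
        user.searches_this_month += 1
    else:
        user.paid_credits -= 1
```

The point of structuring it this way is that every search is attributed to a bucket, so the free tiers stay generous without becoming unbounded.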
That balance feels especially important in AI products. If you lock everything down too early, nobody tries it. If you make everything free and unlimited, you can create a cost problem before you have a business. I wanted something in the middle: generous enough to encourage discovery, but structured enough that usage scales responsibly.
If I had to summarize the biggest takeaway from building Deal Finder, it would be this: AI apps get much better when you stop asking the model to do everything.
Use the model for what it is best at. Use targeted integrations for what they are best at. Use caching to control cost. Use product design to reduce abandonment. And build the system in layers so you can improve accuracy without breaking speed or blowing through your budget.
For me, that has been the most interesting part of the project: not just building an AI feature, but building an AI product that has to work in the real world.