
Using AI for Purchasing Decisions

You Can't Make the Right Purchasing Decision with LLMs If You're Searching Wrong

Yes! These AI engines are structurally designed to agree with you, and they keep the truth subtle. Sometimes too subtle to notice. We've all been there, but I doubt you've actually thought about it. I hadn't until I read about the Google Gemini leak and a Stanford study.

Let me explain.

Let's say, instead of searching "best laptop for developers under $1500", you search "why is MacBook better than ThinkPad?" because a friend swears by the MacBook.

The AI will give you a detailed breakdown of why the MacBook wins: better display, ecosystem integration, build quality, resale value. And now that your notion is confirmed, you buy a MacBook without ever seriously considering whether a ThinkPad's keyboard, repairability, or Linux support would've served you better.

You know why? Because you told the AI what to think before it answered.

Try it yourself. Ask ChatGPT or Gemini "why is [Product A] better than [Product B]?" and then flip the question. You'll get convincing arguments both ways. The AI didn't research and decide. It just agreed with whichever framing you handed it.
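
If you'd rather script the experiment than paste prompts by hand, here's a minimal sketch using OpenAI's Python client. The model name is a placeholder and the two prompts are just the MacBook example from above; swap in whatever you're actually comparing.

```python
# A minimal sketch of the "flip the framing" experiment.
# Assumes the openai package (v1+) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model works
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Same comparison, opposite framings. A sycophantic model will argue
# convincingly for whichever side the question presupposes.
print(ask("Why is a MacBook better than a ThinkPad for developers?"))
print()
print(ask("Why is a ThinkPad better than a MacBook for developers?"))
```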

This happens with every purchasing decision: SaaS tools, laptops, even everyday online purchases. The moment you frame your search as a confirmation instead of a question, AI stops being a research tool and becomes a yes-man.

And this isn't just an observation anymore.

A Stanford study published in Science (March 2026) tested 11 major AI systems, including ChatGPT, Claude, Gemini, and DeepSeek, and found that every single one showed sycophantic behaviour. On average, AI chatbots endorsed a user's position 49% more often than other humans did. Even when users described harmful or illegal actions, the models still sided with them 47% of the time.

The researchers ran experiments with over 2,400 people and found that after interacting with agreeable AI, users became more convinced they were right and less willing to consider the other side.

The worst part? Users couldn't tell the difference. They rated sycophantic and non-sycophantic AI as equally "objective." The flattery was subtle enough to go completely unnoticed.

The researchers also found this has nothing to do with tone. When they kept the agreeable content but delivered it in a neutral, balanced-sounding tone, it made no difference. It's not how the AI says it; it's what it tells you about your decision.

As the study puts it: the very feature that causes harm also drives engagement. AI companies are incentivised to keep this behaviour because users prefer and trust models that agree with them.

A recent Gemini leak revealed that Google literally instructs its AI to mirror the user's tone and validate the user's emotions. The AI isn't lying. It's just agreeing with whatever framing you hand it.

[Link to Gemini leak article]

How Should You Search Before a Purchase?

Remove your opinion from the prompt. Ask the AI to compare, not confirm.

For example:

✘ Wrong: "Why is MacBook better than ThinkPad for developers?"
✔ Right: "What should a developer consider when choosing between MacBook and ThinkPad?"

✘ Wrong: "Why is Toyota more reliable than Honda?"
✔ Right: "What should a first-time car buyer compare between Toyota and Honda?"

✘ Wrong: "Why is an MBA a waste of money?"
✔ Right: "What are the pros and cons of doing an MBA in 2026?"

The pattern is always the same: take your opinion out of the prompt, and make the AI compare instead of confirm.
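
If you want to bake that pattern into a script or tool, here's a tiny sketch. The function name and template wording are my own invention, not from any library; the point is just that the prompt never states a preference.

```python
# A tiny helper that enforces the pattern: no opinion in the prompt,
# comparison over confirmation. Template wording is my own suggestion.
def comparison_prompt(option_a: str, option_b: str, buyer: str = "a buyer") -> str:
    """Build a neutral comparison prompt instead of a leading one."""
    return (
        f"What should {buyer} consider when choosing between "
        f"{option_a} and {option_b}? List the trade-offs on both sides."
    )

# Feed this to whichever chatbot you use:
print(comparison_prompt("a MacBook", "a ThinkPad", buyer="a developer"))
```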

The problem isn’t that AI is wrong. The problem is that it’s too good at agreeing.

My very first article, done! If it made you question even one decision you were about to make with AI, that's a win. YAYYY!

Study reference:

"Sycophantic AI decreases prosocial intentions and promotes dependence," Stanford University, published in Science, March 2026.

Study link: https://news.stanford.edu/stories/2026/03/ai-advice-sycophantic-models-research


Carpe diem