AI SEARCH OPTIMIZATION

Welcome to AI Search Optimization 🎉

Hey you...I see you and appreciate you for being an early subscriber!
I've spent two and a half years writing AI for Ecommerce, obsessively tracking how AI is reshaping online commerce at truly inadvisable hours. But somewhere around my 47th late-night deep-dive into Google's latest existential crisis, I realized: the search revolution isn't just an ecommerce story. It's an everything story.
Whether you're running a SaaS company, a local accounting firm, or a DTC brand selling artisanal hot sauce (respect) — the rules of visibility are being rewritten in real time. AI Overviews, answer engine optimization, LLM citations, agentic discovery — all of it.
This newsletter is practical, honest analysis of what's changing and what it means for your business. No jargon. No breathless hot takes without receipts.
I'm thrilled you're here — in the "nervous host who's triple-checked the appetizers" way, not the "corporate sign-off" way. This is new, so it'll evolve. Hit reply anytime with suggestions. I read everything.
Now let's get into it.
Feature Story
How to Track Your AI Visibility with Prompts (Without Enterprise Tools)
Okay, I need to confess something: I spent last Friday night running the same prompt through ChatGPT eleven times in a row, watching it recommend completely different brands each time, and muttering "what even ARE you" at my laptop like a person who definitely has their life together.
But here's the thing — that mildly unhinged experiment actually revealed something important. There's a growing consensus that AI search engines are becoming a genuine source of discovery for businesses across every industry. What nobody seems to agree on is how to actually measure whether your brand shows up when someone asks ChatGPT, Perplexity, or Gemini for a recommendation in your space.
The instinct for most marketers is to treat this like SEO rank tracking — punch in a keyword, check your position, move on, feel briefly productive. That instinct is wrong, and acting on it will waste your time. (I know because I wasted mine first. You're welcome.)
Why "Position One" Doesn't Exist Anymore (And Why That's Fine, Probably)
AI responses are volatile in a way that would give any traditional SEO professional heart palpitations. Ask ChatGPT for the best project management tool for small teams, and you might get Monday.com and Asana at the top. Ask again thirty seconds later and suddenly ClickUp is leading the pack like it bribed someone. The response shifts depending on phrasing, model version, whether the engine triggered a web search, your conversation history, and factors that neither you nor the platform fully controls.
There is no stable "position one" in AI search. There may never be. Which, if you've spent the last decade obsessing over rankings, is either terrifying or oddly liberating. (My brain keeps toggling between both.)
Yet directional insight still matters. If your brand never appears across dozens of relevant prompts while three competitors show up consistently, that tells you something actionable. If AI engines are confidently stating your service lacks a capability it actually has — with that infuriating "here to help!" energy — that's a problem you can fix.
Glen Allsopp of Ahrefs recently published a detailed framework for AI visibility tracking, anchored around a concept he calls prompt clustering — grouping semantically similar prompts by intent and analysing aggregate patterns rather than fixating on individual responses. The logic is sound, and crucially, it doesn't require Ahrefs or any enterprise tool to implement.
What follows is a practical methodology any brand can build with ChatGPT, a spreadsheet, and data sources you likely already have. No five-figure software contracts required. (Your CFO can thank me later.)
The Mental Model That Stops You Going Insane
Before building anything, it helps to be precise about what AI visibility tracking actually is — and what it is not.
It is not a ranking system. It is not stable. It is not something you configure once and check quarterly while feeling smugly data-driven. And it is not a replacement for analytics, server logs, or traditional search data.
What it is: a way to detect directional signals about how AI engines represent your brand, your competitors, and your category over time. Whether your brand gets mentioned at all. Whether the information is accurate. Which competitors appear alongside you. What sources get cited. What attributes AI associates with your business.
Here's the critical bit: you are not tracking individual prompts. You are tracking clusters of prompts grouped by intent, and you're reading the aggregate response patterns across those clusters. One ChatGPT answer to one question on one afternoon is noise. Thirty related prompts tested across two or three engines over several weeks — that starts to become signal.
It's the difference between checking the weather once and understanding the climate. (Look at me getting all metaphorical at midnight. My English teacher would be proud.)
Building Prompt Clusters from Real Demand (Not Your Imagination)
The foundation of any useful tracking system is the prompts themselves, and the most common mistake is generating them from imagination rather than evidence. Sitting in a brainstorm inventing what customers might ask an AI assistant produces prompts that sound plausible but may bear no resemblance to actual demand patterns. It's like writing customer personas based on who you wish your customers were rather than who they actually are. (We've all done it. No judgement.)
There are better sources, and you're probably already sitting on them.
Google Search Console remains wildly underused for this. Filter for question-format queries — those starting with "what," "how," "is," "can," "should," "which" — and you have a direct window into the language real people use when they have intent that matters. Long-tail queries of six words or more are gold because they approximate the conversational style people use with AI assistants. A query like "best CRM for consultants who hate CRMs" in Search Console maps almost directly to how someone would phrase that to ChatGPT.
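If you want to do this filtering at scale rather than eyeballing it, here's a minimal sketch. It assumes you've exported your Search Console performance report to CSV with a "Query" column — the filename and column name are assumptions, so adjust to match your actual export:

```python
import csv

# Question words that signal conversational, AI-assistant-style intent
QUESTION_STARTERS = ("what", "how", "is", "can", "should", "which")

def conversational_queries(path, min_words=6):
    """Pull question-format and long-tail queries from a GSC CSV export.

    Assumes a CSV with a 'Query' column; keeps queries that either start
    with a question word or run six-plus words (the long-tail threshold).
    """
    keep = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            q = row["Query"].strip().lower()
            is_question = q.startswith(QUESTION_STARTERS)
            is_long_tail = len(q.split()) >= min_words
            if is_question or is_long_tail:
                keep.append(q)
    return keep
```

Run it over a quarter's worth of queries and you'll have a raw seed list for prompt clusters in seconds rather than an afternoon of scrolling.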
Customer support data is arguably even more valuable, and it's almost entirely absent from the conversation about AI visibility. Support tickets, chat transcripts, sales call objections, and review complaints contain the exact friction points that determine whether a customer converts or walks away.
Forums, Reddit, and community discussions offer phrasing that sits much closer to natural language than any optimised landing page. Forum users aren't thinking about keywords. They write things like "anyone switched from HubSpot to something less bloated for a small agency?" — which is precisely how someone talks to an AI assistant. Google's Discussions and Forums feature (accessible via the &udm=18 parameter, for those who enjoy URL parameters as a hobby) surfaces these conversations efficiently.
Reviews and user-generated content — both yours and your competitors' — are a rich and frequently overlooked source. Mining reviews for recurring praise, common complaints, comparison language, and specific use cases generates prompts that map directly to the attributes AI engines associate with businesses. If your competitor's reviews consistently mention customer support while yours emphasise ease of setup, those become distinct prompt clusters worth tracking separately.
People Also Ask in Google search results, while not a direct reflection of AI query patterns, remains effective for expanding clusters quickly. A single seed query can generate dozens of adjacent questions that reveal how people think about the decision — including concerns you may not have considered. (Humbling, but useful.)
A Framework That Won't Make You Want to Cry
For each high-value topic your brand competes in, build prompts across several angles. Not every angle applies to every business, but the framework provides consistent coverage.
Category discovery prompts capture top-of-funnel intent: "What are the best [solutions] for [use case or persona]?" These are the prompts where you either appear in the consideration set or you don't.
Constraint-based prompts add specificity: "What's the best [solution] if I need [specific requirement]?" These map to the filtered, specific queries that increasingly happen in AI interfaces.
Comparison prompts test direct competitive positioning: "[Your brand] vs [competitor] for [use case]." These reveal how AI engines frame your offering relative to alternatives — and which attributes they emphasise. (Sometimes the results are flattering. Sometimes you need a stiff drink.)
Validation prompts assess trust signals: "Is [brand] worth it in 2026?" or "Is [brand] reliable long-term?" These are where review sentiment, third-party mentions, and content quality have outsized influence.
Accuracy checks test whether AI engines have correct information: "Does [brand] offer [specific capability]?" or "What is [brand's] pricing model?" Inaccurate responses here represent direct revenue leakage. Silent, invisible revenue leakage. The worst kind.
Alternative and substitution prompts monitor competitive threats: "What are alternatives to [brand]?" If AI consistently recommends three competitors and omits you, that's a visibility gap worth investigating.
Persona-contextualised prompts approximate personalised queries: "I'm a [persona] with [constraint]. What would you recommend for [goal]?" These are harder to validate but useful for understanding how AI engines match offerings to specific profiles.
For most brands, starting with three to five high-value topic areas and building eight to ten prompts per cluster provides a workable starting point. The discipline is in starting narrow and being consistent. (Over-engineering from day one is how tracking systems end up in the graveyard of "ambitious Q1 initiatives.")
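To make "eight to ten prompts per cluster" less painful to maintain by hand, the seven angles above can be captured as templates and expanded per topic. This is an illustrative sketch — the template wording, slot names, and brand names are all placeholders, not a canonical set:

```python
# One template per angle from the framework above; bracketed slots
# are filled in per topic when the cluster is built.
ANGLES = {
    "category":    "What are the best {category} for {persona}?",
    "constraint":  "What's the best {category} if I need {requirement}?",
    "comparison":  "{brand} vs {competitor} for {use_case}",
    "validation":  "Is {brand} worth it in 2026?",
    "accuracy":    "Does {brand} offer {capability}?",
    "alternative": "What are alternatives to {brand}?",
    "persona":     "I'm a {persona} with {requirement}. What would you recommend for {use_case}?",
}

def build_cluster(topic, **slots):
    """Expand every angle whose slots are all supplied for this topic."""
    prompts = {}
    for angle, template in ANGLES.items():
        try:
            prompts[angle] = template.format(**slots)
        except KeyError:
            continue  # this angle needs a slot the topic didn't provide
    return {"cluster": topic, "prompts": prompts}
```

Not every topic needs every angle — the `try/except` simply skips angles you haven't filled in, which keeps early clusters narrow, exactly as the framework recommends.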
Logging Without Losing Your Mind
Once you have your clusters, the testing protocol matters more than the testing tool. At minimum, run prompts across ChatGPT and Perplexity. Adding Gemini or Copilot broadens coverage but isn't essential at the outset.
For each response, log structured observations rather than taking screenshots and hoping you'll remember what you noticed three weeks later. (Narrator: you will not remember.) A simple tracking sheet should capture: the date, which engine you tested, the cluster name, the specific prompt, whether a web search was triggered, whether your brand was mentioned, approximate order of mention, recommendation sentiment, which competitors appeared, attributes associated with your brand, sources cited, accuracy issues, and — this is the important column — what action the finding suggests.
If a finding doesn't suggest something you can do — update a page, create missing content, correct a third-party mention, strengthen a citation source — then the finding is interesting but not useful. Tracking without a plan to act on what you find is an expensive way to feel informed. (And we already have Twitter for that.)
A lightweight scoring approach that does add value: rate each response on a simple zero-to-two scale across five dimensions — whether your brand was mentioned, whether the information was accurate, whether the response aligned with commercial intent, whether cited sources were relevant and current, and how you were positioned relative to competitors. Averaged across a cluster, this gives you a rough but trackable directional metric over time.
When AI Confidently Gets You Wrong
Perhaps the most immediately actionable outcome of all this work is discovering that AI engines are presenting incorrect information about your business with the serene confidence of someone who has never once doubted themselves. This happens more often than you'd expect, and the consequences compound because AI responses carry an authority that a random forum post does not.
When you find inaccuracies, work across multiple layers simultaneously. First, fix your own properties — service pages, FAQ sections, documentation, capability descriptions, and pricing should be unambiguous, current, and structured so AI crawlers can parse them clearly. If your own site is the source of confusion, nothing external will help.
Second, address the third-party ecosystem. AI engines draw heavily from reviews, expert mentions, comparison articles, and trusted publications. If a widely cited review from 2024 states your product lacks a feature you've since added, that stale information is actively working against you in every AI response that references it. Updating those mentions isn't optional — it's maintenance.
Third, create content for the gaps. If AI engines consistently answer questions about a use case your business serves well but your site has no dedicated page addressing it, that absence is visible in the response data. Comparison pages, use-case content, and objection-handling material aren't just SEO assets anymore — they're inputs to the information ecosystem that AI engines synthesise.
Fourth — and this is the part most people skip because it feels tedious — repeat the tracking. After making changes, re-run the same prompt clusters on the same cadence and measure whether responses shift. The feedback loop between tracking, action, and re-tracking is where this methodology produces compounding value rather than a one-time audit.
The Bottom Line
AI visibility tracking isn't glamorous. It won't generate the dopamine hit of watching a keyword hit position one. But in a world where AI engines are increasingly the first — and sometimes only — place people go to evaluate options, knowing how those systems represent your brand is no longer optional.
The good news: you don't need enterprise tools to start. A spreadsheet, some real demand data, and a willingness to run the same prompts enough times that your laptop starts to judge you. That's the kit.
The brands that build this discipline now — while competitors are still arguing about whether AI search "really matters yet" — are going to have a meaningful head start. And in a landscape that's shifting this fast, a head start is worth more than a perfect system.
Now if you'll excuse me, I have eleven more prompts to run and a laptop to apologise to.
Behind The Writing
ABOUT THE WRITER

Jo Lambadjieva is an entrepreneur and AI expert in the e-commerce industry. She is the founder and CEO of Amazing Wave, an agency specializing in AI-driven solutions for e-commerce businesses. With over 13 years of experience in digital marketing, agency work, and e-commerce, Jo has established herself as a thought leader in integrating AI technologies for business growth.