AI SEARCH OPTIMIZATION

Feature Story
Bing Just Showed You Which Pages AI Actually Cites — And Everyone Yawned at the Wrong Moment
Okay, I need to talk about something that happened last week that barely made a ripple, and I'm slightly furious about it. Microsoft — yes, Microsoft, the company your brain auto-files under "Teams notifications I'm ignoring" — just released what might be the most useful piece of search infrastructure this year.
And practically nobody noticed. Which is either a testament to how effectively Microsoft has trained us to tune them out, or a damning indictment of our collective attention spans. (Probably both. Definitely both.)
Here's what happened: Bing Webmaster Tools now lets you see exactly which of your pages are being cited in AI-generated answers, and which queries triggered those citations. Two datasets that used to sit in separate rooms making awkward eye contact through a window are now actually connected.
I know. Bing. I can hear you already. But stay with me, because this matters way more than the branding suggests.
The "Flying Blind" Problem (That We've All Been Pretending Doesn't Exist)
For the past couple of years, we've all been nodding along to the "optimise for AI" narrative while having essentially zero visibility into whether AI systems are actually referencing our content. We could track rankings. We could monitor clicks. We could obsessively refresh Google Search Console like it owed us money.
But the moment someone asked "so, is your content actually showing up in AI answers?" the honest response was basically a shrug and a prayer. We were optimising for a system we couldn't measure, which — if you think about it for more than three seconds — is genuinely unhinged behaviour. Like training for a marathon but never checking whether you're actually signed up for the race.
Microsoft's AI Performance dashboard, which launched back in February, started addressing this. It showed citation counts, which pages were being referenced, and trend data over time. Useful stuff. But it was a bit like being told "congratulations, someone read your book" without being told which chapters, or why, or in response to what question. Nice to know, impossible to act on.
The update that landed last week fills that gap. And suddenly, the dashboard goes from "interesting reporting surface" to "thing you can actually make decisions with." Which, in the world of SEO tooling, is a rarer transformation than you'd think.
Grounding Queries: Not What You Think They Are
Here's where it gets properly interesting — and where most people are going to get tripped up, so pay attention. (I say this as someone who got tripped up myself and spent an embarrassing amount of time staring at the dashboard before the penny dropped.)
The queries you see in the dashboard aren't search queries. They're not what humans typed into a search bar. They're grounding queries — the internal phrases that the AI's retrieval system generates when it's constructing an answer.
So when someone asks Copilot something like "what's the best way to reduce bounce rates on a SaaS landing page," the grounding queries the system generates might be more like "reduce website bounce rate," "SaaS landing page optimisation," or "bounce rate improvement strategies." Shorter. More generic. Decomposed into component information needs.
In other words, you're not seeing user intent. You're seeing how the AI interprets user intent. Which is a completely different thing. And understanding that distinction is the difference between looking at this data and going "huh, interesting" versus actually knowing what to do with it.
It means the game isn't about matching exact phrases people type. It's about whether your content clearly and authoritatively addresses the underlying topic the AI is trying to ground its answer in. Less "keyword matching," more "does this page convincingly know what it's talking about on this subject."
Which, honestly, has been the direction of travel for a while. But now you can actually see it happening.
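To make the distinction concrete, here's a minimal sketch. Nothing below reflects Microsoft's actual retrieval internals; the page copy, the grounding queries, and the term-overlap heuristic are all invented for illustration (real systems use embedding models, not word counting). The point it demonstrates: a page can cover a grounding query's topic without ever containing the exact phrase the human typed.

```python
import re

def terms(text: str) -> set[str]:
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z']+", text.lower()))

def covers(page_text: str, grounding_query: str, threshold: float = 0.6) -> bool:
    """True if the page mentions most of the grounding query's terms.
    (Invented heuristic, purely illustrative.)"""
    query_terms = terms(grounding_query)
    overlap = len(query_terms & terms(page_text)) / len(query_terms)
    return overlap >= threshold

# Hypothetical page copy -- note it never contains the user's exact question.
page = ("Our guide to bounce rate: how to reduce bounce rate on SaaS "
        "landing pages, with optimisation strategies and website tips.")

user_question = "what's the best way to reduce bounce rates on a SaaS landing page"

print(user_question.lower() in page.lower())           # exact-phrase match: False
print(covers(page, "reduce website bounce rate"))      # topic coverage: True
print(covers(page, "SaaS landing page optimisation"))  # topic coverage: True
```

Exact-phrase matching fails; topic coverage succeeds. That, in miniature, is why "does this page convincingly know its subject" beats "does this page contain the keyword."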
The Bidirectional Bit (Where It Gets Genuinely Useful)
The mapping works both ways, and this is where I started getting unreasonably excited. (My partner walked in, saw me grinning at a Bing dashboard, and left the room without comment. We've reached that stage of the relationship.)
You can click on a grounding query and see which of your pages were cited for it. Or you can select a specific page and see which grounding queries are driving its citations. Same data, two entry points, completely different strategic applications.
From the query side, you can spot where your site has strong citation presence and where it's conspicuously absent. From the page side, you can see whether a page is being cited broadly across many query types — suggesting the AI treats it as a general authority — or narrowly for one specific topic.
That's the kind of information that tells you which pages to update, which topics to expand into, and where the gaps are. Not in theory. Not based on vibes and educated guesses. In actual, observable data from an actual AI retrieval system.
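As a sketch of why the two entry points matter, imagine the citation data as plain records. (The real dashboard is a UI, not an API, and this record shape is invented for illustration.) Indexing the same rows in both directions gives you the query-side view and the page-side view described above, and makes the "broad authority vs narrow topic page" distinction a one-liner:

```python
from collections import defaultdict

# Hypothetical citation records -- invented paths and queries.
citations = [
    {"query": "reduce website bounce rate",      "page": "/blog/bounce-rate"},
    {"query": "saas landing page optimisation",  "page": "/blog/bounce-rate"},
    {"query": "saas landing page optimisation",  "page": "/guides/landing-pages"},
    {"query": "churn reduction tactics",         "page": "/guides/retention"},
]

# Same data, two indexes: query -> pages, and page -> queries.
pages_by_query = defaultdict(set)
queries_by_page = defaultdict(set)
for c in citations:
    pages_by_query[c["query"]].add(c["page"])
    queries_by_page[c["page"]].add(c["query"])

# Page-side view: pages cited across multiple query types look like
# general authorities; single-query pages are narrow specialists.
broad_pages = [p for p, qs in queries_by_page.items() if len(qs) > 1]
print(broad_pages)  # ['/blog/bounce-rate']
```

The query-side index answers "where do we have citation presence, and where are we absent?"; the page-side index answers "what does the AI think this page is for?"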
For anyone building a content strategy around AI visibility — and at this point, if you're not at least thinking about it, we should talk — this is the closest thing to a feedback loop we've had.
The Caveats (Because There Are Always Caveats)
Now, before anyone gets carried away and starts reorganising their entire content operation based on a Bing dashboard — deep breath. The limitations are real, and Microsoft is refreshingly upfront about them.
No click-through data. You can see that your page was cited, but not whether anyone actually clicked through to your site. Citation and traffic are not the same thing, and without that connection, you can't measure the commercial value of being cited. It's like knowing your restaurant was recommended by a food critic but having no idea whether anyone actually showed up for dinner.
Sampled data, not comprehensive. The dashboard represents a sample of citation activity, not a complete log. If your site is getting cited infrequently, that activity might not surface at all. This is designed for trends and patterns, not precise accounting. So please don't put exact citation numbers in a board presentation and attribute them to me.
Microsoft ecosystem only. The dashboard covers Copilot, Bing AI summaries, and some partner integrations. It tells you nothing about ChatGPT, Google's AI Overviews, Perplexity, or anything else. Given that Microsoft's infrastructure powers a meaningful chunk of the AI assistant market, the data is significant — but it's one window into one system.
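Given the sampling caveat, the sensible way to read the numbers is directionally. A minimal sketch of that mindset (the weekly counts below are made up): smooth the raw counts with a simple moving average and compare the trend line, not the individual values.

```python
def moving_average(counts: list[int], window: int = 3) -> list[float]:
    """Trailing moving average -- each point averages up to `window`
    preceding values, so early points use shorter windows."""
    out = []
    for i in range(len(counts)):
        chunk = counts[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# Hypothetical weekly citation counts from a sampled dashboard.
weekly = [4, 0, 7, 3, 9, 2, 11]
smoothed = moving_average(weekly)
print([round(x, 1) for x in smoothed])
```

The raw series jumps around because of sampling noise; the smoothed series is what's actually worth reporting, and even then as a direction rather than a figure.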
Meanwhile, Over at Google...
This is the part that's quietly fascinating. Google, the company that has historically set the standard for search analytics, currently offers less granular visibility into AI citation activity than Bing does.
Google folds AI Overviews and AI Mode data into the standard Performance reporting in Search Console. There's no separate AI citation report. No page-level citation counts. No equivalent of the grounding query view.
Microsoft moved first. On this specific thing, in this specific moment, Bing Webmaster Tools is the more advanced product. I realise typing that sentence felt strange, and reading it probably feels stranger. But here we are.
Whether Google will follow with something similar is anyone's guess. But for now, if you want to understand how your content performs in AI-generated answers, Microsoft is the one showing you.
Why This Matters More Than It Seems
Here's the bigger picture, and it's the reason I think more people should be paying attention to this instead of scrolling past it.
AI-generated answers are becoming a primary way people interact with information. Recent industry estimates suggest that half of consumers are already using AI-powered search, with the potential to influence hundreds of billions in revenue within the next few years. As more queries get answered by AI systems rather than traditional blue links, the question of whether your content is being cited — and for which queries — becomes increasingly central to your visibility strategy.
This isn't about replacing traditional SEO. It's about recognising that a new measurement layer now exists alongside it. A page can rank beautifully in organic search and never appear in an AI answer. A page can be cited constantly by AI systems and generate zero direct traffic. These are different signals measuring different things, and the businesses that understand both will have a substantially clearer picture of how their content actually performs in the real world.
The Bottom Line
The Bing AI Performance dashboard doesn't solve the full measurement problem — not even close. But it does something that wasn't possible six months ago: it shows you which of your pages AI systems consider worth citing, and which queries they associate with that content.
Is it comprehensive? No. Is it from Bing? Yes, I know. Is it still the most actionable window into AI citation behaviour that any search engine currently offers?
Also yes.
For anyone producing content and expecting it to be discovered — whether you're running a SaaS blog, a local business site, a B2B resource hub, or a publishing operation — this is data worth having. And more importantly, it's data worth acting on before the rest of your industry catches on that the quiet kid in the corner just built the most interesting toy in the room.
Go set up your Bing Webmaster Tools. I know. I'm sorry. But do it anyway.
Behind The Writing
ABOUT THE WRITER

Jo Lambadjieva is an entrepreneur and AI expert in the e-commerce industry. She is the founder and CEO of Amazing Wave, an agency specialising in AI-driven solutions for e-commerce businesses. With over 13 years of experience in digital marketing, agency work, and e-commerce, Jo has established herself as a thought leader in integrating AI technologies for business growth.
