The Measurement Gap Requires Honest Frameworks

Standard SEO metrics do not capture whether a brand is being cited, how it's being characterised, or which AI platforms are including or excluding it.

The metrics available for AI visibility are inadequate, inconsistently defined and frequently misrepresented by vendors.

And there is no equivalent of Search Console for LLM interactions – so prompt behaviour is largely invisible.

We have developed a confidence framework that tiers AI search metrics by reliability and actionable value. High-confidence metrics (including fan-out query rankings, LLM bot crawl frequency and branded search demand) form the core of reporting and strategy. Mid-confidence metrics provide quarterly directional context. No-confidence metrics (proprietary "AI visibility scores", citation position rankings, exact keyword matching) we ignore entirely.
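The tiering above can be sketched as a simple lookup. This is a minimal illustration, not our production tooling: the tier assignments mirror the examples in the paragraph, while any metric not named there (e.g. `share_of_voice_estimates`) is a hypothetical placeholder.

```python
from enum import Enum

class Confidence(Enum):
    HIGH = "high"   # core of reporting and strategy
    MID = "mid"     # quarterly directional context only
    NONE = "none"   # ignored entirely

# Illustrative tiering; mid-tier entry is a hypothetical example.
METRIC_TIERS = {
    "fan_out_query_rankings": Confidence.HIGH,
    "llm_bot_crawl_frequency": Confidence.HIGH,
    "branded_search_demand": Confidence.HIGH,
    "share_of_voice_estimates": Confidence.MID,    # hypothetical
    "ai_visibility_score": Confidence.NONE,        # proprietary vendor score
    "citation_position_ranking": Confidence.NONE,
    "exact_keyword_matching": Confidence.NONE,
}

def reportable_metrics(tiers: dict) -> list:
    """Return only the metrics reliable enough for core reporting."""
    return sorted(m for m, c in tiers.items() if c is Confidence.HIGH)

print(reportable_metrics(METRIC_TIERS))
```

Keeping the tier assignments in one explicit table makes the "ignore entirely" rule enforceable: anything a vendor dashboard surfaces that is not in the high tier simply never reaches the core report.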

We can measure more than nothing but less than everything, so building the right framework now is key: if and when standardised tooling arrives, those already tracking the right signals will be ahead.

Track what is reliable, use what is directional with appropriate context, and ignore what misleads… regardless of how impressive the vendor dashboard looks.

Recommended Reading

"AIs are highly inconsistent when recommending brands or products"

Research across 2,961 prompts showing less than a 1-in-100 chance of getting the same brand recommendation list twice from the same query
