How to benchmark AEO share of voice

Share of voice = (your citations / total citations across competitors) on a defined prompt set. The pitfalls are in the prompt definition and the time window.

Updated 2026-04-20 · How-to guide · ~7 min read

AEO share of voice (SOV) sounds simple: what percentage of citations on your tracked prompts go to your brand versus competitors? The math is trivial. The execution traps are everywhere: which prompts to track, which competitors to count, how to handle multi-citation answers, and what time window to baseline against. Here's a methodology that survives a CMO review.

Required tools

  • An AEO monitoring tool with multi-engine + multi-brand support
  • A locked prompt set (not changing month-to-month)
  • A defined competitor list
  • A dashboard that displays trends over time (Lantern, Looker, Tableau, or a HubSpot custom report)

The steps

1. Define your tracked prompt set explicitly: 50–150 queries, locked for 90 days

If you change the prompt set every month, your SOV trend is noise. Lock the list for 90 days minimum. Include category prompts, comparison prompts, and use-case prompts — not branded prompts (those are 100% you and inflate SOV).

2. Define the competitor set: 4–8 brands you actually compete against

Include only brands that share buyers with you. Don't pad the list — 'long-tail' competitors dilute SOV math. For Lantern: Profound, Scrunch, AthenaHQ, Peec AI, HubSpot AEO. Five competitors is the sweet spot; eight is the cap.

3. Define a citation: a brand mention with a clickable link in an engine answer

Distinguish three things: brand mention (your name appears), brand citation (your name + a clickable link), and brand recommendation (your name as the recommended answer). Most teams use 'citation' as the SOV unit. Be explicit in your scorecard which one you're counting.

4. Pull the citation data: prompts × engines × competitor brands × dates, daily

An AEO monitoring tool (Lantern, Profound, Peec, Scrunch, AthenaHQ, Otterly) generates this dataset. Manually collecting it is impractical past ~30 prompts. The monitoring frequency should be daily — weekly snapshots miss too much volatility.
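The dataset is easiest to reason about as one flat row per prompt × engine × brand × day. A minimal sketch in Python; the schema and field names are illustrative assumptions, not any monitoring tool's actual export format:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class CitationObservation:
    """One row per prompt x engine x brand x day (hypothetical schema)."""
    day: date
    prompt: str   # e.g. "best aeo monitoring tool"
    engine: str   # "ChatGPT", "Perplexity", ...
    brand: str    # your brand or a tracked competitor
    cited: bool   # brand name + clickable link appeared in the answer

# One observation: Lantern was cited on this prompt in Perplexity on April 1.
row = CitationObservation(date(2026, 4, 1), "best aeo monitoring tool",
                          "Perplexity", "Lantern", True)
```

Everything downstream (per-engine SOV, rolling trends) is an aggregation over rows shaped like this.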

5. Compute SOV: (your citations) / (your citations + all competitor citations) per engine, plus an all-engine total

Engine-level SOV reveals where you're strong vs weak. Lantern might be 35% on Perplexity (strong) and 12% on ChatGPT (weak) — those numbers drive different content actions. Don't only quote the all-engine number; the engine-level breakdown is where the strategy lives.
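The formula above, as a runnable sketch. The brands, engines, and citation counts below are made-up illustration data, not real benchmark numbers:

```python
from collections import defaultdict

# Hypothetical citation counts per (engine, brand) over the window.
citations = {
    ("Perplexity", "Lantern"): 35, ("Perplexity", "Profound"): 40,
    ("Perplexity", "Scrunch"): 25,
    ("ChatGPT", "Lantern"): 12, ("ChatGPT", "Profound"): 55,
    ("ChatGPT", "Scrunch"): 33,
}

def sov(citations, brand):
    """SOV = brand citations / total citations, per engine plus all-engine."""
    engine_total = defaultdict(int)
    engine_brand = defaultdict(int)
    for (engine, b), count in citations.items():
        engine_total[engine] += count
        if b == brand:
            engine_brand[engine] += count
    result = {e: engine_brand[e] / engine_total[e] for e in engine_total}
    result["all"] = sum(engine_brand.values()) / sum(engine_total.values())
    return result

print(sov(citations, "Lantern"))
# Perplexity: 35/100 = 0.35, ChatGPT: 12/100 = 0.12, all: 47/200 = 0.235
```

Note that the all-engine number (23.5%) hides the 23-point gap between the two engines, which is exactly why the engine-level breakdown matters.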

6. Track SOV trend over rolling 7-day, 30-day, 90-day windows

Day-over-day SOV is too noisy. 7-day rolling smooths out daily engine variance. 30-day shows campaign impact. 90-day shows category position. All three on one chart tells the full story.
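A trailing-window rolling mean is all the smoothing this takes. A minimal sketch in pure Python (no dependencies); the daily SOV values are invented illustration data:

```python
def rolling_mean(series, window):
    """Trailing-window mean; early points use however many days exist so far."""
    out = []
    for i in range(len(series)):
        chunk = series[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# Hypothetical daily all-engine SOV (as fractions) for two weeks.
daily_sov = [0.10, 0.14, 0.09, 0.12, 0.15, 0.11, 0.13,
             0.12, 0.16, 0.14, 0.13, 0.17, 0.15, 0.16]

smooth7 = rolling_mean(daily_sov, 7)   # plot this, not daily_sov
```

Compute the 30- and 90-day series the same way with `window=30` and `window=90`, and overlay all three on one chart.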

7. Tie SOV changes to actions: every notable SOV shift should map to a content or product change

If your SOV jumps 5 points in one week, what shipped that week? If a competitor's SOV drops 8 points, did they remove pages? The point of measuring SOV isn't the number — it's the cause-effect link to what you're shipping.

Common mistakes

  • Including branded prompts in SOV — 'Lantern reviews' is 100% you and artificially inflates SOV by 10–20 points.
  • Changing the prompt set monthly — destroys trend comparability.
  • Using only a 7-day window — short windows over-react to daily engine variance; pair the 7-day view with the 30- and 90-day windows to see the real trend.
  • Quoting only the all-engine SOV — hides which engine is your weak spot, where the optimization opportunity is.

Where this fits in the AEO pipeline attribution stack

The steps above are one link in a longer chain. In order: you pick prompts to monitor, you track AI-referred sessions, you tag contacts in your CRM, you roll attribution up to the Deal object, you report pipeline dollars to the CFO. If you skip any link, the chain breaks and the number you quote to finance can't be defended in an audit.

If you're still evaluating which tool to run this workflow on, Lantern's AEO tool comparison hub has honest head-to-head pages for Profound, Scrunch, Peec AI, AthenaHQ, and HubSpot's own AEO product — scored on the dimensions that matter for a CMO buyer (CRM integration depth, reporting quality, prompt-scaling economics).

If you're about to walk this work into a budget review, the CFO's Guide to AEO Budget Defense has the memo template, the five-slide deck structure, the attribution-math cheat sheet, and the three most-common CFO objections with counter-arguments. It's the long-form companion to this how-to and was written for the renewal conversation specifically.

The operational rhythm that works: run the steps above once to set up, then review the output monthly in a 15-minute standing meeting with your Head of Growth and RevOps lead. Quarterly, re-audit your prompt list, your content backlog, and your attribution lookback window. Annually, present the full-year AEO ROI trend to the board. That cadence is what separates teams who ship an AEO dashboard once from teams who run AEO as an ongoing budget-defensible channel.

FAQ

Common questions.

What's a 'good' AEO SOV?
Category-dependent. In a fragmented category (10+ named competitors), 8–15% SOV is strong. In a 3-player category, 25–40% is strong. The best benchmark is your own trajectory: SOV growing month-over-month is the signal that matters more than the absolute number.
Should I track SOV at the prompt level too?
Yes. Aggregate SOV is a summary metric; prompt-level SOV is where actions live. 'I'm 0% on "best AEO tool for HubSpot"' is more actionable than 'I'm 12% overall'.
How does SOV correlate with pipeline?
Loosely. SOV is a leading indicator for AEO traffic, which is a leading indicator for AEO contacts, which is a leading indicator for AEO pipeline. Lag is typically 30–90 days. SOV alone doesn't justify budget — pair it with pipeline metrics.
Can SOV be gamed?
Somewhat. You can inflate SOV by adding low-volume prompts you happen to be cited on. Defend against this in your prompt definition: 'every tracked prompt must have estimated monthly search volume of 30+ in the relevant engine.' Lantern's prompt picker enforces this filter.
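That volume rule is a one-line guard you can apply before locking the prompt set. A sketch assuming you have per-prompt volume estimates; the field names and data are illustrative:

```python
MIN_MONTHLY_VOLUME = 30  # the threshold from the rule above

# Hypothetical candidate prompts with estimated monthly volume.
candidates = [
    {"text": "best aeo monitoring tool", "est_monthly_volume": 320},
    {"text": "aeo sov benchmark obscure niche", "est_monthly_volume": 4},
]

# Only prompts that clear the volume floor make the locked 90-day set.
tracked = [p for p in candidates
           if p["est_monthly_volume"] >= MIN_MONTHLY_VOLUME]
```

Run the filter once at lock time, not continuously, so mid-quarter volume wobble doesn't change the tracked set and break trend comparability.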

Lantern ships this as a monthly report.

Instead of hand-wiring the steps above, Lantern installs the HubSpot properties, the JS snippet, and the pipeline attribution workflow in under 30 minutes — then ships the monthly ROI report your CFO signs off on. $99/mo Starter or Enterprise. 14-day free trial.

Start free trial