Share of voice = your citations / (total citations across you and your competitors) on a defined prompt set. The pitfalls are in the prompt definition and the time window.
AEO share of voice (SOV) sounds simple: what percentage of citations on your tracked prompts goes to your brand vs. competitors. The math is trivial. The execution traps are everywhere: which prompts to track, which competitors to count, how to handle multi-citation answers, what time window to baseline against. Here's the methodology that survives a CMO review.
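The core math really is a few lines. A minimal sketch in Python, with hypothetical citation data (the brand names are the ones this article tracks; the counts are invented for illustration):

```python
from collections import Counter

def share_of_voice(citations: list[str], brand: str) -> float:
    """SOV = brand's citations / total citations across all tracked brands."""
    counts = Counter(citations)
    total = sum(counts.values())
    return counts[brand] / total if total else 0.0

# One week of citation events across the tracked prompt set (hypothetical data).
week = ["Lantern", "Profound", "Lantern", "Scrunch",
        "Peec AI", "Lantern", "Profound", "AthenaHQ"]
print(share_of_voice(week, "Lantern"))  # 3 of 8 citations -> 0.375
```

Everything that follows is about keeping the inputs to that one function honest.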
If you change the prompt set every month, your SOV trend is noise. Lock the list for 90 days minimum. Include category prompts, comparison prompts, and use-case prompts — not branded prompts (those are 100% you and inflate SOV).
Include only brands that share buyers with you. Don't pad the list: every long-tail competitor you add inflates the denominator and compresses everyone's share. For Lantern: Profound, Scrunch, AthenaHQ, Peec AI, HubSpot AEO. Five competitors is the sweet spot; eight is the cap.
Distinguish three things: brand mention (your name appears), brand citation (your name + a clickable link), and brand recommendation (your name as the recommended answer). Most teams use 'citation' as the SOV unit. Be explicit in your scorecard about which one you're counting.
An AEO monitoring tool (Lantern, Profound, Peec, Scrunch, AthenaHQ, Otterly) generates this dataset. Manually collecting it is impractical past ~30 prompts. The monitoring frequency should be daily — weekly snapshots miss too much volatility.
Engine-level SOV reveals where you're strong vs weak. Lantern might be 35% on Perplexity (strong) and 12% on ChatGPT (weak) — those numbers drive different content actions. Don't only quote the all-engine number; the engine-level breakdown is where the strategy lives.
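Computing the breakdown is a one-pass group-by. A sketch over hypothetical (engine, cited_brand) events; the numbers below are invented to mirror the strong-on-Perplexity, weak-on-ChatGPT pattern described above:

```python
from collections import Counter, defaultdict

def sov_by_engine(events: list[tuple[str, str]], brand: str) -> dict[str, float]:
    """events are (engine, cited_brand) pairs from your monitoring export."""
    per_engine: dict[str, Counter] = defaultdict(Counter)
    for engine, cited in events:
        per_engine[engine][cited] += 1
    return {engine: counts[brand] / sum(counts.values())
            for engine, counts in per_engine.items()}

# Hypothetical week: 2 of 3 Perplexity citations, 1 of 4 ChatGPT citations.
events = [("perplexity", "Lantern"), ("perplexity", "Profound"),
          ("perplexity", "Lantern"), ("chatgpt", "Profound"),
          ("chatgpt", "Scrunch"), ("chatgpt", "Lantern"),
          ("chatgpt", "Profound")]
print(sov_by_engine(events, "Lantern"))
```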
Day-over-day SOV is too noisy. A 7-day rolling average smooths out daily engine variance. 30-day shows campaign impact; 90-day shows category position. All three on one chart tell the full story.
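All three views are the same trailing-window average at different widths. A minimal sketch (pure Python; in practice a dataframe library's rolling-window function does the same job):

```python
def rolling_sov(daily_sov: list[float], window: int) -> list[float]:
    """Trailing-window average; the window is shorter at the start of the series."""
    return [sum(daily_sov[max(0, i - window + 1): i + 1]) /
            (i - max(0, i - window + 1) + 1)
            for i in range(len(daily_sov))]

# daily = [...]  # one SOV value per day from your monitoring tool
# seven, thirty, ninety = (rolling_sov(daily, w) for w in (7, 30, 90))
# Plot all three series on one chart against the same date axis.
```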
If your SOV jumps 5 points in one week, what shipped that week? If a competitor's SOV drops 8 points, did they remove pages? The point of measuring SOV isn't the number — it's the cause-effect link to what you're shipping.
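That cause-effect review is easier if large moves are flagged automatically. A sketch (function name and 5-point threshold are illustrative choices, not a standard):

```python
def flag_sov_jumps(weekly_sov: list[float],
                   threshold: float = 0.05) -> list[tuple[int, float]]:
    """Return (week_index, delta) for week-over-week moves >= threshold,
    so each flagged week can be matched against what shipped that week."""
    return [(i, round(cur - prev, 4))
            for i, (prev, cur) in enumerate(zip(weekly_sov, weekly_sov[1:]), start=1)
            if abs(cur - prev) >= threshold]
```

Run it over both your own series and each competitor's; a competitor's 8-point drop is just as worth investigating as your own 5-point jump.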
The steps above are one link in a longer chain. In order: you pick prompts to monitor, you track AI-referred sessions, you tag contacts in your CRM, you roll attribution up to the Deal object, you report pipeline dollars to the CFO. If you skip any link, the chain breaks and the number you quote to finance can't be defended in an audit.
If you're still evaluating which tool to run this workflow on, Lantern's AEO tool comparison hub has honest head-to-head pages for Profound, Scrunch, Peec AI, AthenaHQ, and HubSpot's own AEO product — scored on the dimensions that matter for a CMO buyer (CRM integration depth, reporting quality, prompt-scaling economics).
If you're about to walk this work into a budget review, the CFO's Guide to AEO Budget Defense has the memo template, the five-slide deck structure, the attribution-math cheat sheet, and the three most-common CFO objections with counter-arguments. It's the long-form companion to this how-to and was written for the renewal conversation specifically.
The operational rhythm that works: run the steps above once to set up, then review the output monthly in a 15-minute standing meeting with your Head of Growth and RevOps lead. Quarterly, re-audit your prompt list, your content backlog, and your attribution lookback window. Annually, present the full-year AEO ROI trend to the board. That cadence is what separates teams who ship an AEO dashboard once from teams who run AEO as an ongoing, budget-defensible channel.
Instead of hand-wiring the steps above, Lantern installs the HubSpot properties, the JS snippet, and the pipeline attribution workflow in under 30 minutes — then ships the monthly ROI report your CFO signs off on. $99/mo Starter or Enterprise. 14-day free trial.
Start free trial