Perplexity's referrer behavior differs from ChatGPT's — it sends referrer headers more reliably, and its clicks convert at 1.5–2x the rate. Here's the engine-specific setup.
Perplexity is the under-tracked AEO engine. It sends fewer total clicks than ChatGPT but converts dramatically better for B2B, because Perplexity cites sources inline and the user has already decided which source to investigate before they click. Here's how to capture and report on it cleanly.
Update the document.referrer check from your ChatGPT setup: add 'perplexity.ai', 'www.perplexity.ai', 'labs.perplexity.ai'. When matched, set utm_source=perplexity&utm_medium=ai-referral. Perplexity (unlike mobile ChatGPT) reliably sends referrer headers, so the snippet hits 90%+ accuracy.
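A minimal sketch of that check, written as a pure function so it's testable. `aiUtmParams` is a hypothetical helper name (not from the article); in the browser you'd pass it `document.referrer` and append the returned params to the landing URL or a first-party cookie:

```javascript
// Perplexity hostnames from the article; extend your existing ChatGPT list.
const PERPLEXITY_HOSTS = new Set([
  'perplexity.ai',
  'www.perplexity.ai',
  'labs.perplexity.ai',
]);

// Return UTM params for a Perplexity referral, or null if it isn't one.
function aiUtmParams(referrer) {
  if (!referrer) return null; // empty referrer: direct visit or stripped header
  let host;
  try {
    host = new URL(referrer).hostname;
  } catch (e) {
    return null; // malformed referrer string
  }
  if (PERPLEXITY_HOSTS.has(host)) {
    return { utm_source: 'perplexity', utm_medium: 'ai-referral' };
  }
  return null;
}
```

In production, call `aiUtmParams(document.referrer)` as early as possible on page load, before any client-side redirect can overwrite the referrer.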
Lists > Active list > ai_source is perplexity. Use this to compare Perplexity's contact-creation and conversion rates against ChatGPT's in side-by-side reports. The comparison is what surfaces Perplexity's outperformance.
Perplexity's citation logic differs from ChatGPT's — it favors recently-updated content and sources with strong on-page schema. A page that's cited on ChatGPT may not be cited on Perplexity, and vice versa. Run separate monitoring (Lantern, Profound, etc. all support engine-specific tracking).
Single-object Contact report. Filter: ai_source is perplexity OR chatgpt. Group by: ai_source. Compare: contacts created in last 30 days, conversion-to-MQL rate, conversion-to-opportunity rate. This is where you'll see Perplexity's per-visit value advantage.
Perplexity Pro users (paid tier) tend to be higher-intent buyers. Distinguish them in your data: utm_source=perplexity-pro for clicks from pro.perplexity.ai. In B2B, Pro users convert at 2–3x the rate of free users.
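A sketch of the tiered mapping — this assumes, per the article, that Pro clicks arrive with a `pro.perplexity.ai` referrer hostname; verify that against your own logged referrers before relying on it:

```javascript
// Map a referrer hostname to a utm_source value, distinguishing the Pro tier.
// Check the Pro subdomain first so it doesn't fall through to the generic bucket.
function perplexitySource(hostname) {
  if (hostname === 'pro.perplexity.ai') return 'perplexity-pro';
  if (hostname === 'perplexity.ai' || hostname.endsWith('.perplexity.ai')) {
    return 'perplexity';
  }
  return null; // not a Perplexity referrer
}
```

Keeping free and Pro traffic in separate utm_source values (rather than a single `perplexity`) lets the 2–3x conversion gap show up in the same HubSpot reports you built above.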
Perplexity surfaces related questions below an answer. Clicks from those have a slightly different URL signature (?ref=related). If your AEO tool (or your own snippet) parses this, you can attribute revenue to the related-question surface separately — useful for content strategy.
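A sketch of detecting that surface. The `?ref=related` signature and its placement on the referrer URL are assumptions taken from the article; confirm where the parameter actually appears in your own traffic before segmenting on it:

```javascript
// True when the referrer URL carries the assumed related-questions marker.
function isRelatedQuestionClick(referrer) {
  try {
    return new URL(referrer).searchParams.get('ref') === 'related';
  } catch (e) {
    return false; // empty or malformed referrer
  }
}
```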
Open Perplexity, search a prompt your brand should appear in, click your citation, fill a form. Within 5 minutes the new contact should have ai_source=perplexity. If it shows ai_source=chatgpt or none, your hostname matching is missing one of Perplexity's domains.
The steps above are one link in a longer chain. In order: you pick prompts to monitor, you track AI-referred sessions, you tag contacts in your CRM, you roll attribution up to the Deal object, you report pipeline dollars to the CFO. If you skip any link, the chain breaks and the number you quote to finance can't be defended in an audit.
If you're still evaluating which tool to run this workflow on, Lantern's AEO tool comparison hub has honest head-to-head pages for Profound, Scrunch, Peec AI, AthenaHQ, and HubSpot's own AEO product — scored on the dimensions that matter for a CMO buyer (CRM integration depth, reporting quality, prompt-scaling economics).
If you're about to walk this work into a budget review, the CFO's Guide to AEO Budget Defense has the memo template, the five-slide deck structure, the attribution-math cheat sheet, and the three most-common CFO objections with counter-arguments. It's the long-form companion to this how-to and was written for the renewal conversation specifically.
The operational rhythm that works: run the steps above once to set up, then review the output monthly in a 15-minute standing meeting with your Head of Growth and RevOps lead. Quarterly, re-audit your prompt list, your content backlog, and your attribution lookback window. Annually, present the full-year AEO ROI trend to the board. That cadence is what separates teams who ship an AEO dashboard once from teams who run AEO as an ongoing, budget-defensible channel.
Instead of hand-wiring the steps above, Lantern installs the HubSpot properties, the JS snippet, and the pipeline attribution workflow in under 30 minutes — then ships the monthly ROI report your CFO signs off on. $99/mo Starter or Enterprise. 14-day free trial.
Start free trial