Research Report · 2026-04-24 · 7 min read

ChatGPT Loves Brian Dean. Gemini Loves Lily Ray. Perplexity Splits the Difference.

We tracked 34 AI SEO creators across ChatGPT, Gemini, and Perplexity for 6 days. Each creator has a different reputation on each model, and the model you optimize for completely changes who surfaces.

By AIAttention Research

Quick answer: Brian Dean appears in 93% of ChatGPT responses and 6% of Gemini responses for AI SEO queries. Julian Goldie is the reverse — 3% ChatGPT, 50% Gemini. Lily Ray is nearly invisible on ChatGPT (4%) but surfaces in 72% of Gemini responses. We ran 1,678 sampling runs across 34 AI SEO creators over 6 days (45,589 extracted mentions) to measure how far this split actually goes. Only one creator — Matt Diggity — clears 50% on all three models. If you're optimizing your AEO strategy against a single model, you are invisible on half the surface area.

The Per-Model Mention Rate Matrix

For each creator, we measured: given one of six AI SEO intent prompts (best, popular, beginner, advanced, tactical, trusted), what percentage of runs on each model mention this creator?

| Creator | ChatGPT | Gemini | Perplexity | Pattern |
| --- | --- | --- | --- | --- |
| Matt Diggity | 89.5% | 86.9% | 54.3% | Universal — the only one |
| Brian Dean (Backlinko) | 93.4% | 6.4% | 58.4% | ChatGPT-heavy |
| Neil Patel | 64.6% | 2.9% | 30.1% | ChatGPT-heavy |
| Aleyda Solis | 61.9% | 22.8% | 36.6% | ChatGPT-first |
| Julian Goldie | 3.1% | 50.0% | 36.4% | Gemini-first |
| Koray Tuğberk GÜBÜR | 2.5% | 51.2% | 12.2% | Gemini-dominant |
| Lily Ray | 4.2% | 72.0% | 44.0% | Gemini + Perplexity |
| Nathan Gotch | 11.5% | 50.4% | 57.8% | Non-ChatGPT |

Every creator except Matt Diggity is reliably visible on only one or two of the three models. Someone who optimizes their content to rank with ChatGPT is not automatically visible in Gemini, and vice versa.

The ChatGPT Cluster vs the Gemini Cluster

Two clearly separate groups emerge in the data:

ChatGPT-dominant cluster: Brian Dean (93% ChatGPT / 6% Gemini), Neil Patel (65% / 3%), Aleyda Solis (62% / 23%). Common thread in the observable profile: long-standing English-language SEO presence (Backlinko a decade old, Neil Patel's content marketing since 2014, Aleyda's English-language conference speaker history). We can only describe the pattern, not prove the mechanism.

Gemini-dominant cluster: Lily Ray (4% ChatGPT / 72% Gemini), Koray Tuğberk GÜBÜR (3% / 51%), Julian Goldie (3% / 50%), Nathan Gotch (12% / 50%). Common thread: more international geography (Koray from Turkey, Julian from the UK) or practitioner-led niches outside the US SEO establishment (Lily Ray's conference-circuit authority, Nathan Gotch's tactical YouTube focus).

Perplexity sits between: Brian Dean 58%, Nathan Gotch 58%, Matt Diggity 54%, Lily Ray 44%, Julian Goldie 36%. It pulls from both clusters without strongly favoring either.
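The cluster labels above can be expressed as a simple threshold rule over the per-model rates. This is a minimal sketch, not the classification the report actually used: the 0.5 cutoff mirrors the "clears 50% on all three models" criterion, while the 0.25 cutoff for the dominant clusters is an assumption chosen to match the table.

```python
# Hypothetical cluster assignment from per-model mention rates.
# Rates are fractions taken from the matrix above; the 0.25 threshold
# is an illustrative assumption, not part of the study's methodology.
RATES = {
    # creator: (chatgpt, gemini, perplexity)
    "Matt Diggity":  (0.895, 0.869, 0.543),
    "Brian Dean":    (0.934, 0.064, 0.584),
    "Lily Ray":      (0.042, 0.720, 0.440),
    "Julian Goldie": (0.031, 0.500, 0.364),
}

def cluster(chatgpt: float, gemini: float, perplexity: float) -> str:
    """Assign a descriptive cluster label to one creator."""
    if min(chatgpt, gemini, perplexity) >= 0.5:
        return "universal"          # clears 50% everywhere
    if chatgpt >= 0.5 and gemini < 0.25:
        return "ChatGPT-dominant"
    if gemini >= 0.5 and chatgpt < 0.25:
        return "Gemini-dominant"
    return "mixed"

for name, rates in RATES.items():
    print(f"{name}: {cluster(*rates)}")
```

Running this reproduces the labels in the table: Matt Diggity comes out universal, Brian Dean ChatGPT-dominant, and Lily Ray and Julian Goldie Gemini-dominant.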

These are descriptive patterns. We don't have data that proves why the split exists — that requires separating training data composition, post-training preferences, and retrieval weighting, which we investigate in the "Why the split happens" section further down.

Why This Matters for AEO Strategy

If you are a creator, brand, or agency running AI visibility optimization:

You can't optimize once. A single content strategy will not rank you across ChatGPT, Gemini, and Perplexity. The data is clear that these three models make independent recommendations.

The ChatGPT pattern: creators with a strong blog domain or a long-running English-language SEO presence surface more (Brian Dean 93%, Neil Patel 65%). Brian Dean's YouTube channel has only 1,910 subscribers, yet ChatGPT recommends him anyway — something outside YouTube is driving it.

The Gemini pattern: a broader pool including practitioners, international voices, and authority-signal creators without large YouTube (Lily Ray 72% with 3,140 subs).

The Perplexity pattern: responsive to fresh web content. We've seen it surface new creators and data-driven posts faster than the other two in parallel measurements.

The practical implication: if you're doing AEO for a client, measure each model separately, and allocate content effort based on which models they need to rank on. A US enterprise B2B client may care about ChatGPT first; an international practitioner audience may need Gemini first.

How Each Intent Shifts the Cluster

The intent behind the query also shifts who surfaces.

| Intent | Where ChatGPT and Gemini most agree | Where they most disagree |
| --- | --- | --- |
| "best for learning AI SEO" | Matt Diggity (both ~70%) | Brian Dean ChatGPT-only, Koray Gemini-only |
| "most widely followed" | Matt Diggity (both ~70%) | Brian Dean and Julian Goldie split |
| "most beginner-friendly" | Nathan Gotch (both ~50%) | Disagreement is narrower on this intent |
| "best for advanced SEO" | Matt Diggity (both high) | Koray Gemini-only, Brian Dean ChatGPT-only |
| "most tactical with case studies" | Julian Goldie (Gemini 67% / ChatGPT 12%) | Strong split — Gemini owns this intent |
| "most trusted by experienced teams" | Lily Ray (Gemini 72% / ChatGPT 4%) | Strongest split in the whole dataset |

"Tactical" and "trusted" queries show the widest disagreement between ChatGPT and Gemini. The two models have fundamentally different conceptions of who the "trusted" voices in AI SEO are. For any brand operating in either space, track both models or you'll miss half the surface area.

Why the "trusted" intent matters most for B2B buyers. When a procurement team or marketing director asks an AI "who are the most trusted AI SEO specialists?" — a query tied to high-dollar decisions like consulting retainers, speaker slots, and enterprise agency engagements — ChatGPT returns one list, Gemini returns a near-disjoint other list. Our data shows minimal overlap between the two on this intent. For any high-LTV engagement, the AI your buyer uses directly determines who they consider. If you're a specialist and you rank on one model but not the other, you're invisible to half your potential buyers by the time they're actually ready to spend.

Methodology

  • Data collection window: 2026-04-18 00:35 UTC to 2026-04-24 01:16 UTC
  • Projects: 63 per-(creator, intent) trackers, run under accounts created via aiattention.ai/register
  • Sampling: Approximately 4 runs per day per project across the 6-day window
  • Production totals: 1,678 sampling runs, 4,823 prompt_results (raw AI responses), 45,589 extracted mentions, 1,579 score_snapshots
  • Models sampled: ChatGPT web (GPT-5.3), Gemini web (Gemini 3 Flash), Perplexity web (Sonar)
  • Sampling mechanism: headed Playwright browser per model, no API fallback
  • Measurement: for each (creator, intent) pair, count how many sampled responses mention the creator, divided by total responses for that creator × model. This is a binary mention rate, not weighted by rank position.
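The measurement step above can be sketched in a few lines. This is an illustrative reconstruction, not the production pipeline: the row layout and field names are assumptions, but the computation matches the stated definition (binary mention rate per creator and model, ignoring rank position within a response).

```python
from collections import defaultdict

# One row per sampled AI response: did this response mention the creator?
# Data here is invented for illustration.
responses = [
    ("Brian Dean", "chatgpt", True),
    ("Brian Dean", "chatgpt", True),
    ("Brian Dean", "gemini", False),
    ("Brian Dean", "gemini", True),
]

def mention_rates(rows):
    """Binary mention rate per (creator, model): mentions / total samples.
    Rank position within a response is deliberately ignored."""
    hits, totals = defaultdict(int), defaultdict(int)
    for creator, model, mentioned in rows:
        key = (creator, model)
        totals[key] += 1
        hits[key] += int(mentioned)
    return {k: hits[k] / totals[k] for k in totals}

print(mention_rates(responses))
# {('Brian Dean', 'chatgpt'): 1.0, ('Brian Dean', 'gemini'): 0.5}
```

Because each rate is divided by that model's own sample count, modest differences in completed runs per model do not distort the cross-model comparison.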

Sample sizes per creator vary by model. For Brian Dean: 121 ChatGPT samples, 125 Gemini, 125 Perplexity. For Julian Goldie: 97 ChatGPT, 108 Gemini, 107 Perplexity. Percentages above are computed within each model's own denominator, so ChatGPT's occasional Playwright run failures (3-5% fewer completed runs than Gemini or Perplexity) do not skew the cross-model comparison.

The 8 creators shown are not cherry-picked. We selected creators tracked across at least 3 intents to ensure adequate sample size for model-split analysis. The full dataset (34 creators, including those with 1-intent coverage) tells the same story.

Why the Split Happens

We don't have a definitive answer — only testable hypotheses:

  1. Training data source mix. ChatGPT may be weighted heavier on U.S. mainstream SEO press (Search Engine Land, Search Engine Journal, Marketing Land archives); Gemini may be weighted toward a broader international search corpus including Turkish, UK, and Australian SEO voices.
  2. Post-training RLHF preferences. Each model's preference-finetuning may have rewarded different "which creator is a good recommendation" patterns.
  3. Retrieval source weighting (most likely). When ChatGPT Search pulls fresh web pages for a query, it may use different source trust weights than Gemini. Our next research will test this by examining the citation_events table for which domains each model cites on identical queries — if citation sources correlate with creator clusters, retrieval weighting is the driver.

The source weighting hypothesis is the only one directly testable with data we already collect. Expect a follow-up post on that within 4-6 weeks.
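The planned citation test reduces to a set-overlap measurement. A minimal sketch, assuming each model yields a set of cited domains per query (the domain sets below are invented for illustration, not from the dataset): low Jaccard similarity between models on identical queries would be consistent with the retrieval-weighting hypothesis.

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two citation-domain sets (0 = disjoint, 1 = identical)."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# Invented example domain sets for one query on two models.
chatgpt_domains = {"backlinko.com", "searchengineland.com", "neilpatel.com"}
gemini_domains = {"searchengineland.com", "gotchseo.com", "seobythesea.example"}

print(round(jaccard(chatgpt_domains, gemini_domains), 2))  # 0.2
```

Aggregating this similarity over many identical queries, and checking whether it correlates with the creator clusters, is the follow-up analysis the post describes.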

Key Takeaways

  • Only Matt Diggity clears 50% mention rate on all three models. Every other creator in the top 8 is a 1-2 model specialist.
  • ChatGPT cluster: Brian Dean, Neil Patel, Aleyda Solis. Pattern: long-established English-language SEO authority.
  • Gemini cluster: Lily Ray, Koray Tuğberk GÜBÜR, Julian Goldie, Nathan Gotch. Pattern: international or niche-authority voices.
  • Perplexity splits the difference and pulls from both clusters with slight Gemini tilt.
  • "Tactical" and "trusted" intents show the widest disagreement between ChatGPT and Gemini.
  • Single-model optimization is under-investment if your audience queries multiple AI models.

How to Measure Your Own Model Split

If you're a creator, brand, or agency in AI SEO — you can see your own per-model mention rate by running a free AIAttention project. Add your brand, add 5 prompts a buyer might ask, pick ChatGPT + Gemini + Perplexity, and within 24 hours you'll see the same split analysis we ran here, but for your domain.

Data from AIAttention's AI SEO Creator Research Program Phase 2 (63 per-(creator, intent) projects, 1,678 sampling runs, 2026-04-18 to 2026-04-24). Raw CSV exports available in our research repository. Rankings and percentages will evolve as AI models update — we publish refresh cycles quarterly.

Start measuring your AI visibility today. Get Started Free →
