Research · 2026-04-06 · 5 min read

The Paradox: Claude Is the Best AI Model, But Anthropic Ranks 5th in AI Visibility

[Figure: AI visibility experiment results]

By AIAttention Research

Everyone in the AI world seems to agree on one thing: Claude is exceptional. Developers praise its reasoning. Writers love its nuance. Researchers trust its accuracy. And yet, when we asked AI models to recommend AI companies, Anthropic barely made the top half of the list.

That's not an opinion. That's data.

We ran a four-day tracking study across 7 AI companies and 7 AI models, measuring how often each company appeared in AI-generated answers. The results were humbling — at least for Anthropic fans.
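The post doesn't spell out the scoring formula, so here is a minimal sketch of how a mention-based visibility score could be computed. It assumes the score is simply the percentage of AI-generated answers that mention a company, plus a count of how many models mention it at least once; the data and function names are illustrative, not the study's actual methodology.

```python
def visibility_score(answers, company):
    """Percentage (0-100) of answers that mention `company`."""
    if not answers:
        return 0.0
    hits = sum(1 for text in answers if company.lower() in text.lower())
    return round(100 * hits / len(answers), 2)

def models_mentioning(answers_by_model, company):
    """How many models mentioned the company at least once."""
    return sum(
        1
        for answers in answers_by_model.values()
        if any(company.lower() in a.lower() for a in answers)
    )

# Toy example: two models, two prompts each.
answers_by_model = {
    "model_a": ["OpenAI and Anthropic lead the field.", "Try OpenAI's API."],
    "model_b": ["OpenAI is the most popular choice.", "Consider Google DeepMind."],
}
all_answers = [a for answers in answers_by_model.values() for a in answers]

print(visibility_score(all_answers, "OpenAI"))           # 75.0
print(models_mentioning(answers_by_model, "Anthropic"))  # 1
```

A real pipeline would also need entity resolution (e.g. counting "Claude" as a mention of Anthropic) and prompt weighting, but the core metric is just a mention rate like this.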

OpenAI topped the chart with a visibility score of 82.85. No surprise. ChatGPT colonized public consciousness before most people knew what a large language model was. Brand ubiquity has a compounding effect, and OpenAI has been compounding for years.

Google DeepMind came in second at 75.89. Also unsurprising. Google's AI infrastructure is so embedded in everyday life — Search, Gmail, Maps, Android — that brand awareness almost doesn't need to be earned. It's inherited.

Then things get interesting.

Alibaba's Qwen scored 45.40, appearing in 4 of 7 models. A Chinese tech giant with a lesser-known model family outranked two Western AI labs that arguably have better brand recognition in English-speaking markets.

DeepSeek landed at 39.29, mentioned in 3 of 7 models. For a company that exploded onto the global stage just months ago, that's a remarkable foothold.

And then: Anthropic, at 26.83.

A company widely respected in AI safety circles. The creator of Claude — a model that consistently ranks at or near the top in head-to-head benchmarks. Mentioned in 6 of 7 models, which means AI systems know who Anthropic is. They just don't talk about them first.

xAI scored 22.60, and Baidu AI — despite being one of the largest AI labs in the world — scored 0. Completely invisible in AI-generated answers across our entire study.

Here's the irony worth sitting with: the models doing the ranking were themselves trained on human-generated content that reflects human attention. And humans, collectively, talk about OpenAI and Google far more than they talk about Anthropic. So the models have learned to do the same.

This is the core insight. Product quality does not equal brand visibility. Anthropic has arguably built the best model. They've also written some of the most thoughtful research on AI safety and alignment. But in the attention economy — even the AI-mediated version of it — what matters is how much surface area your brand occupies in the conversation.

Being the best-kept secret is not a growth strategy.

This is exactly why we built AIAttention.ai — to give companies a way to measure and track their visibility inside AI-generated answers, not just in search results.

The question worth asking: if AI models are becoming the new front page of the internet, is your brand showing up?

Start measuring your AI visibility today. Get Started Free →
