Research · 2026-04-03 · 5 min read
New Products Are Invisible to AI
Claude Code dropped 57% in AI visibility in one day. Rivian doubled. New products get wildly inconsistent AI recommendations.
By AIAttention Research
{
"title": "Your New Product Is Invisible to AI — And Here's Proof",
"meta_description": "We tracked AI visibility for new vs. established products. Claude Code lost 57% of its AI visibility in a single day. Here's what the data reveals.",
"tags": ["ai", "marketing", "seo", "startup", "webdev"],
"body_markdown": "We tracked AI visibility for a handful of products over 48 hours. On day two, I opened the dashboard and one number stopped me cold.\n\n**Claude Code's AI Attention Score had dropped from 37.50 to 16.07 overnight.**\n\nThat's a **57% collapse in one day**. Not a gradual fade. Not a slow decline. One day visible, next day half-gone.\n\n---\n\n## The Stability Gap Nobody Talks About\n\nHere's what makes AI recommendations different from Google rankings: they're not just about *whether* you appear — they're about *consistency*.\n\nWe measured how often seven major AI models (GPT-4o, Claude, Gemini, Perplexity, and others) recommended each product when users asked relevant questions. The metric is simple: out of all the moments where a recommendation could happen, how often does your brand actually show up?\n\nFor established brands, the answer is boring in the best possible way.\n\n**GitHub Copilot:** 90–93 AAS. Rock solid. Day after day.\n\n**Tesla:** 81–90 AAS. Predictable within a narrow band.\n\nThese brands have trained the models. Their presence in training data, documentation, reviews, and editorial coverage is so deep that AI systems have a confident, consistent opinion. They show up in 7 out of 7 models, almost every time.\n\nNow look at the emerging products.\n\n---\n\n## When AI Visibility Is a Coin Flip\n\n**Claude Code** went from coverage in 4 out of 7 models to 2 out of 7 models between measurements. AAS: 37.50 → 16.07.\n\nThis isn't a knock on the product. Claude Code is genuinely good. But it's newer. AI models have patchy, inconsistent information about it — so their recommendations are patchy and inconsistent too.\n\n**Rivian** showed the opposite swing: AAS jumped from 29.35 to **57.49 in a single day**, going from 4/7 models to full coverage across all 7. Something — a news cycle, a viral review, a fresh model update — shifted perception across the board.\n\n**Windsurf** held steady. Steadily invisible. 
It's been sitting at 12–14 AAS with only 2 out of 7 models recommending it. No crash, no spike. Just a floor.\n\nThe pattern is clear: **established brands have stable AI visibility, emerging brands have volatile AI visibility**.\n\n---\n\n## Why This Actually Matters\n\nA few years ago, if your startup wasn't ranking on page one of Google, you worked on SEO. Structured data, backlinks, content cadence — there was a playbook.\n\nAI search doesn't have an established playbook yet. And unlike Google, which returns ten blue links with consistent rankings, a customer asking ChatGPT \"what's the best code editor\" might get a completely different answer tomorrow than they got today.\n\nIf you're a new product, **you could be recommended in the morning and invisible by afternoon** — based on nothing you did.\n\nThat volatility has real consequences. Customers are increasingly using AI assistants for purchase research, product comparisons, and tool recommendations. If your brand appears in those answers 40% of the time instead of 90%, you're losing sales you'll never even know you lost.\n\nThe worst part: **you won't notice unless you're measuring it**.\n\n---\n\n## The Opportunity Hidden in the Chaos\n\nHere's the flip side.\n\nEstablished brands are already locked in at the top. GitHub Copilot isn't going to *more* than dominate — it's already at 90+ AAS. Their ceiling is visible.\n\nEmerging products have volatile baselines, which means **the floor is also not fixed**. Rivian proved that a single spike of coverage can push you from 29 to 57 overnight. The models don't have a settled opinion about you yet.\n\nThat's not just a risk. It's a window.\n\nThe brands that start building deliberate AI visibility now — consistent documentation, structured data, high-quality coverage that AI systems can confidently cite — are laying a foundation before the market calcifies. 
In two or three years, the visibility gap between established and emerging brands will be as hard to close as the Google authority gap is today.\n\nWe tracked all of this using our monitoring platform, [AIAttention.ai](https://aiattention.ai) — which logs AAS and per-model coverage across multiple AI systems on a scheduled cadence.\n\nThe data is early. But the pattern is already striking.\n\n---\n\n**If customers are using AI to discover products in your category, do you actually know how consistently your brand shows up — or are you flying blind?**"
}
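The post's closing advice names "structured data" as one lever for AI visibility. As one possible concrete form, a product page could embed a schema.org JSON-LD block like the sketch below; the product name, category, and values are placeholders, not details from the article, and nothing in the post confirms which markup AI systems actually weight.

```json
{
  "@context": "https://schema.org",
  "@type": "SoftwareApplication",
  "name": "ExampleProduct",
  "applicationCategory": "DeveloperApplication",
  "description": "Short, factual description that AI systems can confidently cite.",
  "offers": {
    "@type": "Offer",
    "price": "0",
    "priceCurrency": "USD"
  }
}
```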
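The metric described in `body_markdown` ("out of all the moments where a recommendation could happen, how often does your brand actually show up") and the day-over-day percentages quoted in the post can be sketched in a few lines. This is a minimal illustration, not the actual AIAttention.ai pipeline; the function names and the share-of-opportunities formula are assumptions based only on the article's wording.

```python
def aas(appearances: int, opportunities: int) -> float:
    """Hypothetical AI Attention Score: the share of recommendation
    opportunities in which the brand appeared, on a 0-100 scale.
    (Assumed formula; the real AAS weighting is not described.)"""
    return 100.0 * appearances / opportunities


def day_over_day_change(prev: float, curr: float) -> float:
    """Percentage change between two daily AAS readings."""
    return 100.0 * (curr - prev) / prev


# Claude Code's swing quoted in the post: 37.50 -> 16.07
print(round(day_over_day_change(37.50, 16.07)))  # -57, the "57% collapse"

# Rivian's swing: 29.35 -> 57.49, roughly a doubling (+96%)
print(round(day_over_day_change(29.35, 57.49)))  # 96
```

Run against the article's own numbers, the arithmetic checks out: 37.50 → 16.07 is a 57% drop, and 29.35 → 57.49 is a 96% jump.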