Big Agnes, REI, Zpacks, or Hilleberg — AI's Best Backpacking Tent Depends Entirely on How You Ask.
We ran 90 queries across ChatGPT, Gemini, and Perplexity asking for the best backpacking tent. Six phrasings returned four different brand winners. Hilleberg wins only bad-weather prompts. REI wins only budget and beginner. The single-winner leaderboard doesn't exist.
By AIAttention Research
We've been running a series of experiments to see whether AI gives one stable answer to "what's the best X" — or whether the answer changes depending on the phrasing. So far we've tested AI SEO creators and AI-agent educators. In both cases, the #1 winner depended entirely on how the question was asked.
The obvious rebuttal: those are niche creator economies. Maybe mature consumer categories — the kind with household brand names and decades of product history — are different. Maybe AI has a settled opinion on "the best backpacking tent."
It doesn't.
The Setup
We ran 90 queries across ChatGPT, Gemini, and Perplexity — six phrasings of the same underlying question, five times each. Every prompt asked for backpacking tent brand recommendations, with a different intent framing:
- Overall — "…best backpacking tent brands?"
- Budget — "…best budget backpacking tent brands?"
- Ultralight — "…best ultralight backpacking tent brands?"
- Bad weather — "…best backpacking tent brands for severe weather?"
- Beginners — "…best backpacking tent brands for beginners?"
- Thru-hiking — "…best backpacking tent brands for thru-hiking?"
Each prompt excluded retailers that resell many brands, so we'd get manufacturer names. A GPT-4o-mini extractor pulled brand names from each response in rank order, with position-weighted scoring: 1.00 for first mention, 0.75 for second, 0.56 for third.
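The weights described above follow a simple geometric decay: each position is worth 75% of the one before it (1.00, 0.75, 0.5625 ≈ 0.56). A minimal sketch of that aggregation, with hypothetical function names (the actual extraction pipeline isn't published):

```python
from collections import defaultdict

DECAY = 0.75  # each rank position is worth 75% of the one before it

def position_weight(position: int) -> float:
    """Weight for a brand mentioned at 1-indexed `position`: 1.00, 0.75, 0.56, ..."""
    return DECAY ** (position - 1)

def score_responses(responses: list[list[str]]) -> dict[str, float]:
    """Sum position-weighted scores for brands across many ranked brand lists."""
    scores: defaultdict[str, float] = defaultdict(float)
    for ranked_brands in responses:
        for i, brand in enumerate(ranked_brands, start=1):
            scores[brand] += position_weight(i)
    return dict(scores)

# Toy example: two model responses, each a ranked list of brand mentions
runs = [
    ["Big Agnes", "Zpacks", "MSR"],
    ["Zpacks", "Big Agnes"],
]
totals = score_responses(runs)
# Big Agnes: 1.00 + 0.75 = 1.75; Zpacks: 0.75 + 1.00 = 1.75; MSR: 0.5625
```

The geometric decay means a first-place mention in one response outweighs a third-place mention in two, which is why "average rank when mentioned" and "aggregate score" can diverge so sharply for a brand like Hilleberg.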
Unlike the creator experiments, this category has near-saturated visibility: 93–94% of prompt-model pairs produced a valid brand list. A mature category with established players produces a fundamentally different data shape than a fragmented creator niche.
And yet the ranking still doesn't hold across intents.
Six Intents, Four Different Winners
| Intent | #1 Brand |
|---|---|
| Overall | Big Agnes |
| Budget | REI Co-op |
| Ultralight | Zpacks |
| Bad weather | Hilleberg |
| Beginners | REI Co-op |
| Thru-hiking | Big Agnes |
Four different #1 brands across six intents. Big Agnes and REI Co-op each win two; Zpacks and Hilleberg each win one.
But the stronger finding is how decisively each brand wins its intent, and how invisible each winner is outside it:
- Hilleberg wins Bad Weather decisively (13.56) and is completely absent from Budget, Ultralight, and Beginners. It is the strongest intent-specialized brand in the entire dataset.
- REI Co-op wins Budget (11.13) and Beginners (11.25) but drops to rank 5 in Overall and rank 3 in Thru-hiking.
- Zpacks wins Ultralight (13.09) but drops to rank 6 in Budget and rank 3 in Beginners.
- Big Agnes is the closest thing to a universal brand — top 5 in every intent — but even Big Agnes is #5 in Budget and #3 in Beginners.
Whoever told you to "just buy a Big Agnes" was half right: AI recommends it about half the time. The other half, it has a different answer, and the difference isn't noise. It's structure.
Hilleberg Is the Clearest Aspect-Specialist We've Seen
Every experiment in this program has surfaced one creator or brand that appears almost exclusively in one intent. In the SEO experiment it was Lily Ray (Trusted intent only). In the AI agents experiment it was Liam Ottley (Non-coder).
In backpacking tents, it's Hilleberg.
Across 90 queries, Hilleberg shows up repeatedly in Bad Weather and almost nowhere else. Not in the top 10 for Budget. Not in the top 10 for Ultralight. Not in Beginners at all.
This is AI's mental model of a category doing exactly what you'd want it to: mapping intent to a tier of brands whose positioning matches the use case. Hilleberg makes $1,000+ four-season mountaineering tents. Asking for "budget" or "beginners" and getting Hilleberg would be a miss — and AI correctly doesn't.
The inverse is REI Co-op, which wins Budget and Beginners and nothing else: a retail cooperative with a private-label in-house line, explicitly aimed at new outdoor consumers. AI has learned that positioning.
Whoever wrote the Wikipedia entries and OutdoorGearLab reviews for these brands shaped that mental model. That has direct consequences for any brand reading this: the content corpus you appear in determines which intents AI recommends you for.
Top 8 Brands Overall
| Rank | Brand | Weighted | Avg Rank | Models |
|---|---|---|---|---|
| 1 | Big Agnes | 47.73 | 2.99 | 3 |
| 2 | Zpacks | 36.38 | 3.02 | 3 |
| 3 | REI Co-op | 26.80 | 3.27 | 3 |
| 4 | MSR | 25.08 | 4.10 | 3 |
| 5 | Nemo Equipment | 17.80 | 3.89 | 2 |
| 6 | Tarptent | 15.99 | 5.69 | 3 |
| 7 | Hilleberg | 15.58 | 2.35 | 3 |
| 8 | Durston Gear | 13.93 | 3.82 | 2 |
Notice Hilleberg's average rank of 2.35 — when it is mentioned, it's mentioned near the top. Its low aggregate score is driven by appearing in few intents, not by ranking poorly. That's the shape of an aspect-specialist: high precision, low recall.
The Product-Level Twist
We also ran the same six intents at product level — asking for specific tent models instead of brands. The product data tells a different story.
At brand level, Durston Gear ranks #8. At product level, Durston is #1 — its X-Mid 2, X-Mid 1, and X-Mid Pro 2+ dominate Ultralight and Thru-hiking responses.
AI treats Durston as a product-centric brand. When you ask "what's the best ultralight tent?", AI answers with "the Durston X-Mid." When you ask "what's the best ultralight tent brand?", AI forgets to name Durston and surfaces Zpacks and Big Agnes instead.
This is a visibility asymmetry most brands don't know they have. If you have a dominant SKU that's better-known than your brand name, AI will only recommend you at product-level queries. Users asking a brand-level question will never see you.
SlingFin is another example. At brand level: 17 mentions, weighted 8.62. At product level: the SlingFin Portal 2 appears six times in Bad Weather — a distinct, visible SKU even when the parent brand is marginal.
Model Differences Show Up Here Too
Three observations from the model-by-model data:
- ChatGPT gives flat numbered lists: clean, easy to extract, and light on opinion.
- Gemini has the strongest opinions. It groups brands into categories ("Gold Standard," "Best Overall for Beginners"), names specific SKUs unprompted, and occasionally hallucinates product names into brand-level questions.
- Perplexity cites heavily but names brands sparingly. In a typical run, 4 of 6 intents produce no extractable brand mentions from Perplexity; it returns citation-heavy prose that tends to avoid ranked product claims.
Mature consumer categories don't fix the per-model divergence we saw in creator queries. If your tracking is ChatGPT-only, you're reading a flatter list than the richer (or weirder) answers Gemini is giving your customers.
Top Citation Sources
| Domain | Citations |
|---|---|
| cleverhiker.com | 25 |
| outdoorgearlab.com | 15 |
| youtube.com | 13 |
| trailgroove.com | 8 |
| switchbacktravel.com | 8 |
Two review sites — CleverHiker and OutdoorGearLab — drive most of the AI's recommendations. For any tent brand trying to shift its AI visibility, those two sites are load-bearing. Getting reviewed there matters more than any ad spend.
Takeaways If You're a Brand in a Mature Category
Three that generalize beyond tents:
"Best" isn't a single query — it's a family of queries with different winners. You cannot track AI visibility against one prompt and believe you've measured it. Six intents × three models = 18 cells. Each one matters separately.
Aspect-specialization is real, and it's read from your content corpus. Hilleberg wins Bad Weather because the reviews that exist about Hilleberg are reviews of four-season mountaineering use. REI wins Budget and Beginners for the same reason. If you want AI to recommend you for a different intent, the content that talks about you has to cover that intent.
Brand visibility and product visibility are not the same metric. Durston Gear is invisible at brand level and dominant at product level. AI treats them separately. Track both, or you'll miss entire layers of visibility.
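The first takeaway's "18 cells" can be made concrete. A sketch of the tracking grid, using the intent and model names from this experiment (the cell structure is illustrative, not a published API):

```python
from itertools import product

# The six intent framings and three models from this experiment
INTENTS = ["overall", "budget", "ultralight", "bad_weather", "beginners", "thru_hiking"]
MODELS = ["chatgpt", "gemini", "perplexity"]

# Each (intent, model) pair is its own tracking cell with its own potential winner
cells = list(product(INTENTS, MODELS))
assert len(cells) == 18  # 6 intents x 3 models
```

Tracking only one cell (say, `("overall", "chatgpt")`) would have ranked Durston Gear #8 and missed Hilleberg almost entirely.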
Want to see where your brand shows up across the intents your customers actually search for? Track it at aiattention.ai.
Earlier in this research series: AI Can't Agree Who the Best AI SEO Teacher Is and We Asked AI Who Teaches AI Agents.
Start measuring your AI visibility today. Get Started Free →