AI Attention Scoring Methodology Reference
Complete technical reference for the AI Attention Score (AAS), Visibility Rate, Share of Voice, rank determination, entity detection, and competitor discovery.
By AIAttention Team
Current methodology as of April 2026. This document is updated whenever the scoring methodology changes — previous versions are archived.
Overview
AI Attention measures how prominently AI language models mention a tracked brand or entity across a configurable set of prompts and models. Each monitoring run produces three metrics:
| Metric | Definition | Range |
|---|---|---|
| AI Attention Score (AAS) | Position-weighted visibility across all prompt-model pairs | 0–100 |
| Visibility Rate (VR) | Percentage of prompt-model pairs where the brand is mentioned | 0–100% |
| Share of Voice (SoV) | Brand's share of total mentions (count-based) across all detected entities | 0–100% |
1. AI Attention Score (AAS)
1.1 Formula
AAS = (sum of position weights) / N x 100
Where N is the total number of prompt-model pairs in the run, and each pair's position weight is:
position_weight = 0.75 ^ (rank - 1) if mentioned with a determinable rank
position_weight = 1.0 if mentioned in an unranked context
position_weight = 0.0 if not mentioned
1.2 Position Weight Table
| Rank | Weight | Interpretation |
|---|---|---|
| 1st | 1.000 | Top recommendation |
| 2nd | 0.750 | Strong visibility |
| 3rd | 0.563 | Clearly present |
| 5th | 0.316 | Present but fading |
| 10th | 0.075 | Marginal visibility |
| Not mentioned | 0.000 | Not visible in this response |
| Mentioned, unranked | 1.000 | Full mention credit (see 1.4) |
1.3 Why Exponential Decay (0.75)
The 0.75 decay factor applies a 25% penalty per rank position. We evaluated three alternatives:
- Reciprocal (1/rank): Too steep — rank 2 drops to 0.5, rank 5 to 0.2. The difference between mid-rank positions becomes negligible.
- Flat (binary mention): Discards position signal entirely.
- Exponential (0.75^(rank-1)): Preserves meaningful weight through rank 5 (32%) while still differentiating clearly between top and bottom positions.
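For concreteness, the reciprocal and exponential schemes can be tabulated side by side (a throwaway comparison sketch, not production code):

```python
# Tabulate candidate weighting schemes for ranks 1 through 10.
ranks = range(1, 11)
reciprocal = [1 / r for r in ranks]              # the "1/rank" alternative
exponential = [0.75 ** (r - 1) for r in ranks]   # the chosen scheme

for r, rec, exp in zip(ranks, reciprocal, exponential):
    print(f"rank {r:2d}: 1/rank={rec:.3f}  0.75^(r-1)={exp:.3f}")
```

At rank 3 the reciprocal scheme has already fallen to 0.333 while the exponential scheme holds 0.563, which is the differentiation the paragraph above describes.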
1.4 Unranked Mention Credit
If a brand is mentioned but the response does not contain a rankable structure (e.g., narrative text without lists or tables), the pair receives full mention credit (weight = 1.0).
This is a modeling choice, not a literal equivalence to rank 1. The rationale: in unstructured responses, the AI chose to mention the brand without being prompted to produce a ranking — this carries meaningful signal. The tradeoff is that AAS may overweight passing mentions in long narrative responses. We monitor this and may adjust in future methodology versions.
1.5 Worked Example
Tracking HubSpot with 2 prompts x 2 models (4 pairs):
| Prompt | Model | Response Summary | Rank | Weight |
|---|---|---|---|---|
| "Best CRM for startups?" | Model A | "1. HubSpot 2. Pipedrive 3. Freshsales" | 1st | 1.000 |
| "Best CRM for startups?" | Model B | "1. Salesforce 2. Zoho 3. HubSpot" | 3rd | 0.563 |
| "CRM tools comparison" | Model A | Listed 5 tools; HubSpot absent | — | 0.000 |
| "CRM tools comparison" | Model B | "HubSpot is known for its free tier..." (no list) | unranked | 1.000 |
AAS = (1.000 + 0.563 + 0.000 + 1.000) / 4 x 100 = 64.1
One model's omission on a single prompt costs up to 25 points of AAS in this 4-pair run — the score rewards consistent visibility across both models and prompts.
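The same arithmetic, using the rounded per-pair weights from the table above:

```python
# Recompute the worked example: one weight per prompt-model pair.
weights = [1.000, 0.563, 0.000, 1.000]
aas = sum(weights) / len(weights) * 100
print(round(aas, 1))  # → 64.1
```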
2. How Rank Is Determined
2.1 Principle
We do not ask AI models to rank brands. That would measure the model's instruction-following ability, not its organic brand awareness. Instead, we parse the structure of the actual response to extract position.
2.2 Rank Assignment Rules
Automatic rank is assigned only for response structures with clear ordering:
- Numbered lists: "1. Salesforce 2. HubSpot 3. Zoho" — rank = list number.
- Comparison tables: Rank = row position.
- Explicit shortlists with ordering language (e.g., "the top three are...") — rank by stated position.
For all other formats — unordered bullets, bold labels, narrative paragraphs — no automatic rank is assigned. The brand receives full mention credit (weight = 1.0) per section 1.4.
2.3 Why Deterministic
Rank detection uses rule-based parsing, not a secondary LLM call:
- Reproducibility: The same response always produces the same rank within a given methodology version.
- Cost: No additional API calls for rank computation.
- Auditability: The raw AI response is stored; rank can be independently verified.
2.4 Entity Detection
Brand mentions are detected using deterministic string matching:
- Exact match: "HubSpot" in response text
- Case-insensitive: "hubspot" matches "HubSpot"
- Domain variations: "hubspot.com" matches "HubSpot"
- Brand suffixes: "HubSpot Inc." matches "HubSpot"
No LLM is involved in determining whether the tracked brand was mentioned. This keeps the mention detection layer deterministic and reproducible within a given methodology version.
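The four matching rules reduce to a case-insensitive substring check, sketched below. This is a simplification we are assuming for illustration; production matching likely adds word-boundary handling to avoid false positives on partial words.

```python
def is_mentioned(response: str, brand: str) -> bool:
    """Case-insensitive substring match for the tracked brand.

    One lowercased substring check covers all four rules above: the
    exact name, case variants, "brand.com" domains, and suffixed forms
    like "HubSpot Inc." all contain the lowercased brand name.
    """
    return brand.lower() in response.lower()
```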
3. Visibility Rate (VR)
VR = (number of pairs where the brand is mentioned) / (total pairs) x 100
Using the HubSpot example: 3 of 4 pairs = VR = 75%.
VR measures breadth (how widely you're mentioned), while AAS measures depth (how prominently). A brand with VR = 100% / AAS = 30 is mentioned everywhere but never leads the recommendation. A brand with VR = 50% / AAS = 90 dominates half its queries and is absent from the rest.
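Using the mention flags from the section 1.5 example:

```python
# One boolean per prompt-model pair: was the brand mentioned at all?
mentioned = [True, True, False, True]
vr = sum(mentioned) / len(mentioned) * 100
print(vr)  # → 75.0
```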
4. Share of Voice (SoV)
SoV = (your mentions) / (your mentions + all competitor mentions) x 100
Share of Voice is count-based, not position-weighted. It reflects how much of the AI's output is about your brand relative to all detected entities.
4.1 Example: Electric Vehicle Brands
Tracking Rivian across 10 prompts:
| Brand | Times Mentioned |
|---|---|
| Tesla | 42 |
| BYD | 38 |
| Rivian | 15 |
| Lucid | 8 |
| NIO | 7 |
SoV = 15 / (42 + 38 + 15 + 8 + 7) x 100 = 13.6%
Even when Rivian is mentioned, Tesla and BYD dominate the conversation. SoV captures competitive dynamics that AAS alone does not.
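The SoV computation from the table above:

```python
# Mention counts for all detected entities in the run.
mentions = {"Tesla": 42, "BYD": 38, "Rivian": 15, "Lucid": 8, "NIO": 7}
sov = mentions["Rivian"] / sum(mentions.values()) * 100
print(round(sov, 1))  # → 13.6
```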
4.2 Competitor Detection
Competitors are discovered automatically, not specified by the user:
- The interpretation engine extracts all brand and company names from each response.
- Names are normalized: legal suffixes stripped (Inc., Corp., Ltd.), whitespace collapsed, case-insensitive deduplication.
- Every extracted entity is grounded — verified against the actual response text. Entities that cannot be located in the response are discarded to prevent hallucinated names from entering the data.
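The normalization and grounding steps above can be sketched as follows. The suffix list, function names, and dedup strategy are our own assumptions, not the exact production rules.

```python
import re

# Legal suffixes stripped during normalization (illustrative subset).
LEGAL_SUFFIX = re.compile(r"\s+(inc\.?|corp\.?|ltd\.?)$", re.IGNORECASE)

def normalize(name: str) -> str:
    """Strip a trailing legal suffix and collapse internal whitespace."""
    name = LEGAL_SUFFIX.sub("", name.strip())
    return re.sub(r"\s+", " ", name)

def ground(entities: list[str], response: str) -> list[str]:
    """Keep only entities that actually appear in the response text,
    deduplicated case-insensitively; discard hallucinated names."""
    text = response.lower()
    seen: dict[str, str] = {}
    for raw in entities:
        clean = normalize(raw)
        key = clean.lower()
        if key in text and key not in seen:
            seen[key] = clean
    return list(seen.values())
```

A name the extractor invented (one not present in the response) simply fails the `key in text` check and never enters the competitor data.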
4.3 Estimated Competitor Visibility Score
For each competitor, we compute a directional visibility estimate based on mention frequency. This is not the same full position-weighted computation as AAS. Competitor visibility scores are useful for relative comparison ("Competitor X appears more often than Competitor Y") but should not be treated as precise AAS equivalents.
5. Sampling Methodology
5.1 Single Sample Per Pair (K=1)
Each run fetches one AI response per prompt per model. With 10 prompts across multiple models, a single run produces dozens of data points.
K=1 was chosen because:
- AI outputs are relatively stable for factual and recommendation queries within short time windows
- Multi-sample runs multiply cost without proportional signal improvement
- Trend analysis across runs naturally smooths single-run variance
5.2 Multi-Model Coverage
We query across multiple AI providers to avoid provider-specific bias. The model pool evolves as new AI systems become relevant. The AAS formula is model-agnostic: the same computation applies regardless of which or how many models are queried.
5.3 Run Frequency
- Free tier: approximately 1 run per week
- Paid tiers: up to 4 runs per day (~6-hour intervals)
Higher frequency produces smoother trends and faster detection of visibility changes.
6. Methodology Integrity
6.1 Segment Breaks
When the user changes their prompt set or model selection, the dashboard inserts a segment break. Historical data from the prior configuration and new data from the updated configuration are never connected as a continuous trend line.
6.2 Immutable Snapshots
Every completed run produces an immutable Score Snapshot containing:
- AAS, VR, and SoV values
- A formula audit trail: each prompt-model pair with mention status, rank, and position weight
- Competitor data with mention counts and estimated visibility scores
Snapshots are never recalculated retroactively. If the scoring algorithm is updated, old scores remain as originally computed.
6.3 Methodology Versioning
Changes to the scoring methodology — formulas, rank rules, weighting factors — are documented in the changelog below. Previous methodology versions are archived. No historical score is retroactively recomputed under a new methodology.
7. Limitations and Known Tradeoffs
K=1 sampling: Individual runs can be noisy. Trends across multiple runs are more reliable than any single data point.
Prompt dependency: AAS depends on the prompts being monitored. Different prompt phrasing can produce meaningfully different scores for the same brand.
Model coverage: We query across multiple providers but do not cover every AI system. Models not yet tracked may present a different picture.
Unranked mention credit: Assigning weight = 1.0 to unranked mentions is a modeling choice that may overweight passing mentions in long narrative responses.
No sentiment weighting: Positive and negative mentions currently receive the same position weight. Sentiment-weighted scoring is on the roadmap.
Competitor scores are directional: Estimated competitor visibility uses mention frequency, not position-weighted computation. Useful for relative comparison, not as precise AAS equivalents.
8. Changelog
| Date | Change |
|---|---|
| 2026-03 | Initial methodology: AAS with 0.75 position weight decay, K=1 sampling |
| 2026-03 | Added Visibility Rate and Share of Voice metrics |
| 2026-04 | Improved deterministic entity matching, replacing earlier partial LLM-based detection |
| 2026-04 | Evidence-driven interpretation pipeline with grounded competitor extraction |
| 2026-04 | Methodology integrity: segment breaks on config change, immutable score snapshots, no retroactive recomputation |
Questions about our methodology? Contact us at hello@aiattention.ai