Agreement Rate

Agreement rate is a quantitative metric that measures the percentage of card sorting participants who placed a specific card into its most frequently chosen category. It serves as a direct indicator of how clear and intuitive a content item's placement is within your information architecture. High agreement rates signal strong user consensus, while low rates reveal content that participants find ambiguous or difficult to categorize.

Key Takeaways

  • Consensus measurement: Agreement rate quantifies how consistently participants categorize individual cards, providing a per-card clarity score for your information architecture
  • Clear thresholds: 70%+ agreement indicates strong consensus, 50-70% signals moderate agreement requiring validation, and below 50% reveals problematic content placement
  • Actionable insight: Low agreement rates identify specific cards that need relabeling, restructuring, or additional context to match user mental models
  • Complementary analysis: Agreement rate works alongside similarity matrices and dendrograms to build a complete picture of card sorting results
  • Scalable metric: Works consistently across open, closed, and hybrid card sorts regardless of study size

How to Calculate Agreement Rate

Agreement rate uses a straightforward formula: divide the number of participants who placed a card into the most popular category by the total number of participants, then multiply by 100.

Formula: Agreement Rate = (Participants in Top Category / Total Participants) x 100

For example, suppose you run an open card sort with 30 participants and one of your cards is "Return Policy." If 24 participants place it under a category like "Orders & Shipping" while the remaining 6 distribute it across other categories, the agreement rate for that card is:

24 / 30 x 100 = 80%

This 80% agreement rate tells you that "Return Policy" has a clear home in participants' mental models. Most card sorting tools calculate agreement rates automatically, but understanding the underlying math helps you interpret edge cases and validate results.

When multiple categories attract similar numbers of participants, the agreement rate drops accordingly. If "Return Policy" had 14 participants choosing "Orders & Shipping" and 12 choosing "Customer Support," the agreement rate would be only 47% (14 / 30 x 100), signaling genuine ambiguity about where that content belongs.
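
To make the arithmetic concrete, here is a minimal Python sketch. It assumes you have already tallied, for one card, how many participants placed it in each category; the function name and input format are illustrative, not taken from any specific card sorting tool.

```python
from collections import Counter

def agreement_rate(placements: Counter) -> tuple[str, float]:
    """Return a card's most popular category and its agreement rate.

    `placements` maps category name -> number of participants who put
    the card there (a hypothetical input format for this sketch).
    """
    total = sum(placements.values())
    top_category, top_count = placements.most_common(1)[0]
    return top_category, top_count / total * 100

# The "Return Policy" example: 24 of 30 participants chose the top category.
print(agreement_rate(Counter({"Orders & Shipping": 24,
                              "Customer Support": 4,
                              "Returns": 2})))
# -> ('Orders & Shipping', 80.0)
```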

Interpreting Agreement Rate Thresholds

Strong Agreement (70% and Above)

Cards with agreement rates of 70% or higher reflect clear participant consensus. These items have an obvious home in your information architecture and should anchor the categories they fall into. When you see clusters of high-agreement cards landing in the same category, that category is well-defined and likely matches user expectations.

In practice, cards exceeding 80% agreement are the most reliable data points in your study. They represent content items that users intuitively understand and can locate without hesitation, making them ideal candidates for primary navigation labels and top-level categories.

Moderate Agreement (50-70%)

Moderate agreement rates indicate partial consensus. A majority of participants agree on placement, but a significant minority sees the card differently. These cards deserve attention during analysis because they often sit at category boundaries or carry dual meanings depending on context.

Common causes of moderate agreement include ambiguous card labels, content that legitimately spans two categories, and terminology that different user segments interpret differently. Follow-up testing with task-based scenarios can clarify whether these cards need relabeling, cross-linking, or placement in a broader parent category.

Low Agreement (Below 50%)

Cards with agreement rates below 50% lack meaningful consensus. No single category attracted even half of participants, which typically means one of three things: the card label is confusing, the content genuinely belongs in multiple places, or participants lack familiarity with the topic.

Low-agreement cards are valuable signals rather than failures. They highlight exactly where your information architecture needs the most work. Solutions include rewriting card labels for clarity, creating cross-references between categories, adding contextual cues, or conducting follow-up interviews to understand what participants were thinking.
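
The thresholds described in this section translate directly into a small classification function. This is a sketch of the rules above, not a standard baked into any card sorting tool:

```python
def interpret_agreement(rate: float) -> str:
    """Classify an agreement rate (0-100) using the thresholds above."""
    if rate >= 70:
        return "strong: anchor the category around this card"
    if rate >= 50:
        return "moderate: validate placement with follow-up testing"
    return "low: relabel, cross-reference, or research further"

for rate in (80, 60, 44):
    print(rate, "->", interpret_agreement(rate))
```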

Practical Example With Real Numbers

Consider a card sorting study for an e-commerce site with 25 participants and five sample cards:

Card                    Top Category        Participants   Agreement Rate
Track My Order          Orders & Shipping   23             92%
Gift Cards              Shopping            18             72%
Size Guide              Product Info        15             60%
Loyalty Program         Account             11             44%
Sustainability Report   About Us             9             36%

"Track My Order" at 92% is a clear winner; place it confidently under Orders & Shipping. "Gift Cards" at 72% has solid agreement but may also attract participants to a "Payments" category worth investigating. "Size Guide" at 60% sits in moderate territory, likely split between Product Info and Shopping, suggesting it could benefit from appearing in both contexts.

"Loyalty Program" and "Sustainability Report" both fall below 50%, meaning participants had no dominant mental model for where these belong. These cards need further research. You might examine the similarity matrix to see which other cards they were grouped with, or use cluster analysis to discover whether they form their own natural grouping.

Relationship to Other Card Sorting Metrics

Agreement rate focuses on individual cards, answering the question "Where does this card belong?" Similarity matrices shift the focus to card pairs, answering "How often are these two cards grouped together?" Dendrograms then visualize the hierarchical relationships between all cards as a tree structure, showing how categories nest within broader groupings.

These three metrics work best in combination. Start with agreement rates to identify which cards are clear and which are problematic. Then examine the similarity matrix to understand the relationships between problematic cards and their neighbors. Finally, use dendrograms and cluster analysis to determine the optimal number and structure of categories for your information architecture.

Agreement rate is particularly useful as a first-pass filter. Cards with high agreement can be categorized with confidence, freeing you to spend analysis time on the moderate and low-agreement items that actually require judgment calls.
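
To illustrate how agreement rate and similarity differ, the sketch below derives pairwise similarity counts from raw sort data. The data shape, one dictionary of card-to-category assignments per participant, is an assumption made for this example:

```python
from collections import Counter
from itertools import combinations

# One dictionary per participant: card -> category label (hypothetical shape).
sorts = [
    {"Track My Order": "Orders", "Return Policy": "Orders", "Gift Cards": "Shopping"},
    {"Track My Order": "Orders", "Return Policy": "Support", "Gift Cards": "Shopping"},
    {"Track My Order": "Orders", "Return Policy": "Orders", "Gift Cards": "Orders"},
]

pair_counts: Counter = Counter()
for sort in sorts:
    for a, b in combinations(sorted(sort), 2):
        if sort[a] == sort[b]:              # the pair shared a group
            pair_counts[a, b] += 1

# Similarity = share of participants who grouped each pair together.
for pair, n in sorted(pair_counts.items()):
    print(pair, f"{n / len(sorts):.0%}")
```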

Using Agreement Rate to Improve Information Architecture

After calculating agreement rates for all cards, sort them from lowest to highest. The bottom quartile represents your most urgent IA challenges. For each low-agreement card, review the full distribution of category placements to understand competing mental models rather than just the winning category.

Look for patterns across low-agreement cards. If several cards related to account management all show weak agreement, the problem may be structural rather than card-specific. Perhaps participants need a clearer account section, or those tasks are spread across too many conceptual areas.

Combine agreement rate data with tree testing results to validate whether high-agreement placements actually help users complete tasks. A card may have strong sorting consensus but still cause findability problems if the category label itself is unclear during navigation.
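
A minimal sketch of that triage step, assuming you already have a per-card agreement rate for every card (the numbers reuse the example table):

```python
# Hypothetical per-card agreement rates from the example study.
rates = {
    "Track My Order": 92, "Gift Cards": 72, "Size Guide": 60,
    "Loyalty Program": 44, "Sustainability Report": 36,
}

ranked = sorted(rates.items(), key=lambda item: item[1])   # lowest first
quartile_size = max(1, len(ranked) // 4)                   # bottom quartile
print("Most urgent IA work:", ranked[:quartile_size])
# -> Most urgent IA work: [('Sustainability Report', 36)]
```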

Frequently Asked Questions

What is a good agreement rate in card sorting? An agreement rate of 70% or higher is considered strong, indicating clear participant consensus that a card belongs in a particular category. Rates between 50% and 70% represent moderate agreement and suggest the card's placement may need further validation. Agreement rates below 50% signal low consensus and indicate the card may be ambiguous or fit multiple categories equally well.

How do you calculate agreement rate for a card sort study? To calculate agreement rate, divide the number of participants who placed a card into the most common category by the total number of participants, then multiply by 100. For example, if 18 out of 25 participants placed a card in the same category, the agreement rate is 72%. This calculation is performed for each card individually to identify which items have clear placements and which are ambiguous.

How does agreement rate relate to similarity matrices and dendrograms? Agreement rate measures consensus for individual cards, while similarity matrices measure how often pairs of cards are grouped together. Dendrograms build on similarity matrix data to show hierarchical clustering relationships. Together these three metrics provide a complete picture of card sorting results, from individual card clarity to pairwise relationships to overall category structure.
