
Reverse Card Sort

A reverse card sort (sometimes called card-based classification) gives participants a set of content cards and a predefined category structure, then asks them to place each card into the category where they'd expect to find it. It's the validation counterpart to an open card sort — instead of discovering structure, you're testing whether a structure you've already built actually makes sense to users.

Key Takeaways

  • Validation, not discovery: A reverse card sort tests an existing structure; it won't tell you what structure users would have created on their own
  • Quantitative output: You get a clear accuracy percentage per card, making it easy to identify specific items that are misplaced in your IA
  • Bigger sample needed: Plan for 30-50 participants since you're measuring success rates, not exploring patterns

How It Works

You set up categories that mirror your existing (or proposed) navigation. Participants see a shuffled deck of cards and drag each one into the category where they think it belongs. There's usually no option to create new categories or rename existing ones — that constraint is the point.

The primary metric is accuracy: what percentage of participants placed each card into the "correct" category. You define correct ahead of time based on where each item actually lives (or where you plan to put it). Cards that land in the right spot 80%+ of the time are in good shape. Cards below 60% need attention.
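The per-card accuracy calculation is simple enough to sketch. Here's a minimal example in Python; the card names, categories, and placement data are invented purely for illustration:

```python
from collections import Counter

# Hypothetical placements: one dict per participant, card -> chosen category
placements = [
    {"Billing FAQ": "Payments", "Reset password": "Account Settings"},
    {"Billing FAQ": "Payments", "Reset password": "Security"},
    {"Billing FAQ": "Account Settings", "Reset password": "Account Settings"},
]

# Where each card "correctly" lives, defined before the study
answer_key = {"Billing FAQ": "Payments", "Reset password": "Account Settings"}

def card_accuracy(placements, answer_key):
    """Percent of participants who placed each card in its correct category."""
    results = {}
    for card, correct in answer_key.items():
        choices = Counter(p[card] for p in placements if card in p)
        total = sum(choices.values())
        results[card] = 100 * choices[correct] / total if total else 0.0
    return results

for card, pct in card_accuracy(placements, answer_key).items():
    print(f"{card}: {pct:.0f}%")  # both cards land at ~67% in this toy data
```

With real data you'd apply the same computation per card across 30-50 participants, then sort ascending to surface the cards below the 60% threshold first.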

When a Reverse Sort Is the Right Call

The classic use case: you've just finished an open card sort, built a new category structure from the results, and want to verify it works before shipping. The open sort gave you discovery. The reverse sort gives you confidence.

It also works well for auditing existing navigation. If support tickets keep saying "I couldn't find X," a reverse card sort quantifies the problem. You'll see exactly which content items users can't locate and where they're looking instead.

A Help Center Example

A SaaS company redesigned their help center after an open card sort suggested seven top-level categories. Before launching, they ran a reverse sort with 40 participants using 35 support articles as cards.

Most articles landed correctly at rates above 75%. But two articles stood out. "Two-factor authentication setup" split almost evenly between "Account Settings" (45%) and "Security" (42%). "API rate limits" scattered across "Developer Docs" (38%), "Account Settings" (30%), and "Troubleshooting" (25%).

The 2FA article revealed that having both an "Account Settings" and a "Security" category created a false boundary — users saw them as overlapping. The team merged them into "Account & Security," which solved the split. The API rate limits card exposed a deeper issue: developers and non-developers had fundamentally different mental models for where technical content belongs. The team added cross-links rather than forcing a single home.

Without the reverse sort, they would have launched the new structure assuming it worked. The whole study took a day to set up and three days to collect responses.

Limitations to Keep in Mind

A reverse card sort only tests the structure you give it. If your categories are all wrong, participants will still place cards into them — they just won't feel great about it. You might get 60% accuracy across the board and think "not bad," when in reality an entirely different structure would have scored 85%.

It also doesn't capture whether users understand the category labels themselves. A participant might place "Billing FAQ" into "Payments" correctly, but still struggle to find "Payments" in actual navigation because the label doesn't stand out. That's a job for label testing or tree testing.

And beware of the "best guess" problem. Participants will always place every card somewhere, even when they have no confidence in their choice. Unlike an open sort where confusion leads to creative category names (a useful signal), a reverse sort hides uncertainty behind forced choices. Consider adding a confidence rating per placement if you want to capture that nuance.
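One way to implement that suggestion is to record a confidence score with each placement and report it next to accuracy, so that "correct but unsure" cards stand out. A sketch, with invented records of the form (card, chosen category, confidence on a 1-5 scale):

```python
# Hypothetical placement records: (card, chosen_category, confidence 1-5)
records = [
    ("API rate limits", "Developer Docs", 2),
    ("API rate limits", "Developer Docs", 3),
    ("API rate limits", "Account Settings", 1),
    ("Billing FAQ", "Payments", 5),
    ("Billing FAQ", "Payments", 4),
    ("Billing FAQ", "Payments", 5),
]

answer_key = {"API rate limits": "Developer Docs", "Billing FAQ": "Payments"}

def accuracy_and_confidence(records, answer_key):
    """Per-card accuracy plus mean self-reported confidence."""
    stats = {}
    for card, correct in answer_key.items():
        rows = [(cat, conf) for c, cat, conf in records if c == card]
        hits = [cat for cat, _ in rows if cat == correct]
        stats[card] = {
            "accuracy": 100 * len(hits) / len(rows),
            "mean_confidence": sum(conf for _, conf in rows) / len(rows),
        }
    return stats

print(accuracy_and_confidence(records, answer_key))
```

In this toy data, "API rate limits" reaches a passable accuracy but a mean confidence of 2.0, flagging it as a forced-choice guess rather than a settled placement.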

Frequently Asked Questions

What is the difference between a reverse card sort and a closed card sort? The terms are closely related and sometimes used interchangeably, but a reverse card sort specifically emphasizes testing an existing structure rather than exploring possible ones. In a closed card sort, categories are predetermined but the goal is still to understand groupings. A reverse card sort is explicitly a validation exercise — you already have a structure and want to know if users can work within it. The mechanics are identical; the intent and framing differ.

When should you use a reverse card sort instead of a regular card sort? Use a reverse card sort when you already have a category structure you want to validate — after a redesign, after an open card sort has suggested categories, or when auditing existing navigation. Use a regular open card sort when you're starting from scratch and need to discover how users naturally group content. The reverse sort answers "does this structure work?" while the open sort answers "what structure should we build?"

How many participants do you need for a reverse card sort? Since reverse card sorts produce quantitative accuracy data, aim for 30-50 participants to get statistically meaningful results. This is higher than the 15-20 typically recommended for open card sorts, because you're measuring success rates per card and need enough data points for each card-category pair to be reliable. With fewer than 30 participants, a single confused participant can swing a card's accuracy rate by 3-5 percentage points.
