Card Sorting Participant

A card sorting participant is the person doing the sorting — organizing content items into groups that make sense to them. Not all participants are equal. The single biggest factor in whether your card sort produces useful data isn't your card list or your analysis method. It's whether you recruited the right people.

Key Takeaways

  • 15-30 participants is the sweet spot; patterns stabilize around 20-25 for most studies
  • Participant quality matters more than quantity — five well-matched users beat fifty random ones
  • Screen for domain familiarity, not expertise; you want typical users
  • Demographic diversity within your target audience strengthens your results

Who to Recruit (and Who to Avoid)

You need people who represent your actual users. This sounds obvious, but it's where most card sorts go wrong. Teams grab whoever is available — coworkers, friends, a random panel with no screening — and end up with data that reflects the wrong mental models.

For a cooking website, recruit people who cook 2-3 times per week. Not professional chefs, who think about food in specialized categories like "mise en place" and "mother sauces." Not people who never cook, who won't understand why "baking" and "roasting" might be separate categories. You want the middle — people familiar enough to have opinions but not so expert that they've developed unusual organizational schemes.

Screen for these things:

  • Domain familiarity. Do they actually use products like yours? How often?
  • Task relevance. Would they realistically need to find the content you're sorting?
  • Demographic match. Age, location, and technical comfort level should reflect your user base.
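To make the screening concrete, here is a minimal sketch of how those three criteria might be encoded for the cooking-site example below. The function name, fields, and thresholds are illustrative assumptions, not part of any real screener tool.

```python
def passes_screener(cooks_per_week: int, is_professional: bool,
                    uses_recipe_sites: bool) -> bool:
    """Toy screener for a cooking website's card sort.

    Accepts typical home cooks; rejects professionals (unusual mental
    models) and non-cooks (no task relevance). All fields and cutoffs
    are illustrative assumptions.
    """
    if is_professional:          # domain experts sort differently
        return False
    if not uses_recipe_sites:    # no realistic need to find the content
        return False
    return cooks_per_week >= 2   # familiar enough to have opinions

passes_screener(3, False, True)   # typical home cook -> accepted
passes_screener(5, True, True)    # professional chef -> rejected
passes_screener(0, False, True)   # never cooks -> rejected
```

In practice you would run questions like these in your recruiting panel or survey tool, but the logic is the same: filter on behavior, not job titles.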

Skip internal stakeholders. Your product manager already knows where everything lives. That's exactly why their sorting data is useless for understanding how new users think.

How Many Participants You Actually Need

The research is pretty clear: 15-30 participants covers most card sorting studies.

Below 15, the data is too noisy. You'll see patterns that look meaningful but don't hold up. Above 30, you hit diminishing returns — the similarity matrix stabilizes and new participants rarely shift the clusters.

Some nuance: open card sorts benefit from the higher end (25-30) because participants create their own categories, which introduces more variability. Closed card sorts, where categories are fixed, can get stable results with 15-20.
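The "similarity matrix" mentioned above is just the fraction of participants who placed each pair of cards in the same group. A minimal sketch of that computation (the input format here, a list of groups per participant, is an assumption, since tools store this differently):

```python
from itertools import combinations

def similarity_matrix(sorts, items):
    """Fraction of participants who grouped each pair of items together.

    `sorts` is one entry per participant; each entry is that person's
    groups, and each group is a list of item names (assumed format).
    """
    index = {item: i for i, item in enumerate(items)}
    n = len(items)
    counts = [[0] * n for _ in range(n)]
    for participant in sorts:
        for group in participant:
            for a, b in combinations(group, 2):
                i, j = index[a], index[b]
                counts[i][j] += 1
                counts[j][i] += 1
    total = len(sorts)
    return [[c / total for c in row] for row in counts]

sorts = [
    [["baking", "roasting"], ["salads"]],   # participant 1
    [["baking"], ["roasting", "salads"]],   # participant 2
]
m = similarity_matrix(sorts, ["baking", "roasting", "salads"])
# m[0][1] == 0.5: half the participants grouped baking with roasting
```

Watching how little this matrix changes as each new participant is added is exactly the "diminishing returns" effect: past roughly 30 participants, the cell values barely move.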

If you're comparing across user segments — say, beginners vs. experienced users — you need 15-30 per segment. Don't pool them together and assume the patterns apply to everyone.

Briefing and Managing Participants

How you brief participants directly affects data quality. Tell them too much and you bias their sorting. Tell them too little and they flounder.

A good brief covers:

  • What they'll do. "You'll organize items into groups that make sense to you." That's it.
  • No right answers. Emphasize this. Participants who think they're being tested will try to sort "correctly" instead of naturally.
  • Time expectation. 15-20 minutes for most card sorts. If it takes longer than 30 minutes, you have too many cards.

For remote unmoderated studies, your instructions carry even more weight because you can't course-correct in real time. Test your brief with 2-3 people before launching.

One thing to watch for in your data: participants who finish suspiciously fast (under 3 minutes for 30+ cards) or sort everything into one or two groups. These are low-quality responses. Most tools let you filter them out before analysis.
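Those two red flags translate directly into a filter you can run before analysis. A minimal sketch, using the article's rules of thumb as thresholds (the function name and exact cutoffs are assumptions, not any tool's defaults):

```python
def is_low_quality(duration_seconds: int, num_cards: int,
                   num_groups: int) -> bool:
    """Flag responses that look like speeding or lazy sorting.

    Thresholds follow the rules of thumb above: under 3 minutes for a
    30+ card sort, or everything dumped into one or two groups.
    """
    too_fast = num_cards >= 30 and duration_seconds < 180
    too_few_groups = num_groups <= 2
    return too_fast or too_few_groups

is_low_quality(150, 35, 6)   # 35 cards in 2.5 minutes -> flagged
is_low_quality(900, 35, 2)   # everything in two groups -> flagged
is_low_quality(900, 35, 6)   # plausible response -> kept
```

Most card sorting tools expose duration and group counts in their exports, so a pass like this is cheap insurance before you trust the clusters.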

Frequently Asked Questions

How many participants do you need for card sorting? 15-30 participants is the sweet spot for most card sorting studies. Below 15, your data is too noisy to identify reliable patterns. Above 30, you hit diminishing returns — the patterns stabilize and additional participants rarely change the results. For open card sorts, aim closer to 30. For closed sorts, 15-20 is usually sufficient.

Who should participate in a card sort? Recruit people who match your actual target audience in terms of domain familiarity. You want typical users, not power users or complete novices. For a cooking website, recruit people who cook 2-3 times per week — not professional chefs or people who never cook. Screen for relevant experience but avoid domain experts who think about content differently than regular users.

Does participant quality matter more than quantity in card sorting? Yes. Five well-matched participants produce more useful data than 50 random ones. When participants don't represent your actual users, the sorting patterns reflect someone else's mental model — and you'll build navigation that works for the wrong audience. Always prioritize recruitment quality over hitting a target number.
