How Many Participants Do You Need for Card Sorting?
The most common question from researchers running their first card sort: "How many people do I actually need?"
The honest answer: 15–20 for most studies, 30 if you want high confidence. Here's why, and when the number should change.
The Short Answer
| Goal | Participants needed |
|---|---|
| Sanity check / pilot your study | 5 |
| Spot basic patterns and groupings | 15–20 |
| Build a reliable similarity matrix | 20–30 |
| Publishable research / high confidence | 30–50 |
| Multiple subgroup comparisons | 50–100+ |
For most commercial UX projects — redesigning a nav, validating an IA, improving a help center — 20 participants is the sweet spot. Patterns stabilize, the similarity matrix becomes meaningful, and you've spent a reasonable amount of time and money.
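To make "similarity matrix" concrete: it's built by counting, for every pair of cards, the share of participants who placed both cards in the same group. Here's a minimal Python sketch of the idea (the input format is hypothetical, not FreeCardSort's actual export):

```python
from itertools import combinations

def similarity_matrix(sorts, cards):
    """Share of participants who put each pair of cards in the same group.

    `sorts`: one dict per participant mapping card name -> group label.
    (Hypothetical input format, for illustration only.)
    """
    counts = {pair: 0 for pair in combinations(sorted(cards), 2)}
    for sort in sorts:
        for a, b in counts:
            # Count the pair only if this participant sorted both cards
            # and placed them in the same group.
            if a in sort and b in sort and sort[a] == sort[b]:
                counts[(a, b)] += 1
    return {pair: n / len(sorts) for pair, n in counts.items()}

# Three participants sorting four cards:
sorts = [
    {"Pricing": "Billing", "Invoices": "Billing", "Login": "Account", "Profile": "Account"},
    {"Pricing": "Billing", "Invoices": "Account", "Login": "Account", "Profile": "Account"},
    {"Pricing": "Billing", "Invoices": "Billing", "Login": "Account", "Profile": "Account"},
]
scores = similarity_matrix(sorts, ["Pricing", "Invoices", "Login", "Profile"])
print(scores[("Invoices", "Pricing")])  # ≈ 0.67: two of three grouped them together
```

With 20 or more participants these pairwise scores are stable enough to feed the cluster analysis behind a dendrogram; with only a handful of sorters, every score jumps in big steps whenever one person disagrees.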
Why Not More?
Card sorting hits diminishing returns fairly quickly. Jakob Nielsen's analysis of card-sorting data found that results from 15 participants already correlate at roughly 0.90 with results from much larger samples, and the curve flattens from there. Past 15–20 participants from a homogeneous user group, new sorters mostly confirm what you already know rather than revealing new patterns.
Running to 100 participants will look impressive in a research report but rarely changes the actual navigation recommendations you make.
Exception: If you're comparing two distinct user groups (e.g., doctors vs. patients, UK vs. US users), you need enough participants per group — so 15–20 per segment, meaning 30–40 total minimum.
Why Not Fewer?
With fewer than 10 participants, the similarity matrix becomes noisy and individual outliers have outsized influence on the results. A cluster that looks meaningful with 8 participants might disappear entirely with 20.
Five participants is enough to validate that your study is functional and your cards are clear — but not enough to draw IA conclusions you'd act on.
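You can see this on a single pairwise score with a quick simulation. Suppose, purely as an illustrative assumption, that 60% of your real user population would group two particular cards together. Here's how widely the observed score swings at different sample sizes:

```python
import random

random.seed(1)

TRUE_RATE = 0.60   # assumed share of users who group the two cards together
TRIALS = 10_000    # simulated studies per sample size

def score_spread(n):
    """Run TRIALS simulated studies of n participants; return the 5th and
    95th percentiles of the observed co-occurrence score."""
    scores = sorted(
        sum(random.random() < TRUE_RATE for _ in range(n)) / n
        for _ in range(TRIALS)
    )
    return scores[int(0.05 * TRIALS)], scores[int(0.95 * TRIALS)]

for n in (8, 20, 30):
    low, high = score_spread(n)
    print(f"n={n:>2}: 90% of studies observe a score between {low:.2f} and {high:.2f}")
# Roughly: n=8 spans ~0.38-0.88, n=20 ~0.40-0.80, n=30 ~0.47-0.73;
# at n=8 the same pair can look like a lock or like noise purely by chance.
```

Swap in your own rate and sample sizes; the shape of the curve is the point, not the exact numbers.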
The 5-Participant Pilot
Always run a 5-person pilot before launching your full study. A pilot catches:
- Cards that participants consistently misunderstand
- Category names that are ambiguous
- Study instructions that cause confusion
- Technical issues with the study link
FreeCardSort's AI-generated responses are a useful zero-cost alternative to a pilot — they simulate realistic sorting behavior and populate your results page so you can check everything looks right before recruiting real participants.
When You Need More Than 30
Consider going above 30 participants when:
You're comparing user segments. Need reliable differences between US and EU users? 20 per group minimum. Between power users and new users? Same.
Cards are ambiguous. If your cards are abstract or jargon-heavy, you'll see higher variance in sorting, which requires more participants to identify reliable patterns.
Stakeholders need high confidence. Sometimes the research isn't just for you — it's to convince a VP or client. A 50-participant dataset is harder to dismiss than a 15-participant one, even if the patterns are identical.
You're running a tree test alongside. If you're using card sort results to build a tree test, higher participant counts give you more stable category names and hierarchies to test against.
Practical Calculation for Prolific
If you're recruiting via Prolific, here's a rough cost guide. At a ~$1.50 reward per response, Prolific's service fee brings the all-in cost to roughly $2 per participant:
| Participants | Approx. cost |
|---|---|
| 5 (pilot) | ~$10 |
| 15 | ~$30 |
| 20 | ~$40 |
| 30 | ~$60 |
| 50 | ~$100 |
Most commercial card sorts can be done for $60 or less, which makes Prolific accessible even for freelancers and small teams.
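The table is just participants multiplied by the all-in cost per response. Here's a throwaway calculator for budgeting other sample sizes (the reward and fee rate below are assumptions; check Prolific's current pricing before committing):

```python
def prolific_cost(participants, reward=1.50, fee_rate=0.33):
    """Rough study cost: per-participant reward plus Prolific's service fee.

    Both defaults are assumptions for illustration; Prolific's actual
    rewards and fee structure vary, so confirm current pricing.
    """
    return participants * reward * (1 + fee_rate)

for n in (5, 15, 20, 30, 50):
    print(f"{n:>3} participants ≈ ${prolific_cost(n):.0f}")
# e.g. 20 participants ≈ $40 and 30 ≈ $60, matching the table above.
```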
FreeCardSort has Prolific recruitment built in — you can set your exact participant target and launch directly from your study dashboard.
Summary
- 5 participants: Pilot only
- 15–20: Sufficient for most UX projects
- 30: High confidence, solid similarity matrix
- 50+: Multi-segment comparison or publishable research
When in doubt, start with 20. You can always run a second wave if the results are unclear.
Ready to start? Create your card sort study →