How To
12 min read

The product manager's guide to card sorting

How PMs can use card sorting to reduce support tickets, improve feature discoverability, and make better navigation decisions backed by user data.

By CardSort Team

You've seen it before: a feature ships, adoption is flat, and three weeks later support tickets start rolling in. "Where do I find the new export option?" "I didn't know we had bulk editing." The feature works fine. Users just can't find it.

This is an information architecture problem, and it's one of the most common — and most preventable — reasons features underperform. Card sorting is the fastest way to fix it, and you don't need a research background to run one.

This guide covers when card sorting is useful for PMs, how to run one in about an hour, how to read the results, and how to present findings in a product review without anyone's eyes glazing over.

Why PMs should care about card sorting

Card sorting is a method where participants organize labeled cards (your features, pages, or settings) into groups that make sense to them. The output is a data-backed map of how users expect your product to be structured.

Here's why that matters in PM terms:

Reduce support tickets. A significant percentage of support volume for mature products is navigation-related — users can't find what they need. Card sorting reveals where your product's structure diverges from users' mental models, so you can fix discoverability before it generates tickets.

Improve activation and time-to-value. If new users can't find core features during onboarding, they churn before experiencing value. Card sorting tells you which features are hardest to locate in your current structure and where users expect them to live instead.

Increase feature discoverability. That feature your team spent a quarter building? If it's buried three levels deep under a label nobody associates with it, adoption will be a fraction of what it should be. Card sorting surfaces these burial problems with data, not opinions.

De-risk navigation decisions. "Let's reorganize the sidebar" is a high-stakes decision that affects every user. Card sorting gives you evidence to back the new structure instead of relying on the HiPPO (highest-paid person's opinion) in the room.

Make roadmap conversations concrete. "Users find our navigation confusing" is vague. "73% of users grouped billing under Settings, but we have it under Account — that's a mismatch we can fix in one sprint" is actionable.

When to run a card sort

Not every product decision needs a card sort. Here are the specific scenarios where it earns its keep:

New feature launch

You're adding a capability and need to decide where it lives in the product. Should the new analytics dashboard go under "Reports," "Insights," or get its own top-level nav item? A card sort answers this in a day instead of a meeting loop.

App redesign or navigation overhaul

The highest-value moment for card sorting. Before you redesign, run an open card sort to see how users naturally group your features. Use those groupings as the starting point for the new navigation, then validate with a closed card sort or tree test.

Settings page cleanup

Settings pages are where features go to be forgotten. If your settings page has 30+ items in a flat list or a confusing hierarchy, a card sort with just those items reveals natural groupings that make settings findable.

Onboarding flow design

Which features do users associate with getting started? Card sorting surfaces this. If users consistently group "invite team members," "connect integrations," and "import data" together, that's your onboarding checklist — validated by users, not assumed by the team.

Help center or documentation reorg

If your help center search usage is high but article satisfaction is low, users probably can't browse to what they need. Card sort your article titles to find the right category structure.

The 1-hour card sort

You don't need a week-long research project. Here's a streamlined process that fits into a PM's schedule.

Step 1: Create your cards (15 minutes)

Open your product and list every feature, page, or menu item relevant to the question you're answering. If you're testing the full navigation, list everything in the top two levels. If you're testing a specific section like settings, list every item in that section.

Aim for 20 to 30 cards. Fewer than 15 doesn't produce enough signal — participants finish in two minutes and the groupings are obvious. More than 40 creates fatigue and messy data.

Each card should be a short label — the same words users see in the product. Don't add descriptions or explanations. If a label needs explanation to be understood, that's a finding in itself.

Where the cards come from:

  • Your product's current navigation and feature list
  • Features on the roadmap that need a home
  • Labels from competitive products (if three competitors call it "Dashboard" and you call it "Command Center," that's worth testing)

For that last source, a competitive analysis done beforehand saves time. If you've already audited competitor navigation, you have a ready-made list of labels to include.

Step 2: Choose your sort type (2 minutes)

  • Open card sort — participants create their own groups and name them. Use this when you're exploring: "How do users think about our features?" Best for early-stage navigation design.
  • Closed card sort — you predefine the group names and participants sort cards into them. Use this when you're validating: "Does our proposed navigation structure work?" Best for testing a specific design.

If you're short on time and already have a proposed structure, go closed. You'll get actionable results with fewer participants.

Step 3: Recruit participants (10 minutes of effort, then wait)

You need 10 to 20 participants. For a closed sort, 10 is workable. For an open sort, aim for 15 or more.

Where to find them fast:

  • Internal colleagues outside your team — customer success, sales, marketing. They aren't your end users, but they interact with your product daily from a different angle. Good for a quick signal.
  • Existing users — send a short email to active users. "We're improving our product navigation and would love your input. Takes 5 minutes." Response rates are typically 10 to 15%.
  • Customer advisory board — if your company has one, this is exactly what it's for.
  • Beta users or power users — people already invested in the product's direction.

Don't recruit only PMs and designers. You want the perspective of people who use the product to do their job, not people who think about product structure professionally.


Step 4: Launch and wait (5 minutes of effort, 1-3 days elapsed)

Set up your card sort in any card sorting tool. Share the link via email or Slack. Most participants complete a 25-card sort in 3 to 7 minutes, so responses come in quickly once people click the link.

Step 5: Read the results (30 minutes)

This is where PMs often feel out of their depth, but the core analysis is straightforward.

Reading results without a research background

Card sort tools generate several outputs. You don't need to understand all of them. Focus on two things:

Agreement rate

For closed card sorts, the agreement rate tells you what share of participants placed a card in its most popular category. Look for:

  • Cards with 80%+ agreement — these are settled. Users agree on where they belong. Don't overthink these.
  • Cards with 50-79% agreement — these are contested. Two or three categories are competing. Look at which categories participants chose and consider whether your labels are ambiguous.
  • Cards below 50% agreement — these are problem cards. Users have no consistent expectation of where this item belongs. The card label might be confusing, the item might not fit your category structure, or it might need to live in multiple places.
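The bucketing above is easy to reproduce yourself if your tool lets you export raw placements. Here's a minimal sketch, assuming a hypothetical export shaped as (participant, card, category) tuples — adapt the parsing to whatever format your tool actually produces:

```python
from collections import Counter, defaultdict

# Hypothetical export: one (participant_id, card, category) row per placement
placements = [
    ("p1", "Export", "Reports"), ("p2", "Export", "Reports"),
    ("p3", "Export", "Settings"), ("p4", "Export", "Reports"),
    ("p1", "Advanced Filters", "Reports"), ("p2", "Advanced Filters", "Settings"),
    ("p3", "Advanced Filters", "Account"), ("p4", "Advanced Filters", "Settings"),
]

def agreement_rates(placements):
    """For each card, return (most popular category, share of participants who chose it)."""
    by_card = defaultdict(Counter)
    for _, card, category in placements:
        by_card[card][category] += 1
    rates = {}
    for card, counts in by_card.items():
        top_category, top_count = counts.most_common(1)[0]
        rates[card] = (top_category, top_count / sum(counts.values()))
    return rates

for card, (category, rate) in agreement_rates(placements).items():
    # Same thresholds as above: 80%+ settled, 50-79% contested, below 50% problem
    bucket = "settled" if rate >= 0.8 else "contested" if rate >= 0.5 else "problem"
    print(f"{card}: {rate:.0%} agree on '{category}' ({bucket})")
```

With real data (15+ participants per card), the "problem" bucket this produces is your shortlist for the stakeholder conversation later in this guide.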

Problem cards

Problem cards are the most valuable output for a PM. They point directly at navigation decisions that will cause confusion. For each problem card, ask:

  • Is the label clear? If you call it "Orchestration" and users don't know what it means, renaming fixes the problem without restructuring anything.
  • Does the item span categories? Some features legitimately belong in multiple places. "Notifications" might be both a "Settings" item and a "Communication" item. The answer might be to surface it in both locations.
  • Is this a feature users don't understand yet? New or complex features often show low agreement because users haven't formed a mental model for them. This is an onboarding problem more than a navigation problem.

For open card sorts, focus on the similarity matrix — which cards participants consistently grouped together. Clusters of high similarity suggest natural navigation groups. If participants consistently group "billing," "invoices," and "payment methods" together but your product spreads them across "Settings" and "Account," that's a structural mismatch.
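A similarity matrix is just pairwise co-occurrence: the share of participants who put two cards in the same group. If your tool doesn't render one, this sketch computes it from raw open-sort results — the data shape (each participant's sort as a list of groups, each group a list of card labels) is an assumption, so map your actual export into it:

```python
from itertools import combinations
from collections import Counter

# Hypothetical open-sort export: one entry per participant,
# each entry a list of the groups that participant created
sorts = [
    [["billing", "invoices", "payment methods"], ["profile", "notifications"]],
    [["billing", "invoices"], ["payment methods", "profile"], ["notifications"]],
    [["billing", "invoices", "payment methods", "notifications"], ["profile"]],
]

def similarity(sorts):
    """Share of participants who placed each pair of cards in the same group."""
    pair_counts = Counter()
    for groups in sorts:
        for group in groups:
            # Sort so (a, b) and (b, a) count as the same pair
            for a, b in combinations(sorted(group), 2):
                pair_counts[(a, b)] += 1
    n = len(sorts)
    return {pair: count / n for pair, count in pair_counts.items()}

for (a, b), score in sorted(similarity(sorts).items(), key=lambda kv: -kv[1]):
    print(f"{a} + {b}: {score:.0%}")
```

Pairs scoring 70%+ are strong candidates to sit together in navigation; pairs your current structure separates despite a high score are the structural mismatches worth flagging.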

Making the case to stakeholders

You have results. Now you need to get a navigation change into the sprint. Here's how to present card sort data in product review or sprint planning without turning it into a research seminar.

Lead with the problem, not the method

Don't open with "We ran a card sort study with 18 participants using an open methodology." Open with: "23% of support tickets last month were users unable to find features. We have data on exactly which features are hardest to find and where users expect them."

Show the three strongest findings

Pick the three cards or groupings with the clearest signal:

  • "85% of users expected 'Export' to be under 'Reports,' not under 'Settings.' Moving it would take half a sprint."
  • "'Advanced Filters' had only 30% agreement — users genuinely don't know where to look for it. We need to surface it more prominently."
  • "Users consistently grouped these 5 settings together, but we split them across 3 different pages."

Connect to metrics stakeholders already track

Tie each finding to something already on the dashboard:

  • "Feature X adoption is 12%. Card sort data suggests 60% of users don't know it exists because of where it sits in the nav."
  • "Moving Export to Reports could reduce the 45 'where is export' support tickets we get monthly."
  • "Onboarding completion is 68%. Card sort data shows our onboarding steps map to 3 different navigation sections — users lose the thread."

Propose a specific change with a specific scope

"Based on this data, I'm proposing we move these four items in the next sprint. It's a nav-only change — no new features, no backend work. Estimated at 3 story points."

Stakeholders approve specific, scoped proposals. They stall on "we should rethink our entire navigation."

For a deeper playbook on presenting findings, see How to present UX research to stakeholders.

Common PM mistakes

Testing too many items

A 60-card sort takes 15 to 20 minutes. Participants rush, get fatigued, and produce noisy data. Scope your sort to the specific question you're answering. Testing the settings page? Include settings items. Don't throw in the entire product feature list.

Using internal jargon as labels

Your team calls it "DAU Analytics." Users call it "who's using the app." Card labels should use the language users encounter in the product or — better — the language they use themselves. If you ran user interviews before the card sort, pull terminology directly from transcripts.

Skipping competitive analysis

If you don't know how competitors structure similar features, you're testing your navigation in a vacuum. Users arrive with expectations shaped by every other product they use. A quick competitive audit — even 30 minutes reviewing three competitors' navigation — gives you context that makes your card sort results more interpretable. See why your card sort needs competitive analysis first.

Running one sort and calling it done

A single card sort gives you a snapshot. If you're doing a major navigation overhaul, run an open sort first (to discover natural groupings), design your new structure based on the results, then validate with a closed sort or tree test. Two rounds sound like twice the work, but the second round catches problems that would otherwise ship to production.

Ignoring the results you don't like

If 75% of users grouped your flagship feature with commodity features, that's uncomfortable but valuable. It means users don't perceive the feature as special based on its label or description. Resist the urge to dismiss results that conflict with the team's vision — that conflict is exactly the insight you're paying for.

Getting started

Card sorting fits into a PM workflow more naturally than most research methods. It's asynchronous (participants complete it on their own time), quantitative (results are data, not opinions), and fast (setup to results in under a week).

The complete guide to card sorting covers methodology in more detail. For a high-level overview of the method, see What Is Card Sorting? The Complete Guide. If you want to understand the broader context of how navigation testing fits into product decisions, start with Information Architecture in the UX glossary.

The biggest barrier isn't complexity — it's inertia. The first card sort you run will feel unfamiliar. The second will feel routine. By the third, you'll wonder how you ever shipped a navigation change without one.

Ready to try it yourself?

Start your card sorting study for free. Follow this guide step-by-step.
