
Card Sorting for Developers: Validate IA Before You Build

Developers use card sorting to validate information architecture decisions before committing to code. Learn when to run a card sort, how to interpret results as a developer, and how to pilot-test quickly before recruiting your full participant group.

By CardSort Team


Card sorting is one of the few UX research methods where the output directly maps to a technical decision: how to structure navigation, routes, documentation sections, or feature categories. If you're building a developer portal, redesigning settings, or reorganizing API documentation, a card sort can save you from shipping a navigation that developers will hate.

This guide is for developers who want to use card sorting as a practical design validation tool — not as a lengthy research exercise.


When Developers Should Run a Card Sort

Card sorting is most useful when you're making a structural decision that's expensive to reverse:

Developer portals and documentation sites: You're building or reorganizing a docs site. Should Authentication be under "Getting Started" or under "Security"? Should Webhooks be under "API Reference" or "Integrations"? Card sorting answers these questions with data instead of opinions.

Settings and preferences architecture: SaaS products grow settings over time until nobody can find anything. Before adding a new settings section — or refactoring an existing one — card sorting shows how users mentally organize configuration options.

CLI command grouping: If you're building a CLI tool with more than 20 commands, a card sort on command names reveals how users think they should be grouped into subcommands or command groups.

Feature navigation in complex products: When a product has 50+ features, developers disagree internally on where things belong. Card sorting gets external validation before internal opinions get baked into routes.


What to Put on the Cards

This is where most developers go wrong. The goal is to represent the content or features from the user's perspective, not your internal naming conventions.

For documentation / help centers:

  • Write card labels as topics users search for, not as section headers you'd use internally
  • Use natural language: "Set up two-factor authentication" rather than "2FA Configuration"
  • Include concepts, not just features: "How billing works", "Understanding rate limits", "Migrating from v1"

For product navigation:

  • Use the user-facing feature name, not the internal code name
  • Include both high-level categories and specific features as separate cards
  • Include things users might expect to find but don't exist yet — their grouping reveals expectations

For API documentation:

  • Use endpoint groups, not individual endpoints (you'll have hundreds of cards otherwise)
  • Include conceptual content alongside reference material: "Authentication overview", "Error codes", "Pagination"
  • Separate "reference" items from "guide" items to see if users make the same distinction you do

Card count: 20–50 is ideal. Under 20 gives you too little data to see patterns. Over 50 takes too long and fatigues participants.


Open vs. Closed: Which Format to Use

Open card sorting (participants create their own categories): Use when you're designing from scratch or your current structure is broken. Participants reveal their mental model — the groupings they create are the structure they expect.

Closed card sorting (you provide the categories): Use when you've already designed a structure and want to validate it. Drop your navigation sections in as category labels and see which items participants put where. High disagreement on a category means the label is confusing or the scope is wrong.

Hybrid card sorting (predefined categories + option to create new ones): Use when you have a partial structure you're confident in but want to see what's missing. Participants use your categories but can create new ones for items that don't fit. This is the most useful format for incremental refactoring.


Running the Study: The Developer's Fast Path

Step 1: Run a small pilot test

Before recruiting real participants, run a pilot with 3–5 people to catch obvious structural problems. This gives you quick feedback before you invest in a full study of 15 or more participants.

What a pilot test reveals:

  • Cards with ambiguous labels that could go in multiple places
  • Cards that are clearly out of scope for your study
  • Category names that don't match how the content is described on the cards

Fix these before recruiting your full group. Your real data will be cleaner.

Step 2: Recruit developers, not general users

For developer tooling, your participants need to be developers. General users will give you noise. Where to find them:

  • Your product's developer Slack or Discord community
  • Dev-focused subreddits (check each community's rules on study and survey links before posting)
  • GitHub Discussions on your open source repo
  • Twitter/X, where developers are active and often willing to help
  • Colleagues at other companies — responses from 15 external developers are more valuable than 15 internal ones

Step 3: Aim for 15–20 responses

For structural decisions, 15–20 participants is enough to see clear patterns. If you have multiple distinct user segments (e.g., frontend developers vs. backend developers vs. DevOps), aim for 15 per segment — they may have different mental models for the same content.


Interpreting the Results

Similarity matrix

The similarity matrix shows how often each pair of cards was grouped together. High co-occurrence (dark cells) means participants consistently see those items as related. Your navigation structure should reflect these clusters.

What to look for:

  • Items that cluster together strongly → keep them in the same section
  • Items that split across clusters → they might belong in multiple places, or your label is ambiguous
  • Items that consistently end up alone → they may not fit any category, or they need renaming
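As a concrete sketch of what the matrix contains, here is one way to compute pairwise co-occurrence from raw sort data. The input format — a list of groups per participant, each group a set of card labels — is an assumption; adapt it to whatever your tool exports:

```python
from itertools import combinations

def similarity_matrix(sorts, cards):
    """Percent of participants who placed each pair of cards in the same group.

    `sorts` is a list of participant results; each result is a list of
    groups, each group a set of card labels (hypothetical export format).
    """
    counts = {pair: 0 for pair in combinations(sorted(cards), 2)}
    for groups in sorts:
        for group in groups:
            for pair in combinations(sorted(group), 2):
                if pair in counts:
                    counts[pair] += 1
    n = len(sorts)
    return {pair: 100 * c / n for pair, c in counts.items()}

# Illustrative data: two participants sorting four cards.
sorts = [
    [{"API keys", "OAuth"}, {"Webhooks", "Rate limits"}],
    [{"API keys", "OAuth", "Rate limits"}, {"Webhooks"}],
]
matrix = similarity_matrix(sorts, ["API keys", "OAuth", "Webhooks", "Rate limits"])
print(matrix[("API keys", "OAuth")])  # 100.0 — both participants grouped them
```

Pairs near 100% are your strong clusters; pairs near 0% should not share a section.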

Dendrograms

The dendrogram visualizes the similarity matrix as a tree structure. Items that branch off together near the bottom are the strongest clusters. Use this to identify your top-level navigation sections.
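If your tool exports the raw matrix rather than a rendered dendrogram, you can build the tree yourself. A minimal sketch using SciPy's hierarchical clustering; the similarity percentages here are made up for illustration:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical similarity matrix (% co-occurrence) for four cards.
cards = ["API keys", "OAuth", "Webhooks", "Rate limits"]
similarity = np.array([
    [100,  94,  10,  30],
    [ 94, 100,  15,  25],
    [ 10,  15, 100,  70],
    [ 30,  25,  70, 100],
])

# Hierarchical clustering works on distances, so invert the similarity.
distance = 100 - similarity
# SciPy expects a condensed distance matrix (flattened upper triangle).
condensed = distance[np.triu_indices(len(cards), k=1)]
tree = linkage(condensed, method="average")

# Cut the tree at a distance threshold to get candidate nav sections.
labels = fcluster(tree, t=50, criterion="distance")
for card, cluster_id in zip(cards, labels):
    print(cluster_id, card)
```

Passing `tree` to `scipy.cluster.hierarchy.dendrogram` plots the same structure; the threshold `t=50` is a knob you tune, not a fixed rule.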

Category name analysis (open sort)

In open card sorts, participants name their own groups. Read through all the category names — they are the words users expect to see in your navigation. If 12 out of 20 participants named a group "Getting Started" and you called it "Quickstart," rename your section.
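Tallying those names is a few lines once you normalize case and whitespace. A small sketch, assuming you have each participant's category names as a list (the names below are illustrative):

```python
from collections import Counter

# Hypothetical open-sort results: one list of category names per participant.
participant_categories = [
    ["Getting Started", "API Reference", "Security"],
    ["Getting started", "Reference", "Security & Auth"],
    ["Quickstart", "API Reference", "Security"],
]

# Normalize so "Getting Started" and "Getting started" count together.
counts = Counter(
    name.strip().lower()
    for names in participant_categories
    for name in names
)
for name, n in counts.most_common(3):
    print(f"{name}: {n}")
```

The top names are the vocabulary your navigation should use; manual merging of near-synonyms ("Reference" vs. "API Reference") is still up to you.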


Common Patterns and What They Mean

"Authentication" splits between security and setup categories: Users are telling you that auth has two distinct phases: initial setup (which belongs near "Getting Started") and ongoing management (which belongs near "Security" or "Account Settings"). Consider separating them.

API reference endpoints grouped with conceptual docs: Participants don't distinguish between "guides" and "reference" the way your docs are structured. This is the most common finding in developer doc card sorts. Consider flattening the structure or making the distinction more explicit in your navigation labels.

Configuration items scattered across multiple categories: Your settings architecture has no clear organizing principle. Participants are grouping by mental model (workflow stage, user role, frequency of access) rather than the taxonomy you've built. Investigate what logic they're using.

New category created for items you thought fit elsewhere: When participants consistently create a new category you didn't provide, they're telling you there's a gap in your structure. The new category name tells you exactly what that gap is.


Translating Findings into Code

Card sorting results inform structural decisions, but they don't make them. Here's how to go from data to code:

  1. Use the dendrogram to define your top-level navigation sections. The main branches of the tree = your primary nav.

  2. Use co-occurrence scores to decide what lives where. Items with >70% co-occurrence should be in the same section. Items with <30% co-occurrence that you were planning to group together need reconsideration.

  3. Document dissenting opinions. Card sorting will show consensus, but also show where participants disagreed. When 30% put something in Section A and 70% put it in Section B, note this — it means some users will be confused by your final choice, and you should consider the label for Section B to reduce that confusion.

  4. Commit to a structure before writing routes. Card sort data is most valuable when you treat it as a pre-commit review. Once you've written navigation components, redirect logic, and deep links, restructuring is expensive. Run the card sort before the routes are defined.
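The co-occurrence thresholds in step 2 can be applied mechanically as a starting point. A sketch that unions cards whose co-occurrence clears the cutoff, using a simple union-find; the input format and card names are hypothetical:

```python
def propose_sections(cards, cooccurrence, threshold=70):
    """Greedy grouping: union any two cards whose pairwise co-occurrence
    meets the threshold (the >70% rule of thumb).

    `cooccurrence` maps (card_a, card_b) tuples to a percentage —
    a hypothetical format matching a similarity-matrix export.
    """
    parent = {c: c for c in cards}

    def find(c):
        # Walk to the root, compressing the path as we go.
        while parent[c] != c:
            parent[c] = parent[parent[c]]
            c = parent[c]
        return c

    for (a, b), pct in cooccurrence.items():
        if pct >= threshold:
            parent[find(a)] = find(b)

    sections = {}
    for c in cards:
        sections.setdefault(find(c), []).append(c)
    return list(sections.values())

cards = ["Sandbox environment", "Test API keys", "Webhooks", "HMAC verification"]
cooccurrence = {
    ("Sandbox environment", "Test API keys"): 94,
    ("HMAC verification", "Webhooks"): 72,
    ("Sandbox environment", "Webhooks"): 12,
}
print(propose_sections(cards, cooccurrence))
# [['Sandbox environment', 'Test API keys'], ['Webhooks', 'HMAC verification']]
```

Treat the output as a proposal to review against the dendrogram and the dissent data, not as the final structure.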


Example: Organizing a Developer Portal

You're building a developer portal for a payments API. Your draft navigation has:

  • Getting Started
  • API Reference
  • SDKs
  • Webhooks
  • Testing
  • Security
  • Changelog

You create a card sort with 35 cards representing everything in those sections. After 18 responses:

  • "Sandbox environment" and "Test API keys" are 94% co-occurring → keep in Testing
  • "HMAC signature verification" is splitting 50/50 between Security and Webhooks → it belongs in Webhooks (that's where developers need it in context), with a cross-link from Security
  • Participants created a "Guides" category containing all your how-to content across multiple sections → consider adding a top-level "Guides" nav item that surfaces tutorial content in one place
  • "Changelog" ends up alone in 60% of responses → participants don't know where it belongs; move it to footer navigation instead of primary nav

These findings take 20 minutes to implement in your information architecture. They save weeks of post-launch navigation complaints.
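Pulled together, the revised IA for this hypothetical portal might be recorded as a simple data structure (a sketch reflecting the findings above, not a prescription):

```python
# Hypothetical revised IA for the payments-API portal, after the card sort.
navigation = {
    "primary": [
        "Getting Started",
        "Guides",        # new top-level section participants created
        "API Reference",
        "SDKs",
        "Webhooks",      # now owns "HMAC signature verification"
        "Testing",       # keeps "Sandbox environment" and "Test API keys"
        "Security",      # cross-links to the HMAC page under Webhooks
    ],
    "footer": [
        "Changelog",     # moved out of primary nav (alone in 60% of sorts)
    ],
}
print(navigation["primary"])
```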


Setting Up a Developer Card Sort

  1. Go to freecardsort.com
  2. Create a study (no account required)
  3. Add your cards — one per item in your planned structure
  4. Choose open (for from-scratch design) or closed (for validation)
  5. Run a pilot test with 3–5 participants to validate your card labels
  6. Share the link with developers in your community
  7. Analyze the similarity matrix and dendrogram when you have 15+ responses

The study takes 10–15 minutes for participants. Results are available in real time.


Planning a developer portal or docs restructure? Run a card sort before writing routes. Create your study free →

Ready to Try It Yourself?

Start your card sorting study for free. Follow this guide step-by-step.
