
Card Sorting Research Report Template: Free Download and Guide

A complete card sorting research report template you can copy and fill in. Includes executive summary, methodology, findings, similarity matrix interpretation, recommendations, and next steps. Free to use for any project.

By CardSort Team

Card Sorting Research Report Template

A card sort study produces data. A card sorting report turns that data into decisions. This template gives you the structure to write a report that stakeholders can act on — with the right level of detail for the audience and a clear path from finding to recommendation.

Copy this template, replace the placeholder text, and you'll have a complete research report.


Report Template


[PROJECT NAME]: Card Sorting Research Report

Date: [Month Day, Year]
Researcher: [Your Name]
Stakeholders: [Names and roles of primary audience]


Executive Summary

[Write this section last, after completing the rest of the report.]

We conducted a [open / closed / hybrid] card sorting study with [N] participants to understand how [target audience description] mentally organize [content/feature area being tested]. The study ran from [start date] to [end date].

Key findings:

  1. [Most significant finding in one sentence]
  2. [Second finding in one sentence]
  3. [Third finding in one sentence]

Primary recommendation: [The single most important action to take based on findings, in 1-2 sentences.]


Research Questions

This study was designed to answer:

  1. [Primary research question — usually: "How do users organize [content type]?"]
  2. [Secondary question, if applicable]
  3. [Additional question, if applicable]

Out of scope: This study was not designed to [note what questions the study cannot answer — e.g., "evaluate specific design solutions" or "measure task completion rates"].


Methodology

Study type: Open card sorting / Closed card sorting / Hybrid card sorting
Number of cards: [N]
Study duration: [Average completion time] minutes per participant
Data collection period: [Start date] – [End date]
Analysis method: Similarity matrix, dendrogram, group naming analysis

Card selection: [Brief description of how cards were chosen — e.g., "Cards represent the 47 content items currently in the primary navigation, plus 8 items planned for the next release."]

Study instructions given to participants: [Copy the exact instructions participants saw, verbatim]


Participants

Total recruited: [N]
Total completed: [N]
Completion rate: [%]

Screener criteria:

  • [Criterion 1 — e.g., "Must be a current user of the product"]
  • [Criterion 2]
  • [Criterion 3]

Recruitment method: [How participants were recruited — e.g., in-app message, email list, Prolific, user community]

Participant demographics (if collected):

  • Age range: [Range]
  • [Other relevant demographic — e.g., frequency of product use, industry, role]

Excluded responses: [N] responses were excluded because [reason — e.g., "completed in under 3 minutes, indicating participants did not engage meaningfully with the task"].
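If your card sorting tool exports completion times, a time-based exclusion like the example reason above is easy to script. A minimal Python sketch, with invented response data and the illustrative 3-minute threshold (tune both to your study):

```python
# Hypothetical exported responses; "minutes" is time spent on the sort.
responses = [
    {"id": "p1", "minutes": 12.4},
    {"id": "p2", "minutes": 2.1},   # implausibly fast: likely a click-through
    {"id": "p3", "minutes": 8.7},
]

MIN_MINUTES = 3  # illustrative threshold; set per study

# Keep responses at or above the threshold, flag the rest for exclusion.
kept = [r for r in responses if r["minutes"] >= MIN_MINUTES]
excluded = [r for r in responses if r["minutes"] < MIN_MINUTES]

print(f"Excluded {len(excluded)} of {len(responses)} responses")
```

Report the threshold and the count of excluded responses in this section so readers can judge the filter for themselves.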


Findings

Finding 1: [Descriptive title]

What we found: [State the finding clearly — what do users do? What patterns emerged?]

Supporting data: [N]% of participants sorted [Card A] and [Card B] together (similarity score: [X]%). [Card C] was sorted with this group by [N]% of participants, suggesting a strong association.

Navigation implication: Users expect [finding explanation]. Currently, [Card A] lives in [Current Location] while [Card B] lives in [Different Location] — a structure that contradicts how users mentally organize this content.


Finding 2: [Descriptive title]

What we found: [Description]

Supporting data: The similarity matrix shows [Card X] and [Card Y] were sorted together by only [low %] of participants, indicating users do not see a strong connection between these items despite their current proximity in navigation.

Navigation implication: [Explanation of what this means for design]


Finding 3: [Descriptive title]

What we found: [Description]

Supporting data: [Quantitative evidence]

Navigation implication: [Design implication]


[Add additional findings as needed. Most card sort reports include 3–6 core findings.]


Outliers and Ambiguous Items

The following cards were sorted inconsistently — participants placed them in many different groups with no clear consensus:

Card | Agreement Rate | Notes
[Card Name] | [X]% | [Why this might be ambiguous — label clarity issue, multi-purpose feature, etc.]
[Card Name] | [X]% | [Notes]

Implication: Items with low agreement rates are navigation risks. Consider renaming, reconsidering their placement, or surfacing them in multiple locations.
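Agreement rate itself is simple to compute: for each card, take the share of participants who placed it in its most popular group. A sketch assuming closed-sort placement data, with invented card and category names:

```python
from collections import Counter

# Hypothetical closed-sort data: for each card, the category each
# participant placed it in (one entry per participant).
placements = {
    "Billing history": ["Account", "Billing", "Billing", "Reports", "Billing"],
    "Export data":     ["Reports", "Settings", "Account", "Reports", "Billing"],
}

def agreement_rate(card_placements):
    """Return the modal category and the % of participants who chose it."""
    counts = Counter(card_placements)
    top_category, top_count = counts.most_common(1)[0]
    return top_category, round(100 * top_count / len(card_placements))

for card, cats in placements.items():
    category, rate = agreement_rate(cats)
    print(f"{card}: {rate}% agreement (modal category: {category})")
```

Cards whose modal category attracts well under half of participants are the ones to list in the table above.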


Visual Analysis

Similarity Matrix

[Insert screenshot or export of similarity matrix from card sort tool. Export from CardSort dashboard as PNG or CSV.]

How to read this: Each cell shows the percentage of participants who sorted the two items in that row and column together. Cells shaded darker indicate higher co-occurrence. Clusters of high-similarity cells indicate natural groupings.
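If you want to recompute these percentages outside your tool, the co-occurrence calculation is straightforward. A minimal Python sketch with invented sorts, where each participant's sort is a list of groups and each group is a set of card names:

```python
from itertools import combinations

# Hypothetical raw data: three participants' sorts.
sorts = [
    [{"Invoices", "Receipts"}, {"Profile", "Password"}],
    [{"Invoices", "Receipts", "Profile"}, {"Password"}],
    [{"Invoices", "Receipts"}, {"Profile"}, {"Password"}],
]

# Every card that appears in any group.
cards = sorted(set().union(*[g for s in sorts for g in s]))

def similarity(card_a, card_b, sorts):
    """Percentage of participants who placed both cards in the same group."""
    together = sum(
        any(card_a in group and card_b in group for group in sort)
        for sort in sorts
    )
    return round(100 * together / len(sorts))

# One cell per unordered pair of cards.
matrix = {(a, b): similarity(a, b, sorts) for a, b in combinations(cards, 2)}

print(matrix[("Invoices", "Receipts")])  # 100: all three sorted them together
print(matrix[("Password", "Profile")])   # 33: only one participant did
```

These pairwise percentages are exactly what the shaded cells in the matrix screenshot encode.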

Key clusters identified:

  • [Cluster name]: [List of cards] — [X]%+ co-occurrence
  • [Cluster name]: [List of cards] — [X]%+ co-occurrence
  • [Cluster name]: [List of cards] — [X]%+ co-occurrence

Dendrogram

[Insert dendrogram image]

How to read this: The dendrogram shows hierarchical clustering. Items that merge early, close to the leaves of the tree, were sorted together most consistently across participants. Items that only merge near the root had weak associations.

Dendrogram interpretation: The strongest clusters (those that form at a [high %] similarity threshold) are [description]. These represent the clearest groupings in the data.
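The merge order a dendrogram visualizes can be reconstructed from the similarity matrix with agglomerative clustering. A pure-Python average-linkage sketch using invented co-occurrence percentages (in practice a library such as SciPy's scipy.cluster.hierarchy does this for you):

```python
# Hypothetical pairwise co-occurrence percentages between four cards.
cards = ["Invoices", "Receipts", "Profile", "Password"]
similarity = {
    ("Invoices", "Receipts"): 90,
    ("Invoices", "Profile"): 20,
    ("Invoices", "Password"): 10,
    ("Receipts", "Profile"): 25,
    ("Receipts", "Password"): 15,
    ("Profile", "Password"): 70,
}

def sim(a, b):
    """Look up similarity regardless of pair order."""
    if a == b:
        return 100
    return similarity.get((a, b), similarity.get((b, a), 0))

def average_linkage(cluster_a, cluster_b):
    """Mean similarity across all cross-cluster card pairs."""
    pairs = [(a, b) for a in cluster_a for b in cluster_b]
    return sum(sim(a, b) for a, b in pairs) / len(pairs)

# Agglomerative clustering: repeatedly merge the two most similar clusters.
# `merges` records (cluster_a, cluster_b, similarity), i.e. the tree's shape.
clusters = [frozenset([c]) for c in cards]
merges = []
while len(clusters) > 1:
    i, j = max(
        ((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
        key=lambda ij: average_linkage(clusters[ij[0]], clusters[ij[1]]),
    )
    score = average_linkage(clusters[i], clusters[j])
    merges.append((clusters[i], clusters[j], score))
    merged = clusters[i] | clusters[j]
    clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]

for a, b, score in merges:
    print(f"{set(a)} + {set(b)} merge at {score:.0f}% similarity")
```

The first merges in the output correspond to the clusters that form near the leaves of the dendrogram, i.e. the strongest groupings.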


Category Analysis (Open Card Sort Only)

In open card sorting, participants named their own categories. The most common category names were:

Category Name (or variant) | % of participants who created this category
[Category name] | [%]
[Category name] | [%]
[Category name] | [%]
[Category name] | [%]

Naming insight: [What do participant-created category names tell you about the language users use? Where does it match or differ from your current labels?]
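Tallying the category table by hand gets tedious once participants use many variant spellings. A crude normalization pass can help; this sketch assumes a hand-built variant map, and all names in it are invented:

```python
from collections import defaultdict

# Hypothetical open-sort category names, one list per participant.
participant_categories = [
    ["Billing", "My Account", "Reports"],
    ["billing & payments", "Account settings", "reports"],
    ["Payments", "account"],
]

# Crude normalization: lowercase, then map known variants to one canonical
# label. Build your own map by scanning the raw names first.
VARIANTS = {
    "billing": "Billing", "billing & payments": "Billing", "payments": "Billing",
    "my account": "Account", "account settings": "Account", "account": "Account",
    "reports": "Reports", "reporting": "Reports",
}

counts = defaultdict(int)
for categories in participant_categories:
    canonical = {VARIANTS.get(name.lower(), name) for name in categories}
    for label in canonical:  # count each participant at most once per label
        counts[label] += 1

for label, n in sorted(counts.items(), key=lambda kv: -kv[1]):
    print(f"{label}: {round(100 * n / len(participant_categories))}% of participants")
```

Keep the variant map in the appendix; which raw names you merged is itself an analysis decision worth documenting.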


Proposed Information Architecture

Based on the card sort findings, we recommend the following navigation structure:

[Primary Category 1] (was: [Current label if different])

  • [Item 1]
  • [Item 2]
  • [Item 3]

[Primary Category 2] (new category, not currently in navigation)

  • [Item 1]
  • [Item 2]

[Primary Category 3]

  • [Item 1]
  • [Item 2]
  • [Item 3]

[Items not yet assigned — needs further research or consideration:]

  • [Item X] — sorted inconsistently; consider dual placement or label revision
  • [Item Y] — sorted with [Category A] in card sort but may also need to be discoverable from [Category B]

Recommendations

Recommendation 1: [Specific, actionable recommendation]
Rationale: [Which finding supports this recommendation and why]
Priority: High / Medium / Low
Effort: [Estimated implementation complexity]

Recommendation 2: [Specific, actionable recommendation]
Rationale: [Finding]
Priority: High / Medium / Low
Effort: [Effort]

Recommendation 3: [Specific, actionable recommendation]
Rationale: [Finding]
Priority: High / Medium / Low
Effort: [Effort]


Limitations

  • Sample size: [N] participants. Results should be treated as [directional data / statistically reliable for the stated findings] and validated with [tree testing / usability testing / additional research].
  • Sample representativeness: Participants were recruited via [method]. [Note any bias this might introduce — e.g., "In-app recruitment means participants are active users; lapsed or potential users may sort differently."]
  • Card selection: The [N] cards in this study represent [X]% of all navigable content. Cards not included may sort differently than extrapolation from these results predicts.
  • Context effects: Card sorting removes real-world context — users sort cards on their own, without the visual design, surrounding content, or task goals they'd have when actually using the product.

Next Steps

Action | Owner | Deadline
Review findings with design and product teams | [Name] | [Date]
Develop proposed navigation structure based on recommendations | [Name] | [Date]
Run tree testing to validate proposed structure | [Name] | [Date]
Present revised IA for stakeholder approval | [Name] | [Date]
Design and develop navigation changes | [Name] | [Date]

Appendix

A. Full Card List [List all cards included in the study]

B. Raw Similarity Matrix Data [Link to or attach full CSV export]

C. Study Link [Link to the study in CardSort for data verification]

D. Screener Questions Used [If a screener was used, include full screener text]


End of template.


Tips for Filling Out This Template

Write findings before recommendations: Base recommendations explicitly on findings. If you can't point to a specific finding that supports a recommendation, you don't have the evidence for it.

Include percentage numbers wherever possible: "71% of participants grouped A with B" is more defensible than "most participants grouped A with B."

Name outliers explicitly: Don't hide cards with low agreement. They're not evidence of a bad study — they're evidence of genuine navigation ambiguity that your design needs to address.

Keep the executive summary to one page: Leadership reads the executive summary and looks at the visual evidence. The rest of the report exists for readers who need the detail, not because everyone will read it.

Send the dendrogram with annotations: Raw dendrograms are hard to read. Add labels and arrows to the image before including it. "This cluster is [Category Name]" makes your analysis legible to non-researchers.


Created a card sorting study you need to report on? Run it for free at freecardsort.com →
