You ran the study. You analyzed the data. You have clear findings that should change how the product works. And then you put it all in a 47-slide deck that three people skim during a meeting while checking Slack.
This is the most common failure mode in UX research: not bad research, but bad communication. The insight dies somewhere between the similarity matrix and slide 31.
Here is how to present UX research so stakeholders actually read it, believe it, and act on it.
The real problem is not your research
Most researchers default to documenting their process: here is what we did, here is how we recruited, here is every data point we collected, and oh by the way, here is what we think it means. This is backwards. Stakeholders do not care about your methodology. They care about what to do next.
The typical UX research report or stakeholder presentation fails for three reasons:
- It buries the recommendation. The most important sentence is on page 12.
- It requires a meeting. If someone needs you in the room to explain the findings, the document failed.
- It tries to be comprehensive instead of useful. Completeness is the enemy of action.
A VP of Product has maybe 90 seconds to decide whether your research matters. A PM has five minutes. A developer has even less. Your UX research report needs to work at every one of those time scales.
Principle 1: Lead with the recommendation
Start with what should change. Not what you studied. Not how many participants you recruited. Not your research questions. The recommendation.
Instead of this:
"We conducted a hybrid card sort with 30 participants to evaluate the proposed navigation structure for the settings area..."
Write this:
"Merge Account Settings and Profile into a single section. 83% of participants grouped these together, and the current split is causing support tickets."
The first version is researcher-centered. The second is decision-centered. Stakeholders can read the second sentence and immediately know whether this matters to them and what they need to do about it.
Your methodology is not irrelevant — it just belongs later. Think of it as the evidence trail, not the opening argument.
Principle 2: One page, three numbers, one chart
For any study — whether it is a card sort, a usability test, an interview synthesis, or a survey — you should be able to distill the findings into:
- One top-line metric that captures the overall finding ("83% agreement on grouping Account and Profile together")
- One supporting metric that adds nuance ("Current split generates 12% of settings-related support tickets")
- One context metric that establishes credibility ("30 participants, matching our user demographics")
- One chart that makes the pattern visually obvious (a similarity matrix, a task-completion bar chart, a dendrogram — whatever tells the story fastest)
This is your executive summary. It fits on one screen. A stakeholder can absorb it in under two minutes. Everything else is supporting detail.
This does not mean your research is shallow. It means your communication is disciplined. The depth is there for anyone who wants it — you are just not forcing everyone to wade through it.
Principle 3: Make it shareable, not presentable
Here is a test: can someone who missed the meeting understand your findings by clicking a link? If the answer is no, your research has a distribution problem.
Slide decks require context. PDFs get lost in email threads. Confluence pages rot. The best format for sharing UX research findings is a live link that anyone can open, scan in two minutes, and forward to their team.
This is why the most effective research teams are moving away from decks and toward shareable web-based reports. A link in Slack does more work than a meeting on the calendar.
When you share card sort results or any other UX research, optimize for the person who will see it third-hand. They were not in the meeting. They did not read the Slack thread. They just got a link from their manager with "FYI — look at this." Your report needs to work for that person.
What shareable means in practice:
- No login required to view
- Loads in a browser, not a desktop app
- Key findings visible without scrolling
- Charts are interactive or at least self-explanatory
- The URL is permanent and always shows the latest data
Principle 4: Include the evidence trail
Leading with recommendations does not mean hiding your work. It means layering your communication so people can go as deep as they want.
The best UX research reports have a clear hierarchy:
- Recommendation — what to do (10 seconds to read)
- Key findings — why we think so (2 minutes to read)
- Supporting data — the evidence (10 minutes to review)
- Methodology — how we got here (for anyone who wants to audit)
Each layer links to the next. A stakeholder who trusts you stops at layer 1. A PM reads through layer 2. A skeptical engineer digs into layer 3. A fellow researcher checks layer 4.
The existence of layers 3 and 4 builds credibility even if nobody reads them. Knowing the evidence trail is there makes people more likely to trust the summary. It is like footnotes in a well-argued essay — most readers skip them, but their presence signals rigor.
Structuring a stakeholder research report
Here is a practical structure that works for any type of UX research, from card sorting to interview synthesis:
Executive summary (one screen)
- One sentence: what we studied and why
- The recommendation in bold
- Three key metrics
- One chart
This is the part 80% of your audience will read. Treat it accordingly.
Key findings (3-5 maximum)
Each finding follows this pattern:
- Finding statement — a clear, jargon-free sentence ("Users expect billing and subscription settings in the same place")
- Evidence — the data that supports it ("27 of 30 participants grouped these cards together; similarity score of 0.90")
- Implication — what this means for the product ("The current separation into two menu items creates unnecessary navigation")
Three to five findings is the right number. Fewer than three feels thin. More than five and stakeholders lose the thread. If you have eight findings, you either need to prioritize or you actually ran two studies.
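If you quote a similarity score, it helps to know where the number comes from. In card sorting, the pairwise similarity between two cards is simply the fraction of participants who placed them in the same group, so 27 of 30 gives 0.90. A minimal sketch of that calculation (the card names and data are illustrative, not from a real study):

```python
from itertools import combinations

def similarity_matrix(sorts):
    """Pairwise card similarity: the fraction of participants who
    placed each pair of cards in the same group."""
    cards = sorted({card for sort in sorts for group in sort for card in group})
    sim = {}
    for a, b in combinations(cards, 2):
        # Count the sorts in which cards a and b share a group.
        together = sum(
            any(a in group and b in group for group in sort)
            for sort in sorts
        )
        sim[(a, b)] = together / len(sorts)
    return sim

# Each participant's sort is a list of groups (lists of card names).
sorts = [
    [["Billing", "Subscription"], ["Profile"]],
    [["Billing", "Subscription", "Profile"]],
    [["Billing"], ["Subscription", "Profile"]],
]
sim = similarity_matrix(sorts)
# Billing and Subscription appear together in 2 of 3 sorts -> 0.67.
```

The point for reporting is that the single number ("90% of participants grouped these together") is what goes in the finding; the full matrix belongs in the supporting-data layer.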
Recommendations
Tie each recommendation directly to a finding. Be specific about what should change:
- Weak: "Consider restructuring the navigation"
- Strong: "Merge the Billing and Subscription sections under a single 'Plan & Billing' label, matching the mental model shown in the card sort data"
If you can, include effort estimates or flag dependencies. Stakeholders appreciate knowing whether a recommendation is a quick win or a quarter-long project.
Methodology appendix
Keep this brief. Study type, participant count, recruitment method, dates, any limitations. One paragraph is usually enough. Link to the raw data for anyone who wants to dig in.
For product managers new to card sorting, this section also serves as a gentle primer on how the method works without derailing the main findings.
Why live reports beat PDFs and slides
Static documents have a fundamental problem: they are snapshots. The moment you export a PDF or finalize a slide deck, the report starts aging. If new responses come in, if you refine your analysis, if someone asks a follow-up question that reveals a gap — you are creating version 2, and now nobody knows which file is current.
A live web-based report solves this:
- Always current. Update the analysis and every stakeholder sees the latest version at the same URL.
- Interactive. Stakeholders can explore the dendrogram, filter the similarity matrix, or drill into specific participant groups.
- Accessible. No special software. No file downloads. No "can you re-share that deck?"
- Trackable. You can see who viewed the report and when, which tells you whether your findings are actually reaching the right people.
This is the approach we built into CardSort. When you run a study — whether it is a card sort, tree test, survey, or interview — you can generate a findings report automatically and share it as a single link. The report includes the executive summary, key findings, visualizations, and the full evidence trail, all in a format stakeholders can scan without scheduling a meeting.
See a sample stakeholder report
Common mistakes when presenting UX research
Presenting raw data instead of findings. A similarity matrix is not a finding. "83% of users grouped these items together" is a finding. The matrix is the evidence.
Using research jargon. Your stakeholders probably do not know what a dendrogram is. Say "this tree diagram shows how participants naturally grouped the items" instead.
Giving equal weight to every finding. Prioritize ruthlessly. Your most important finding should get three times the space of your least important one.
Waiting too long to share. Research that arrives a week after the decision was made is research that did not matter. Share preliminary findings fast, then follow up with the full report.
Making it a monologue. The best research presentations leave room for stakeholders to ask questions and connect findings to problems they are already thinking about. If you are presenting live, keep the formal portion to 15 minutes and leave 15 for discussion.
FAQ
How long should a UX research report be?
The executive summary should fit on one screen — roughly 200 words plus one chart. The full report, including findings, recommendations, and methodology, should be under 1,500 words. If you are writing more than that, you are including detail that belongs in an appendix or a separate document. Remember: the goal is action, not documentation.
Should I present research findings in a meeting or send a report?
Both, but in the right order. Send the report first. Give stakeholders time to read it asynchronously — even 30 minutes helps. Then use the meeting for discussion, not presentation. This way the meeting focuses on "what should we do about this" rather than "let me walk you through 40 slides." If you must present, keep it to 15 minutes of content and use the rest for Q&A.
How do I share card sort results with people who do not know what card sorting is?
Skip the method explanation entirely in the main report. Lead with what participants did ("we asked 30 people to organize these features into groups that made sense to them") and what they told us ("most people expect billing and account settings in the same place"). Link to a glossary page or primer for anyone who wants the methodological background. The results matter more than the technique, and user research methods are only as valuable as the decisions they inform.
Further reading
- What Is Card Sorting? The Complete Guide
- How to analyze card sort results — goes deeper on turning raw card sort data into the kind of findings stakeholders need
- The product manager's guide to card sorting — useful framing if your primary stakeholders are PMs
- User Research (UX Glossary) — a plain-language primer to share with stakeholders unfamiliar with research methods
- UX Research Methods (UX Glossary) — overview of methods and when to use each one