Most UX research advice is written for people who have a dedicated research team, a Qualtrics license, and a five-figure participant recruitment budget. That describes maybe 2% of the people who actually need to do research.
The rest of us are the designer who also handles research, the PM who was told to "validate this before we build it," or the founder who knows the product navigation is broken but can't articulate why. We have a week, maybe two. Our budget is somewhere between zero and the cost of a nice lunch.
This guide is for that reality. It walks through a complete four-phase research project — competitive analysis, discovery, validation, and reporting — with practical methods that cost nothing or close to it. The advice works regardless of which tools you choose, though we'll point out where a connected platform saves time over stitching together free tools.
The minimum viable research stack
Enterprise research platforms like Qualtrics, dscout, or UserZoom bundle dozens of capabilities you'll never touch and charge accordingly. For most projects, you need exactly four things:
- A way to capture competitive intelligence — what your competitors do, how they structure their products, what language they use.
- A way to talk to users — interviews or surveys, recorded and transcribed.
- A way to test your assumptions — card sorts or tree tests that produce quantifiable data.
- A way to communicate what you found — a shareable report that non-researchers can act on.
That's it. You don't need eye tracking. You don't need a 50-segment survey panel. You don't need a heatmap tool. Those are useful for specific questions at specific scales, but they aren't prerequisites for making informed decisions.
The question is whether you assemble these four capabilities from separate tools — Google Docs, Miro, a standalone card sort app, Google Slides — or use a platform that connects them. We'll cover both approaches.
Phase 1: Competitive analysis on zero budget
Competitive analysis is the most skipped phase in budget research, which is unfortunate because it's also the cheapest. You need a browser, a spreadsheet, and about two hours.
What to capture
Pick three to five direct competitors. For each one, document:
- Top-level navigation labels — the exact words they use for their main menu items. Screenshot these.
- Feature organization — how they group capabilities. Does "reporting" live under "analytics" or is it its own section?
- Terminology — do they say "workspace" or "project"? "Team" or "organization"? These word choices reflect user expectations you'll inherit or fight against.
- Information depth — how many clicks to reach core features? Where do they bury settings?
The structured approach
Create a spreadsheet with competitors as columns and navigation elements as rows. This comparison matrix reveals patterns: if four out of five competitors put billing under "Settings > Account," that's a user expectation you should think twice about breaking.
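The pattern-spotting step is just a vote count per row of the matrix. A minimal sketch, using made-up competitor names and placements (everything here is hypothetical illustration, not real audit data):

```python
from collections import Counter

# Hypothetical audit data: one row of the comparison matrix,
# recording where each competitor places "billing".
billing_location = {
    "CompetitorA": "Settings > Account",
    "CompetitorB": "Settings > Account",
    "CompetitorC": "Settings > Account",
    "CompetitorD": "Settings > Account",
    "CompetitorE": "Top-level nav",
}

# Tally placements and surface the consensus pattern.
counts = Counter(billing_location.values())
placement, votes = counts.most_common(1)[0]
consensus = votes / len(billing_location)

print(f"{placement}: {votes}/{len(billing_location)} competitors ({consensus:.0%})")
```

Run that for each navigation element and any row with 80%+ consensus is a convention you should think twice about breaking.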
You can do this analysis in Google Sheets for free. The limitation is that your competitive data lives in a spreadsheet, disconnected from everything that follows. When you get to validation and want to test whether users agree with your competitor's navigation patterns, you'll be manually copying labels into your card sort tool.
In CardSort's Competitive Analysis phase, you capture the same data but those navigation labels are available as card sort cards later — one click to pull them into a study. That connection matters more than it sounds like it should, because the friction of manual transfer is where competitive insights get lost.
Time investment
Two to three hours for a thorough competitive audit of five competitors. This is the highest-ROI research activity you can do for free. Skip it and you'll design navigation in a vacuum, which usually means you'll reinvent what competitors already validated or repeat their mistakes without knowing it.
Phase 2: Discovery on zero budget
Discovery means talking to users to understand their problems, mental models, and language. It sounds expensive. It doesn't have to be.
The five-interview minimum
Jakob Nielsen's research showed that five usability testing participants uncover roughly 85% of usability problems. The same principle applies to discovery interviews: five conversations surface the dominant patterns. You won't capture every edge case, but you'll identify the themes that matter most.
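The 85% figure comes from Nielsen's discovery model: the share of problems found by n participants is 1 - (1 - L)^n, where L is the probability that a single participant uncovers any given problem (Nielsen estimated L at about 0.31 on average). A quick check of the arithmetic:

```python
# Nielsen's discovery model: share of problems found by n participants
# is 1 - (1 - L)^n. L ~= 0.31 is Nielsen's average per-participant rate.
L = 0.31

def problems_found(n: int) -> float:
    return 1 - (1 - L) ** n

for n in (1, 3, 5, 10):
    print(f"{n} participants: {problems_found(n):.0%} of problems")
```

At n = 5 the model gives roughly 84%, which is where the "five users find ~85% of problems" rule of thumb comes from; the curve flattens quickly after that, which is why additional participants buy less and less.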
Where to find five participants for free:
- Existing users — email your most active users. Most are happy to talk for 20 minutes, especially if you frame it as shaping the product's direction.
- Social communities — post in relevant Slack groups, Discord servers, or subreddits. Offer no compensation; you'll get people who genuinely care about the problem space.
- Hallway testing — for internal tools or B2B products, recruit colleagues from other departments. They aren't your target user, but they provide a useful outsider perspective on navigation and terminology.
- Customer support queue — your support team talks to confused users daily. Sit in on five support calls or review five conversation transcripts. This isn't a formal interview, but it reveals real friction points.
Running cheap interviews
You don't need a dedicated research tool for five interviews. Use Zoom or Google Meet (both have free tiers with recording). The key is structure:
- Prepare five to seven open-ended questions. "Walk me through how you'd find X" beats "Do you like how X works?"
- Record every session. Rely on auto-transcription — Zoom's built-in transcription is good enough for theme extraction.
- After all five interviews, spend one hour identifying themes. What problems came up in three or more conversations? What language did participants use consistently?
Those themes become the foundation for your validation phase. If three out of five users described their workflow as "tracking client feedback" rather than "managing feature requests," that tells you something about navigation labels.
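The theme tally itself is simple enough to do on paper, but it helps to see the shape of it. A sketch with hypothetical theme tags (one set per interview; the tags and the 3-of-5 threshold match the guidance above):

```python
from collections import Counter

# Hypothetical theme tags extracted from five interview transcripts.
interviews = [
    {"tracking client feedback", "settings hard to find"},
    {"tracking client feedback", "too many notifications"},
    {"settings hard to find", "tracking client feedback"},
    {"settings hard to find"},
    {"too many notifications"},
]

theme_counts = Counter(theme for session in interviews for theme in session)

# Themes raised in three or more conversations are the dominant patterns.
dominant = [t for t, n in theme_counts.items() if n >= 3]
print(sorted(dominant))
```

Whatever survives the threshold is what you carry into validation; everything else is noted but not acted on yet.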
Surveys as a discovery supplement
If you can't get interviews, a short survey (five to eight questions, mostly open-ended) distributed through your existing channels costs nothing. Google Forms works. Typeform has a free tier. The data is thinner than interview data, but it's better than assumptions.
The trap with surveys is asking leading questions or offering predefined answers too early. Keep discovery surveys open-ended: "What's the hardest part of [task]?" not "Rate the difficulty of [task] on a scale of 1-5."
Phase 3: Validation on zero budget
This is where you test whether your proposed information architecture actually makes sense to users. Card sorting and tree testing are the standard methods, and both are available for free.
Card sorting: minimum viable approach
A card sort gives participants a set of labeled cards (your navigation items, features, or content categories) and asks them to organize those cards into groups that make sense to them. It's the most reliable way to understand how users expect information to be structured.
What you need:
- 20 to 40 cards — each representing a feature, page, or content item. Fewer than 15 doesn't generate enough signal. More than 50 creates fatigue.
- 15 to 20 participants — card sorting requires more participants than usability testing because you're looking for statistical patterns, not individual behaviors. Below 15, your similarity matrix will be noisy.
- A card sorting tool — you can do paper card sorts in person, but remote card sorts reach more participants and handle the analysis automatically.
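Remote tools handle the analysis for you, but it demystifies the output to know what a similarity matrix actually measures: for each pair of cards, the share of participants who placed them in the same group. A minimal sketch with hypothetical sort results for four cards and three participants:

```python
from itertools import combinations

# Hypothetical open-sort results: each participant's grouping of four cards.
sorts = [
    [{"Billing", "Invoices"}, {"Reports", "Dashboards"}],
    [{"Billing", "Invoices", "Reports"}, {"Dashboards"}],
    [{"Billing", "Invoices"}, {"Reports"}, {"Dashboards"}],
]

cards = ["Billing", "Invoices", "Reports", "Dashboards"]

def similarity(a: str, b: str) -> float:
    """Share of participants who put cards a and b in the same group."""
    together = sum(any(a in g and b in g for g in groups) for groups in sorts)
    return together / len(sorts)

for a, b in combinations(cards, 2):
    print(f"{a} + {b}: {similarity(a, b):.0%}")
```

With only three participants, one idiosyncratic sorter swings every percentage by a third; with 15 to 20, individual quirks wash out, which is exactly why the minimum matters.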
Several free card sorting tools exist. CardSort offers unlimited free studies with AI-generated participants to supplement real responses — useful when you can't recruit 20 people organically.
What to test:
If you did competitive analysis, pull in the navigation labels you captured. Test whether users group features the way your competitors do or the way you plan to. If you did discovery interviews, use the language participants actually used, not your internal terminology.
Tree testing: the validation complement
Tree testing is the inverse of card sorting. Instead of asking users to organize cards, you give them your proposed navigation tree and ask them to find specific items. It answers: "Can users find what they need in this structure?"
Free tree testing is harder to find than free card sorting, but CardSort includes it alongside card sorts. If you're using separate tools, Optimal Workshop offers a limited free tier for tree tests.
Minimum participant counts
Budget researchers often under-recruit because each additional participant feels like a cost (even if the cost is just the effort of asking). Here are the minimums for reliable data:
| Method | Minimum participants | Ideal participants |
|---|---|---|
| Card sort (open) | 15 | 30+ |
| Card sort (closed) | 10 | 20+ |
| Tree test | 20 | 50+ |
Below these thresholds, you'll see patterns but can't be confident they're stable. If you can only get 8 participants, run a closed card sort (where you predefine the categories) rather than an open one — closed sorts produce usable data with fewer participants.
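Per-card agreement rate in a closed sort is the fraction of participants who placed a card in its most popular category. A sketch with hypothetical placements from eight participants (card names and categories are invented for illustration):

```python
from collections import Counter

# Hypothetical closed-sort placements: card -> one category choice
# per participant (8 participants).
placements = {
    "Billing": ["Settings"] * 7 + ["Workspace"],
    "Exports": ["Reports"] * 5 + ["Settings"] * 3,
}

for card, choices in placements.items():
    category, votes = Counter(choices).most_common(1)[0]
    print(f"{card}: {votes / len(choices):.0%} agree on '{category}'")
```

A card with high agreement ("Billing" here) is safe to place; a card that splits ("Exports") is the one worth a follow-up question or a second study.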
Phase 4: Communicating findings without a slide deck
Research that stays in your head or in a messy Google Doc might as well not have happened. The final phase is producing something stakeholders can read, understand, and act on in under 10 minutes.
The one-page research summary
Forget 40-slide decks. Stakeholders want:
- What we did — two sentences on methodology. "We analyzed 5 competitor navigation structures, interviewed 5 users, and ran a card sort with 20 participants."
- What we found — three to five bullet points. Lead with the finding that most directly impacts the current decision.
- What we recommend — concrete next steps tied to findings. "Rename 'Workspace' to 'Project' based on 80% agreement in card sort results."
- Supporting data — similarity matrices, agreement percentages, key quotes. Available for anyone who wants to dig deeper, but not required reading.
Report formats that work
A shareable link beats an attachment. Stakeholders lose attachments. They don't lose a URL they can reopen during sprint planning.
If you're assembling tools manually, create a Google Doc with your summary and embed screenshots of your card sort results. It works, but you'll spend 30 to 60 minutes formatting.
CardSort generates a stakeholder report automatically from your competitive analysis, interview themes, and card sort results. The report is a shareable link that updates as you refine your analysis. For budget research, the time savings on reporting often matter more than the time savings on data collection.
Presenting to skeptical stakeholders
Some stakeholders distrust qualitative research or small sample sizes. Address this directly:
- "Five interviews isn't statistically significant." Correct, and it doesn't need to be. You're identifying patterns, not proving hypotheses. Five interviews that all surface the same confusion about your settings page is a signal worth acting on.
- "Users said they want X, but users don't know what they want." You're not asking users to design the product. You're observing how they think about information structure. Card sort data shows mental models, not feature requests.
- "We don't have time for research." A competitive audit takes two hours. A card sort runs in the background while you do other work. The question isn't whether you have time for research — it's whether you have time to rebuild the navigation after launch because nobody tested it.
Tool stack comparison
Here's what the same research project looks like with a fragmented approach versus a connected platform:
Fragmented approach (free but disconnected)
| Phase | Tool | Cost | Limitation |
|---|---|---|---|
| Competitive analysis | Google Sheets | Free | Data stays in spreadsheet |
| Interviews | Zoom + Google Docs | Free | Manual transcription and theme extraction |
| Card sorting | Standalone tool | Free tier (limited) | Results don't connect to competitive data |
| Tree testing | Different tool | Free tier (limited) | Separate login, separate results |
| Reporting | Google Slides | Free | Manual assembly, static document |
- Total cost: $0
- Total setup time: 2-3 hours across tools
- Total reporting time: 1-2 hours of manual assembly
Connected platform approach
| Phase | Tool | Cost | Advantage |
|---|---|---|---|
| All phases | CardSort | Free (core) / $19/mo (Pro) | Competitive labels flow into card sorts. Interview themes inform study design. Report auto-generates. |
- Total cost: $0-19/month
- Total setup time: 30 minutes
- Total reporting time: 5 minutes (auto-generated)
The fragmented approach works. Plenty of good research has been done in Google Docs. But if you're running UX research for startups repeatedly — not as a one-off but as a regular practice — the accumulated friction of copying data between tools, reformatting results, and manually building reports adds up to hours per project.
Frequently asked questions
How much does a basic UX research project cost?
Zero, if you use free tools and recruit participants from existing user bases or communities. The main cost is your time: expect to spend 15 to 25 hours across all four phases for a thorough project, or 5 to 8 hours for a focused study targeting a single question like "how should we organize our settings page."
Can I do meaningful research with fewer than 10 participants?
Yes, with caveats. Five interview participants surface most major themes. Five card sort participants reveal obvious grouping patterns but not subtle ones. If you have fewer than 10 card sort participants, run a closed sort (predefined categories) and focus on agreement rate for individual cards rather than trying to interpret the full similarity matrix.
What's the fastest affordable user research method?
A closed card sort with 15 to 20 participants. You can set it up in under 10 minutes, distribute it via link, and have results within a day or two. It won't tell you everything, but it will tell you whether your proposed navigation structure matches user expectations — which is the question that matters most for most product decisions.
Should I pay for research tools or invest in more participants?
Invest in participants first. The best tool in the world produces meaningless results with three participants. Once you consistently recruit 15 or more participants per study, the ROI of a paid tool (faster setup, automatic analysis, connected reporting) starts to justify itself. At $19/month, the break-even is roughly one hour of saved manual work per month.
Further reading
- What Is Card Sorting? The Complete Guide
- Free card sorting tools compared (2026) — detailed breakdown of free tiers, participant limits, and analysis features across card sorting tools.
- How to present UX research to stakeholders — tactics for communicating research findings to executives, PMs, and engineers.
- The complete guide to card sorting — everything you need to know about open, closed, and hybrid card sorts.
- Usability Testing (UX Glossary) — what usability testing is, when to use it, and how it complements card sorting.
Ready to try it? Run your entire research project free — from competitive analysis to stakeholder report. No credit card, no trial expiration, no participant limits on free studies.