How to Choose the Right UX Research Method
You know you need user research, but you are not sure which method to use. Card sorting? A survey? Interviews? Tree testing? Each method answers different questions, carries different costs, and produces different types of evidence. Picking the wrong one wastes time and budget while leaving your actual question unanswered.
This guide provides a decision framework that matches your research question to the right method — and shows you how to combine methods for stronger results.
The Five Core Methods
Before choosing, you need to understand what each method does and what type of data it produces.
Card Sorting
What it does: Reveals how users naturally group and categorize content. Participants organize items into groups and (in open card sorts) create their own category names.
Data type: Qualitative (grouping patterns, user vocabulary) with quantitative elements (similarity scores, agreement rates)
Best for:
- Building a new information architecture from scratch
- Understanding user mental models for content organization
- Discovering what labels and terminology users expect
- Deciding how to structure navigation menus
Not for: Validating whether an existing structure works (use tree testing instead)
Participants needed: 15-30 | Time to run: 3-7 days | Cost: Low (unmoderated) to Medium (moderated)
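The quantitative side of card sorting comes down to co-occurrence: how often did two items land in the same group? A minimal sketch of that similarity score, using hypothetical sort data (the item names and groupings below are illustrative, not from any real study):

```python
from itertools import combinations

# Hypothetical open-sort results: each participant's grouping of four items.
# Group labels differ per participant, so only co-occurrence matters here.
sorts = [
    [{"Pricing", "Plans"}, {"Docs", "Tutorials"}],
    [{"Pricing", "Plans", "Docs"}, {"Tutorials"}],
    [{"Pricing", "Plans"}, {"Docs", "Tutorials"}],
]

def similarity(sorts):
    """Fraction of participants who placed each item pair in the same group."""
    items = sorted({item for sort in sorts for group in sort for item in group})
    scores = {}
    for a, b in combinations(items, 2):
        together = sum(
            any(a in group and b in group for group in sort) for sort in sorts
        )
        scores[(a, b)] = together / len(sorts)
    return scores

scores = similarity(sorts)
print(scores[("Plans", "Pricing")])    # 1.0 — all three participants agreed
print(scores[("Docs", "Tutorials")])   # 2/3 — weaker agreement
```

Card-sorting tools compute this matrix for you; the point is that "agreement rate" is just this fraction, and pairs near 1.0 are the groupings you can build navigation around.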
Tree Testing
What it does: Measures whether users can find specific content within a navigation structure. Participants navigate a text-only hierarchy to locate items described in task scenarios.
Data type: Quantitative (success rate, directness, time to completion)
Best for:
- Validating a proposed or existing information architecture
- Identifying specific navigation paths that fail
- Measuring findability before investing in visual design
- Comparing two structural approaches head-to-head
Not for: Discovering how content should be organized (use card sorting first)
Participants needed: 15-30 | Time to run: 3-7 days | Cost: Low
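The three tree-testing metrics are simple ratios over per-participant task results. A sketch with hypothetical data (the records below are invented for illustration, and "direct" here means the participant reached the answer without backtracking — definitions vary slightly by tool):

```python
# Hypothetical per-participant results for one tree-test task.
results = [
    {"success": True,  "direct": True,  "seconds": 14},
    {"success": True,  "direct": False, "seconds": 41},
    {"success": False, "direct": False, "seconds": 62},
    {"success": True,  "direct": True,  "seconds": 11},
]

n = len(results)
success_rate = sum(r["success"] for r in results) / n
directness = sum(r["success"] and r["direct"] for r in results) / n
avg_success_time = (
    sum(r["seconds"] for r in results if r["success"])
    / sum(r["success"] for r in results)
)

print(f"success: {success_rate:.0%}, directness: {directness:.0%}, "
      f"avg time (successes only): {avg_success_time:.0f}s")
```

A task with high success but low directness is a common pattern worth flagging: users get there eventually, but the path is not where they first expect it.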
Surveys
What it does: Collects structured responses from a large number of users through predefined questions. Scales efficiently for measuring attitudes, behaviors, and preferences.
Data type: Primarily quantitative with optional qualitative (open-ended responses)
Best for:
- Measuring satisfaction and attitudes at scale
- Understanding user demographics and behavior patterns
- Quantifying how widespread a known problem is
- Prioritizing features or improvements by user demand
- Collecting data from large user populations
Not for: Understanding why users feel or behave a certain way (use interviews) or evaluating navigation structure (use tree testing)
Participants needed: 50-200+ for statistical reliability | Time to run: 3-14 days | Cost: Low to Medium (depending on recruitment)
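The 50-200+ guideline follows from the standard margin-of-error formula for a proportion. A quick sketch of why sample size matters (using the worst-case p = 0.5 and a 95% confidence z of 1.96):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion; p=0.5 is the worst case."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (50, 100, 200):
    print(f"n={n}: ±{margin_of_error(n):.1%}")
# n=50:  ±13.9%
# n=100: ±9.8%
# n=200: ±6.9%
```

At 50 responses your "62% of users want X" could plausibly be anywhere from 48% to 76%; at 200 the band tightens enough to rank options with some confidence. This assumes a simple random sample, which survey panels only approximate.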
User Interviews
What it does: Uncovers motivations, pain points, workflows, and mental models through structured conversation. A researcher asks open-ended questions and follows interesting threads.
Data type: Qualitative (themes, quotes, stories, sentiment)
Best for:
- Exploring a new problem space you do not yet understand
- Understanding the "why" behind observed behaviors
- Discovering unmet needs and pain points
- Building empathy with target users
- Generating hypotheses for quantitative validation
Not for: Measuring how many users share a particular behavior (use surveys) or evaluating navigation (use tree testing)
Participants needed: 5-8 per user segment | Time to run: 1-3 weeks (including analysis) | Cost: Medium to High (researcher time per session)
Competitive Analysis
What it does: Evaluates how competitor products serve user needs through their design, navigation, and interaction patterns. Identifies gaps, conventions, and differentiation opportunities.
Data type: Qualitative (observations, patterns) with structured scoring
Best for:
- Understanding industry UX conventions users expect
- Identifying experience gaps no competitor fills
- Benchmarking your product against alternatives
- Informing card sort item lists and research planning
Not for: Understanding your own users' specific needs (use interviews or surveys)
Participants needed: 0 (researcher-driven) or 5-10 (if including user benchmarking) | Time to run: 1-2 weeks | Cost: Low to Medium (primarily researcher time)
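The "structured scoring" side of competitive analysis is often just a scorecard: rate each product on the same criteria, then look for criteria where nobody scores well. A minimal sketch with hypothetical products and scores (everything below is invented for illustration):

```python
# Hypothetical scorecard: criteria rated 1-5 per product by the researcher.
scores = {
    "Competitor A": {"navigation": 4, "onboarding": 2, "search": 5},
    "Competitor B": {"navigation": 3, "onboarding": 3, "search": 3},
    "Our product":  {"navigation": 3, "onboarding": 3, "search": 2},
}

for product, criteria in scores.items():
    avg = sum(criteria.values()) / len(criteria)
    print(f"{product}: avg {avg:.1f}")

# Criteria where every product scores below 4 are experience gaps —
# potential differentiation opportunities no competitor currently fills.
gaps = [
    c for c in next(iter(scores.values()))
    if all(s[c] < 4 for s in scores.values())
]
print("gaps:", gaps)  # ['onboarding']
```

The averages benchmark you against alternatives; the gap check surfaces the "no competitor fills this" opportunities the section above describes.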
The Decision Framework
Use these three questions to narrow your method selection:
Question 1: What type of answer do you need?
| You need to understand... | Best method |
|---|---|
| How users think content should be organized | Card sorting |
| Whether users can find content in your navigation | Tree testing |
| How many users share a behavior or attitude | Surveys |
| Why users behave a certain way | Interviews |
| How your UX compares to competitors | Competitive analysis |
If your question does not fit neatly into one row, you likely need a combination of methods.
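The table above is essentially a lookup, which can be sketched as a few lines of code (the key phrasings are illustrative; real triage of a research question takes judgment, not string matching):

```python
# Question 1 as a lookup: research question -> best-fit method.
METHOD_FOR = {
    "how users think content should be organized": "card sorting",
    "whether users can find content in your navigation": "tree testing",
    "how many users share a behavior or attitude": "survey",
    "why users behave a certain way": "interviews",
    "how your ux compares to competitors": "competitive analysis",
}

def pick_method(question):
    """Return the best-fit method, or flag that a combination is needed."""
    return METHOD_FOR.get(question.lower(), "combine methods (no single fit)")

print(pick_method("Why users behave a certain way"))  # interviews
print(pick_method("Why can't users find our pricing page?"))
# combine methods (no single fit) — a "why" + findability question
```

The fallback branch is the important part: a question that spans rows ("why can't users find X?") maps to a method pair, not a single method.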
Question 2: What are your constraints?
Short timeline (under 2 weeks): Surveys and unmoderated card sorts can be set up and completed quickly. Tree tests are similarly fast. Interviews require more scheduling coordination. Competitive analysis depends on how many competitors you include.
Limited budget: Unmoderated methods (card sorts, tree tests, surveys) are cheaper per participant than moderated methods (interviews). Competitive analysis costs mainly researcher time.
Small participant pool: Interviews work with 5-8 participants. Card sorts and tree tests need 15-30. Surveys need 50+. If you only have access to a small group, qualitative methods give you the most value per participant.
No existing IA: Start with card sorting to build structure, then validate with tree testing. Do not tree test a structure that was not informed by user research.
Existing IA with known problems: Tree test to identify specific failures. If problems are widespread, step back to card sorting to rebuild from user mental models.
Question 3: Where are you in the product lifecycle?
Discovery (exploring a new space): Start with interviews to understand user needs and workflows. Follow up with competitive analysis to understand the existing landscape.
Definition (structuring content or features): Use card sorting to build your IA, then tree testing to validate it. Surveys can help prioritize which content areas matter most.
Validation (testing specific designs): Tree testing validates navigation. Surveys measure satisfaction with specific features. Interviews explore why users struggle with particular flows.
Optimization (improving what exists): Surveys identify pain points at scale. Tree tests pinpoint navigation failures. Competitive analysis reveals where competitors have pulled ahead.
Method Combinations That Work
Single methods produce useful data. Combining methods produces evidence strong enough to drive confident decisions.
Card Sort then Tree Test (IA Workflow)
The classic IA validation workflow. Card sorting builds the structure from user mental models. Tree testing confirms the structure is navigable. This combination is essential for any significant IA project.
Timeline: 2-4 weeks | Participants: 30-60 total (different groups)
Survey then Interviews (Discovery Workflow)
Surveys identify what is happening across your user base. Interviews explain why it is happening. The survey findings tell you which interview questions to prioritize.
Timeline: 3-4 weeks | Participants: 50-200 (survey) + 5-8 (interviews)
Competitive Analysis then Card Sort (Strategy Workflow)
Competitive analysis reveals industry conventions and vocabulary. Those findings inform your card sort item list, ensuring you test real-world labels rather than internal jargon.
Timeline: 2-3 weeks | Participants: 15-30 (card sort only)
Interviews then Survey (Validation Workflow)
Interviews generate hypotheses about user needs and pain points. Surveys test whether those findings generalize across your broader user base.
Timeline: 3-5 weeks | Participants: 5-8 (interviews) + 100+ (survey)
Common Selection Mistakes
Defaulting to surveys for everything. Surveys are easy to run, but they cannot answer "why" questions. If you need to understand motivations or explore a problem space, interviews are the right tool despite being more resource-intensive.
Skipping card sorting and going straight to tree testing. Tree testing validates a structure, but if the structure was built on assumptions, you are validating the wrong thing. Card sort first when building something new.
Using interviews when you need scale. Five interview participants cannot tell you what percentage of your user base experiences a problem. If you need numbers, use a survey. If you need depth, use interviews. Know which one your decision requires.
Running one method when you need two. A card sort without a follow-up tree test leaves your IA unvalidated. A survey without follow-up interviews leaves you with patterns you cannot explain. Budget for method pairs when the decision is important.
Choosing methods based on what you know how to run. The method should match the question, not the researcher's comfort zone. If you have never run a tree test but your question is about findability, learn tree testing rather than substituting a survey.
Further Reading
- How to Run a Card Sort and Tree Test Together
- How to Write Effective UX Survey Questions
- How to Conduct User Interviews: A Beginner's Guide
- How to Run a Competitive UX Analysis
- UX Research on a Budget
Frequently Asked Questions
What is the cheapest UX research method? Surveys and unmoderated card sorts are the most cost-effective because they scale without requiring researcher time per session. A single survey can reach hundreds of participants at the cost of the tool subscription and any incentives. Interviews are the most expensive per participant due to the moderation time required.
Can I use multiple research methods on the same project? Yes, and combining methods is strongly recommended for important decisions. Each method has blind spots that another method covers. Card sorting reveals structure but not findability. Tree testing measures findability but not user vocabulary. Surveys quantify patterns but not reasons. Interviews explain reasons but not prevalence. Pairing methods produces well-rounded evidence.
When should I use qualitative vs quantitative research? Use qualitative methods (interviews, open card sorts) when you need to understand why users behave a certain way or explore a problem you do not yet understand well. Use quantitative methods (surveys, tree tests, closed card sorts) when you need to measure how many users share a behavior or validate a specific hypothesis with statistical confidence.