How to Plan and Run a Tree Test Study
Difficulty: Intermediate
Time Required: 2-3 hours for planning, 3-7 days for running
Tree testing is a powerful method for validating your information architecture before you build it. Unlike traditional usability testing, tree test studies strip away visual design to focus purely on findability—can users locate specific content within your proposed site structure? This guide will walk you through how to plan and execute a tree test study that delivers actionable insights for your IA validation process.
Tree testing (also called "reverse card sorting") presents users with a text-only hierarchy of your site's structure and asks them to find where they'd expect certain content or features to live. It's the perfect complement to card sorting and an essential step in findability testing.
What You'll Need
- A site structure or navigation hierarchy (can be from wireframes, existing sites, or card sort results)
- 6-8 realistic task scenarios based on user goals
- 15-30 participants who match your target audience
- A tree testing tool (such as Optimal Workshop's Treejack or a similar product)
- 15-20 minutes of each participant's time
- Spreadsheet for tracking results and analysis
Step 1: Define Your Research Questions
Start by clarifying exactly what you want to learn from your tree test study. Strong research questions drive better task design and more focused analysis.
Focus on specific findability challenges like: "Can users locate our return policy?" or "Do users look for pricing information under 'Services' or 'Products'?" Document 3-5 core questions that align with your IA decisions or known problem areas.
Example questions:
- Where do users expect to find account settings?
- Can users successfully locate technical documentation?
- Do users understand the difference between our "Solutions" and "Services" sections?
This step matters because vague research goals lead to generic tasks that don't help you make concrete IA improvements. Clear questions ensure every task serves a purpose.
Step 2: Prepare Your Tree Structure
Transform your information architecture into a text-only hierarchy that runs 2-4 levels deep. Remove any visual design elements, navigation aids, or promotional content that might appear on the actual site.
Use clear, descriptive labels and maintain consistent terminology throughout. If you have sections like "About Us," include the key subsections users would see (Team, History, Contact). Aim for 20-40 total items across all levels—enough to be realistic but not overwhelming.
Example structure:
Products
├── Software Solutions
│ ├── Project Management
│ └── Team Collaboration
├── Hardware
└── Pricing
Support
├── Help Documentation
├── Contact Support
└── Community Forums
Your tree should reflect the most current version of your proposed IA, incorporating insights from any previous card sorting or user research.
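If you keep the tree in a plain-text file alongside your study plan, a small script can sanity-check it before you paste it into your tool. The sketch below is a minimal example, assuming a nested-dictionary representation and helper names invented for illustration (they are not part of any tool's API); it encodes the example structure above and checks it against the item-count and depth guidelines.

```python
# Minimal sketch: the example tree as nested Python data, used only to
# sanity-check size and depth before pasting the structure into your tool.
# Labels match the example above; the representation itself is an assumption.

TREE = {
    "Products": {
        "Software Solutions": {
            "Project Management": {},
            "Team Collaboration": {},
        },
        "Hardware": {},
        "Pricing": {},
    },
    "Support": {
        "Help Documentation": {},
        "Contact Support": {},
        "Community Forums": {},
    },
}

def count_items(tree: dict) -> int:
    """Count every label in the tree (20-40 total is the guideline above)."""
    return sum(1 + count_items(children) for children in tree.values())

def max_depth(tree: dict) -> int:
    """Return the deepest level, to confirm the structure stays within 2-4 levels."""
    return 1 + max((max_depth(c) for c in tree.values() if c), default=0)

if __name__ == "__main__":
    print(f"{count_items(TREE)} items, {max_depth(TREE)} levels deep")
```

Running it on the example prints "10 items, 3 levels deep", comfortably within the guidelines above; a larger production tree is where the check earns its keep.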
Step 3: Create Realistic Task Scenarios
Develop 6-8 task scenarios based on real user goals and your research questions. Each task should ask participants to find a specific piece of content or complete a particular action within your tree structure.
Write tasks as realistic scenarios, not abstract instructions. Instead of "Find the contact page," write "You have a question about billing and want to speak with someone from customer service. Where would you go?" This approach mirrors how users actually approach your site.
Strong task examples:
- "You're interested in trying the software but want to understand the costs first. Where would you look?"
- "You've been using the product for a month and want to connect with other users to ask questions. Where would you go?"
- "Your colleague recommended a specific feature, but you want to read more about how it works before signing up. Where would you look?"
Avoid leading language or tasks that are too obvious—you want to uncover genuine findability issues, not confirm what you already know works well.
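It also helps to record, before launch, which tree location(s) you will count as correct for each scenario, so scoring decisions are made before you see any results. The sketch below is one minimal way to keep that list, assuming you maintain it yourself in code or a spreadsheet; the task IDs and accepted paths are illustrative, and some tasks may legitimately have more than one correct destination.

```python
# Minimal sketch: each task scenario paired with its accepted answer path(s)
# in the tree. Wording follows the examples above; IDs and paths are illustrative.

TASKS = [
    {
        "id": "pricing",
        "scenario": "You're interested in trying the software but want to "
                    "understand the costs first. Where would you look?",
        "correct_paths": [("Products", "Pricing")],
    },
    {
        "id": "community",
        "scenario": "You've been using the product for a month and want to "
                    "connect with other users to ask questions. Where would you go?",
        "correct_paths": [("Support", "Community Forums")],
    },
]
```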
Step 4: Recruit the Right Participants
Target 15-30 participants who represent your actual user base. Because tree testing measures quantitative success and failure rates rather than the rich behavior you observe in a handful of moderated sessions, it needs a larger sample than qualitative usability testing to produce numbers you can act on.
Focus on recruiting people who would actually use your site or product in real life. If you're testing a B2B software site, recruit people in relevant professional roles, not general consumers. Consider factors like industry experience, technical comfort level, and familiarity with your product category.
Recruitment criteria to consider:
- Job role or industry (for B2B products)
- Experience level with similar products/services
- Geographic location (if relevant to your business)
- Device preference (mobile vs desktop usage patterns)
Screen participants with 2-3 qualifying questions to ensure they match your target audience without revealing the study's specific focus.
Step 5: Set Up and Launch Your Tree Test
Configure your chosen tree testing tool with your prepared structure and task scenarios. Most tools allow you to randomize task order to prevent order bias, and some offer features like time tracking and confidence ratings.
Set clear instructions for participants: explain that they're looking at a text-only site structure and should click through the hierarchy as if they were navigating a real website. Let them know they can backtrack if needed, but encourage them to follow their first instincts.
Key setup considerations:
- Enable task randomization to minimize order effects
- Set reasonable time limits (2-3 minutes per task)
- Include a confidence rating question after each task
- Add an optional comments field for participant feedback
- Test your setup with 1-2 colleagues before launching
Send your tree test link to participants with clear instructions about timing and device requirements. Most participants can complete the study in 15-20 minutes.
Step 6: Monitor and Analyze Results
Track completion rates as responses come in, but avoid making changes to your study once it's live. Look for patterns in both successful and unsuccessful paths—where do users get lost, and what alternative paths do they explore?
Focus on three key metrics: task success rate (did users end up at the correct location?), directness (did they go straight to their answer, or backtrack along the way?), and time to completion. Success rates below 70% typically indicate IA problems that need addressing.
Analysis priorities:
- Tasks with success rates below 70%
- High variation in paths taken (suggests unclear categorization)
- Tasks where users backtrack frequently
- Consistent failure points across multiple participants
- Comments revealing user mental models
Create a simple spreadsheet tracking success rates, average clicks, and common failure points for each task. This becomes your action plan for IA improvements.
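If your tool lets you export raw responses, a short script can build that summary for you. The sketch below assumes a hypothetical CSV export with one row per participant per task and columns named task_id, success, direct, clicks, and seconds; real exports differ by tool, so adjust the column names to match yours.

```python
# Minimal analysis sketch: per-task success, directness, clicks, and time
# from a hypothetical raw-response CSV (column names are assumptions).

import csv
from collections import defaultdict

def summarize(path: str) -> None:
    per_task = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            per_task[row["task_id"]].append(row)

    for task_id, rows in per_task.items():
        n = len(rows)
        success = sum(r["success"] == "1" for r in rows) / n
        direct = sum(r["direct"] == "1" for r in rows) / n
        avg_clicks = sum(int(r["clicks"]) for r in rows) / n
        avg_seconds = sum(float(r["seconds"]) for r in rows) / n
        flag = "  <-- review" if success < 0.70 else ""
        print(f"{task_id}: {success:.0%} success, {direct:.0%} direct, "
              f"{avg_clicks:.1f} clicks, {avg_seconds:.0f}s{flag}")

summarize("tree_test_results.csv")
```

The flag on tasks below 70% success gives you a ready-made shortlist for the analysis priorities above.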
Step 7: Implement Changes and Validate
Use your tree test results to refine your information architecture, focusing on the biggest failure points first. If users consistently looked for "Pricing" under "Products" but you had it under "Plans," consider the move.
For significant changes, run a follow-up tree test with a smaller group (8-10 participants) to validate your improvements. This is especially important if your initial success rates were below 60% for critical tasks.
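When you compare rounds, keep the small follow-up sample in mind: with 8-10 participants, each response shifts a task's success rate by 10 points or more, so read the comparison directionally rather than as a precise measurement. A minimal sketch, assuming you have per-task success counts from both rounds (the numbers shown are hypothetical):

```python
# Minimal sketch: directional before/after comparison of one task's success rate.

def compare(task_id: str, before_success: int, before_n: int,
            after_success: int, after_n: int) -> None:
    before = before_success / before_n
    after = after_success / after_n
    print(f"{task_id}: {before:.0%} -> {after:.0%} "
          f"({'improved' if after > before else 'no improvement'})")

# Hypothetical counts for the pricing task from the earlier examples.
compare("pricing", before_success=11, before_n=22, after_success=8, after_n=10)
```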
Document your changes and the rationale behind them. This creates a clear trail from user research to design decisions, which helps justify your IA choices to stakeholders and informs future updates.
Tips and Best Practices
Keep tasks realistic: Base scenarios on actual user goals from support tickets, sales conversations, or previous research rather than assumptions about what users might want to do.
Test mobile-specific scenarios: If significant traffic comes from mobile devices, include tasks that reflect mobile user behaviors and contexts.
Balance breadth and depth: Include both broad exploratory tasks ("find information about pricing") and specific targeted tasks ("locate the API documentation").
Use participant quotes: Collect qualitative feedback through post-task comments or follow-up interviews to understand the "why" behind the numbers.
Common Mistakes to Avoid
Testing too early: Don't run tree tests on rough draft IAs. Your structure should be reasonably polished and reflect real labels you might use on the site.
Ignoring mobile users: Tree testing tools work on mobile devices, and mobile users often have different mental models for site navigation.
Over-testing edge cases: Focus most tasks on common user goals rather than rare scenarios that affect few users.
Changing mid-study: Resist the urge to modify tasks or structure while the study is running, even if you spot obvious issues.
Focusing only on failures: Successful paths tell you what's working well and should be preserved in your final IA.
Next Steps
After completing your tree test study, you'll have concrete data about your IA's strengths and weaknesses. Use these insights to refine your site structure before moving into visual design and prototyping.
Consider running a follow-up card sort study if your tree test revealed fundamental categorization problems, or move forward with first-click testing once you have wireframes ready.
Ready to validate your information architecture with real users? Tree testing provides the quantitative backing you need to make confident IA decisions and improve findability across your entire site.