
Remote Card Sorting: Best Practices & Complete Guide (2025)

Learn how to run effective remote card sorting studies. Best practices for unmoderated card sorts, participant recruitment, and getting reliable results online.

By Free Card Sort Team


Remote card sorting has become the standard for UX research. It's faster and cheaper than in-person studies, and the larger samples it enables often make results more reliable. Here's how to do it right.

Why Remote Card Sorting?

Advantages Over In-Person

✅ Faster Results

  • In-person: 1-2 weeks to schedule sessions
  • Remote: Get 30 responses in 2-3 days

✅ More Participants

  • In-person: 5-10 participants (scheduling limits)
  • Remote: 20-40 participants (no scheduling conflicts)

✅ Lower Cost

  • In-person: Travel, facility rental, incentives
  • Remote: Just incentives (often smaller)

✅ Geographic Diversity

  • In-person: Limited to local area
  • Remote: Reach users worldwide

✅ Natural Environment

  • In-person: Lab setting (artificial)
  • Remote: Users' actual environment

✅ Less Bias

  • In-person: Moderator influence
  • Remote: No researcher present

When Remote Doesn't Work

❌ Complex products requiring explanation
❌ Highly confidential content (security concerns)
❌ Elderly users unfamiliar with online tools
❌ Need for follow-up questions during the activity

Solution: Use a hybrid approach (remote card sort + video interviews)


Remote vs In-Person Card Sorting

| Factor | Remote (Unmoderated) | In-Person (Moderated) |
|---|---|---|
| Setup time | 5-10 minutes | 2-4 hours per session |
| Participants | 20-40 easily | 5-10 typical |
| Cost | $50-$300 | $500-$2,000 |
| Timeline | 2-4 days | 1-2 weeks |
| Moderator bias | None | Present |
| Follow-up questions | Not possible | Can ask during |
| Completion rate | 70-85% | ~100% |
Data qualityHigh (with good design)High (with good moderation)

Recommendation: Start with remote. Use in-person only if you need real-time follow-up.


Setting Up Remote Card Sorting

Step 1: Choose Your Tool

Must-have features:

  • ✅ Mobile-friendly interface
  • ✅ No login required for participants
  • ✅ Automatic randomization
  • ✅ Progress saving
  • ✅ Real-time results
  • ✅ Export to CSV

Recommended: Free Card Sort

  • Meets all criteria
  • Free for 3 studies
  • 5-minute setup
  • No participant friction

Alternatives:

  • Optimal Workshop (expensive but comprehensive)
  • UsabilityHub (basic features)
  • UserZoom (enterprise-focused)

Step 2: Create Clear Cards

Remote participants don't have you there to clarify. Cards must be self-explanatory.

✅ Good card names (clear, specific):

- Track My Order
- Return an Item
- Update Payment Method
- Contact Customer Support
- View Order History

❌ Bad card names (vague, confusing):

- Tracking
- Returns
- Payment
- Support
- History

Testing trick: Show cards to a colleague unfamiliar with your project. If they're confused, participants will be too.

Step 3: Write Foolproof Instructions

You won't be there to answer questions. Instructions must cover everything.

Essential elements:

  1. Welcome & context (1-2 sentences)
  2. Task explanation (what to do)
  3. Time estimate (set expectations)
  4. Reassurance (no right/wrong answers)
  5. Thank you

Template:

Welcome! Thank you for participating.

We're redesigning [Product Name] to make it easier to use.

YOUR TASK:
Please organize these [items] into groups that make sense to you.
Create category names that describe each group.

This will take about 10 minutes.

There are no right or wrong answers—we want to understand how
YOU think about these items.

Your input will help make [Product] better for everyone.

Thank you!

Pro tip: Test instructions with 2-3 people before full launch. Revise based on confusion.

Step 4: Optimize for Mobile

40-60% of remote participants will use mobile devices.

Mobile best practices:

  • ✅ Test on phone before launching
  • ✅ Keep card names short (2-5 words)
  • ✅ Use 30-40 cards max (mobile fatigue)
  • ✅ Choose tool with mobile-optimized UI
  • ✅ Avoid images unless necessary (slow loading)

How to test: Open study link on your phone, complete it yourself.


Recruiting Remote Participants

How Many Participants?

Open card sort: 20-30 participants

  • Patterns emerge around 15-20
  • 20-30 gives confidence
  • 30+ has diminishing returns

Closed card sort: 30-40 participants

  • Need more for statistical confidence
  • Validating hypothesis requires larger sample

Rule of thumb: More is better, but you'll see most patterns by 25.

Where to Find Participants

Option 1: Your Customer Base (Best!)

Pros:

  • ✅ Real users
  • ✅ Familiar with your product
  • ✅ Invested in improvements
  • ✅ Usually free or low incentive

How:

  • Email to customer list
  • In-app banner
  • Post-purchase survey invitation
  • Customer success outreach

Option 2: Research Panels

UserTesting.com

  • Pros: Fast (hours), quality screeners
  • Cons: Expensive ($30-$100 per response)

Respondent.io

  • Pros: Professional participants, good for B2B
  • Cons: $50-$200 per participant

Prolific.co

  • Pros: Academic quality, affordable ($5-$10/response)
  • Cons: Mostly consumer, limited B2B

Option 3: Social Media

Pros:

  • ✅ Free or low cost
  • ✅ Can reach niche audiences
  • ✅ High response rate if you have a following

Cons:

  • ❌ Sample bias (followers may not represent users)
  • ❌ Harder to screen participants

Best platforms:

  • LinkedIn (B2B products)
  • Reddit (niche communities)
  • Twitter/X (design/UX community)
  • Facebook groups (consumer products)

Option 4: Friends & Family (Last Resort)

Only use if:

  • Testing general concepts (not specific product)
  • Can't access real users
  • Budget is $0

Warning: Results will be less reliable. Friends want to help you, which biases their responses.

Incentives

Do you need incentives?

Yes, if:

  • ✅ Recruiting strangers
  • ✅ Study takes over 15 minutes
  • ✅ Targeting busy professionals
  • ✅ Want high completion rate

Maybe not, if:

  • ✅ Existing engaged customers
  • ✅ Study takes under 10 minutes
  • ✅ Users passionate about product
  • ✅ "Help us improve" motivation strong

Incentive amounts:

| Study Length | Customer List | General Public |
|---|---|---|
| 5-10 min | $0-$5 | $5-$10 |
| 10-15 min | $5-$10 | $10-$20 |
| 15-20 min | $10-$15 | $20-$30 |

Incentive options:

  • Amazon gift cards (easiest)
  • Product discounts (for customers)
  • Donation to charity (altruistic participants)
  • Entry into raffle (if budget limited)

Screening Participants

Not everyone should participate. Screen for:

Demographics (if relevant):

  • Age, location, occupation
  • Only screen if it matters for your product

Experience level:

  • New users vs. power users
  • Mix both, or analyze segments separately

Device usage:

  • Desktop, mobile, or both
  • Important if device affects how they think

Sample screening questions:

1. Have you used [Product Type] in the past 6 months?
   ☐ Yes ☐ No

2. How often do you use [Product]?
   ☐ Daily ☐ Weekly ☐ Monthly ☐ Never/Rarely

3. Which best describes you?
   ☐ Beginner ☐ Intermediate ☐ Advanced

4. What device will you use for this study?
   ☐ Desktop/Laptop ☐ Mobile phone ☐ Tablet

Running the Remote Study

Pre-Launch Checklist

Before sending to participants:

  • Test study yourself on desktop
  • Test on mobile phone
  • Have 2-3 colleagues complete it
  • Check that all cards are clear
  • Verify instructions are understood
  • Confirm study link works
  • Set up tracking (if using analytics)
  • Prepare recruitment message

Launching the Study

Soft launch (recommended):

  1. Day 1: Send to 5 participants
  2. Monitor: Check first few responses
  3. Verify: Cards make sense, no confusion
  4. Fix issues: If problems arise, fix before full launch
  5. Day 2: Send to remaining participants

Full launch:

  1. Send link to all participants at once
  2. Monitor first 10 responses closely
  3. If major issues, pause and fix

Monitoring Responses

Check daily:

  • Number of completions
  • Completion rate (started vs. finished)
  • Time to complete (average)
  • Any cards with low agreement (confusion)
  • Drop-off points

Red flags:

⚠️ Completion rate under 60%

  • Study may be too long
  • Instructions unclear
  • Technical issues
  • Cards confusing

⚠️ Average time under 5 minutes (for 30-40 cards)

  • Participants rushing
  • Not taking it seriously
  • May need to filter out low-quality responses

⚠️ Average time over 20 minutes

  • Study too long
  • Cards unclear
  • Too many cards

⚠️ Uniform groupings (everyone creates same categories)

  • Cards may be too obvious
  • Might need more nuanced cards

⚠️ Random groupings (no pattern)

  • Cards unclear
  • Participants not understanding task
  • Need clearer instructions

Sending Reminders

When: 3-5 days after initial send

Who: People who clicked link but didn't complete

Message template:

Subject: Quick reminder: Help us improve [Product]

Hi [Name],

A few days ago, we invited you to participate in a 10-minute
study to help improve [Product].

We'd love your input! Your perspective will help make [Product]
better for everyone.

[Study Link]

This should take about 10 minutes. Thank you!

[Your Name]

When NOT to remind:

  • Already have target number of responses
  • Person explicitly opted out
  • More than 7 days have passed

Ensuring Data Quality

Spot Bad Responses

Signs of low-quality data:

  1. Completion time under 3 minutes (for 30-40 cards): likely rushed or random
  2. All cards in 1-2 categories: not engaging with the task
  3. Nonsensical category names: "asdf", "Group 1", "Random"
  4. Duplicate participants (same IP, same completion pattern)
  5. Straight-line pattern (alphabetical grouping): minimum effort

Filter Out Bad Data

Most tools let you:

  • ✅ Review individual responses
  • ✅ Mark responses as invalid
  • ✅ Exclude from analysis
  • ✅ Export clean dataset

Guideline: Remove responses that are clearly rushed or random. But don't remove just because they differ—diversity is valuable!
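
If your tool only gives you a raw export, you can apply the same checks yourself. Below is a minimal Python sketch of that screening logic, assuming a hypothetical export shape (one dict per participant, with a duration and their groups); real field names vary by tool:

```python
from typing import Dict, List

# Hypothetical export shape; adjust field names to match your tool's export.
JUNK_NAMES = {"asdf", "group 1", "random", "misc", "stuff"}

def quality_flags(resp: Dict, total_cards: int) -> List[str]:
    """Return reasons a response looks rushed or random (empty list = looks fine)."""
    reasons = []
    if resp["duration_seconds"] < 180:  # under 3 minutes for 30-40 cards
        reasons.append("completed too fast")
    if len(resp["groups"]) <= 2 and total_cards >= 20:
        reasons.append("all cards in 1-2 categories")
    junk = [g for g in resp["groups"] if g.strip().lower() in JUNK_NAMES]
    if junk:
        reasons.append("nonsensical category names: " + ", ".join(junk))
    return reasons

responses = [
    {"id": "p01", "duration_seconds": 95, "groups": {"asdf": ["Card A", "Card B"]}},
    {"id": "p02", "duration_seconds": 540,
     "groups": {"Orders": ["Card A"], "Returns": ["Card B"], "Support": ["Card C"]}},
]
for resp in responses:
    flags = quality_flags(resp, total_cards=35)
    if flags:
        # Flag for manual review rather than auto-deleting (see guideline above).
        print(f"{resp['id']}: review manually ({'; '.join(flags)})")
```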

Improve Response Quality

Before study:

  • ✅ Clear instructions
  • ✅ Time estimate
  • ✅ "No right/wrong answers" reassurance

During study:

  • ✅ Show progress indicator
  • ✅ Allow saving and returning later
  • ✅ Mobile-friendly interface

Screening:

  • ✅ Recruit engaged participants
  • ✅ Offer appropriate incentives
  • ✅ Screen for relevant experience

Analyzing Remote Results

Step 1: Review Completion Metrics

Questions to ask:

  • How many started vs. finished?
  • Average completion time?
  • Any drop-off patterns?

Good benchmarks:

  • Completion rate: 70-85%
  • Average time: 8-15 minutes (for 30-40 cards)
  • Drop-off: under 15%
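
If you'd rather compute these metrics from a raw export than read them off a dashboard, here's a short Python sketch. It assumes a hypothetical responses.csv with participant_id, status, and duration_seconds columns; your tool's export will use different column names:

```python
import csv
from statistics import mean

# Hypothetical export: one row per participant who at least started the study.
with open("responses.csv", newline="") as f:
    rows = list(csv.DictReader(f))

completed = [r for r in rows if r["status"] == "completed"]
completion_rate = len(completed) / len(rows) if rows else 0.0
avg_minutes = mean(int(r["duration_seconds"]) for r in completed) / 60

print(f"Started: {len(rows)}, finished: {len(completed)}")
print(f"Completion rate: {completion_rate:.0%} (aim for 70-85%)")
print(f"Average time: {avg_minutes:.1f} min (expect 8-15 min for 30-40 cards)")
```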

Step 2: Examine Similarity Matrix

Look for:

  • Dark clusters (over 70% agreement) = strong groupings
  • Light areas (under 40%) = weak relationships
  • Isolated cards = don't fit anywhere

Tool: Most remote card sort platforms generate this automatically.
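
If you're curious what those platforms are computing (or you only have a raw export), the matrix is simply pairwise co-occurrence: for every pair of cards, the share of participants who placed them in the same group. A minimal Python sketch with made-up sorts:

```python
from collections import Counter
from itertools import combinations

# Each participant's sort as {category name: [cards]}; made-up data.
sorts = [
    {"Orders": ["Track My Order", "View Order History"],
     "Help": ["Contact Customer Support"]},
    {"My Stuff": ["Track My Order", "View Order History", "Contact Customer Support"]},
    {"Shopping": ["Track My Order"],
     "Account": ["View Order History", "Contact Customer Support"]},
]

together = Counter()
for sort in sorts:
    for group in sort.values():
        for pair in combinations(sorted(group), 2):  # sort for a stable pair key
            together[pair] += 1

# Similarity = share of participants who put the pair in the same group.
for (a, b), n in together.most_common():
    print(f"{a} + {b}: {n / len(sorts):.0%}")
```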

Step 3: Identify Popular Groupings

Questions:

  • What categories did users create?
  • How many categories (average)?
  • What category names were most common?
  • Any surprising groupings?

Expected:

  • 4-7 main categories (most common)
  • 60-80% agreement on core groupings
  • Some variation (that's normal!)
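
Finding the most common category names is a simple frequency count once you normalize case and whitespace, so "Orders", "orders", and "ORDERS " tally together. A quick Python sketch with made-up labels:

```python
from collections import Counter

# All category names created by participants (made-up data).
labels = ["Orders", "orders", "My Account", "Account", "Help", "help", "Orders"]

counts = Counter(label.strip().lower() for label in labels)
for name, n in counts.most_common(5):
    print(f"{name}: used by {n} participants")
```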

Step 4: Calculate Agreement

Agreement metrics:

High agreement (over 70%):

  • Strong consensus
  • Implement with confidence

Moderate agreement (50-70%):

  • General pattern with variation
  • Consider user segments

Low agreement (under 50%):

  • No consensus
  • Investigate further
  • Card may be unclear
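
These bands are easy to apply to the pairwise scores from the similarity matrix above. A small Python sketch (the scores here are made up):

```python
def agreement_band(score: float) -> str:
    """Map a pairwise similarity score (0-1) to the bands above."""
    if score > 0.70:
        return "high: implement with confidence"
    if score >= 0.50:
        return "moderate: consider user segments"
    return "low: investigate; a card may be unclear"

# Pairwise scores as produced by a similarity matrix (made-up values).
pair_scores = {
    ("Track My Order", "View Order History"): 0.82,
    ("Return an Item", "Update Payment Method"): 0.55,
    ("Contact Customer Support", "Update Payment Method"): 0.31,
}
for (a, b), score in pair_scores.items():
    print(f"{a} + {b} ({score:.0%}): {agreement_band(score)}")
```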

Step 5: Document Insights

Create findings doc:

  1. Overview
    • Number of participants
    • Study type
    • Dates conducted
  2. Key findings
    • Top 3-5 patterns
    • Surprising insights
    • Low-agreement cards
  3. Recommendations
    • Proposed structure
    • Validation steps
    • Next research needed
  4. Appendix
    • Individual responses
    • Full similarity matrix
    • Participant comments (if collected)

Remote Card Sorting Mistakes

Mistake #1: Too Many Cards

Problem: 60+ cards take 25+ minutes remotely
Impact: Fatigue, low completion rate, rushed responses
Solution: Limit to 30-50 cards; split into multiple studies if needed

Mistake #2: Unclear Instructions

Problem: No moderator to clarify
Impact: Confused participants, random groupings, abandonment
Solution: Test instructions with 3 people before launch

Mistake #3: No Mobile Testing

Problem: 40%+ of participants on mobile
Impact: Poor experience, low completion, skewed results
Solution: Test on phone before launching

Mistake #4: Wrong Participants

Problem: Recruited friends, not real users
Impact: Results don't reflect actual user mental models
Solution: Recruit target users; even a small sample beats the wrong people

Mistake #5: Not Monitoring Real-Time

Problem: Waited until end to check results
Impact: Missed issues, wasted participant time
Solution: Check first 5-10 responses, fix issues immediately

Mistake #6: Ignoring Drop-Offs

Problem: 50% completion rate, didn't investigate
Impact: Biased results (only motivated people finished)
Solution: If under 70% completion, figure out why and fix

Mistake #7: Over-Interpreting Noise

Problem: One person grouped oddly, and the entire design was changed
Impact: Focused on an outlier instead of the pattern
Solution: Look for patterns with over 60% agreement; investigate outliers, but don't over-index on them


Advanced Remote Techniques

Technique 1: Segmented Analysis

Analyze different user groups separately:

Examples:

  • New users vs. power users
  • Geographic regions
  • Desktop vs. mobile
  • Demographics (if relevant)

When to use: If you suspect different groups think differently
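
Most tools won't segment for you automatically, but if you tagged participants during screening you can compute pairwise similarity per segment and look for card pairs where the segments diverge. A rough Python sketch with hypothetical data:

```python
from collections import Counter
from itertools import combinations

def pair_scores(sorts):
    """Share of one segment's participants who placed each card pair together."""
    counts = Counter()
    for sort in sorts:
        for group in sort.values():
            for pair in combinations(sorted(group), 2):
                counts[pair] += 1
    return {pair: n / len(sorts) for pair, n in counts.items()}

# Hypothetical responses, tagged by segment during screening.
new_users = [{"Basics": ["Track My Order", "Return an Item"]}]
power_users = [{"Orders": ["Track My Order"], "Returns": ["Return an Item"]}]

a, b = pair_scores(new_users), pair_scores(power_users)
for pair in sorted(set(a) | set(b)):
    gap = abs(a.get(pair, 0.0) - b.get(pair, 0.0))
    if gap >= 0.30:  # the segments disagree strongly on this pair
        print(f"Segments diverge on {pair[0]} + {pair[1]} (gap: {gap:.0%})")
```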

Technique 2: Hybrid Remote-Moderated

Process:

  1. Participants complete card sort remotely
  2. Follow up with 5-8 participants via video call
  3. Ask them to explain their groupings

Benefits:

  • Quantitative data from remote sort
  • Qualitative insights from interviews
  • Best of both worlds

Technique 3: A/B Testing Card Sets

Run two studies simultaneously:

Study A: Original card names
Study B: Revised card names

Compare results to see which is clearer.

Technique 4: Multi-Round Studies

Round 1: Open sort to discover categories
Round 2: Closed sort to validate findings
Round 3: Tree testing to test findability

Timeline: 2-3 weeks total, far faster than in-person


Remote Card Sort Checklist

Setup Phase

  • Choose reliable tool (test mobile!)
  • Create 30-50 clear, specific cards
  • Write foolproof instructions
  • Test with 3 colleagues
  • Recruit 20-40 target participants
  • Set up incentives (if needed)

Launch Phase

  • Soft launch to 5 participants first
  • Monitor first responses
  • Fix any issues immediately
  • Send to all participants
  • Set reminder for day 3-5

Monitoring Phase

  • Check responses daily
  • Track completion rate (aim for over 70%)
  • Watch for confusing cards
  • Remove obviously bad responses
  • Send reminders after 3-5 days

Analysis Phase

  • Review similarity matrix
  • Identify popular groupings (over 70% agreement)
  • Note surprising findings
  • Calculate agreement scores
  • Document top 3-5 insights
  • Create recommendations

Frequently Asked Questions

Q: How long should participants have to complete?
A: Keep the study open for 5-7 days. Most responses come in the first 2-3 days. Send a reminder on day 3.

Q: What if completion rate is low (under 60%)?
A: Common causes: study too long, unclear instructions, technical issues, wrong participants. Investigate and fix.

Q: Can I run a card sort internationally?
A: Yes! Remote makes this easy. Consider time zones when sending invites. Translate if needed.

Q: Should I allow participants to create unlimited categories?
A: For open sorts, yes. Most people create 4-7. If someone creates 15+, that's still useful data.

Q: What if results are messy with no clear pattern?
A: It could mean: (1) cards are unclear, (2) the content is truly ambiguous (needs tags/search), or (3) you need more participants. Investigate.

Q: How do I compare remote vs. in-person results?
A: Studies show 85-95% similarity. Remote is slightly noisier, but the much larger sample size compensates.



Ready to Try It Yourself?

Start your card sorting study for free. Follow this guide step-by-step.

Related Guides & Resources

Explore more how-to guides and UX research tips