
A/B Testing

A/B testing (also called split testing) is a method of comparing two versions of a webpage, app feature, or individual element to determine which one performs better. Traffic is randomly split between the versions, and each version's performance is measured against a defined success metric.

How A/B Testing Works

Step 1 - Form a Hypothesis

  • Example: "Changing the button color from blue to green will increase conversions"

Step 2 - Create Variations

  • Version A (Control): Original blue button
  • Version B (Variant): New green button

Step 3 - Split Traffic

  • 50% see version A
  • 50% see version B
  • Random assignment
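
Random assignment is usually made deterministic so a returning visitor always sees the same variant. A minimal sketch, assuming a hypothetical user_id and experiment name rather than any particular tool's API:

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "green-button") -> str:
    """Deterministically bucket a user into variant A or B.

    Hashing user_id + experiment name keeps each user in the same
    variant on every visit while still splitting traffic ~50/50.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100           # 0-99
    return "A" if bucket < 50 else "B"       # 50/50 split

# The same user always lands in the same variant on repeat visits
print(assign_variant("user-123"))
print(assign_variant("user-123"))
```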

Step 4 - Measure Results

  • Track conversions, clicks, time on page
  • Statistical significance determines winner
  • Implement winning version

What to A/B Test

High-Impact Elements:

  • Headlines and copy
  • Call-to-action buttons
  • Images and videos
  • Form length and fields
  • Navigation structure
  • Pricing presentation
  • Page layout

Don't test everything at once - isolate one variable

A/B Testing Metrics

  • Conversion Rate: Percentage who complete the goal
  • Click-Through Rate (CTR): Percentage who click
  • Bounce Rate: Percentage who leave immediately
  • Time on Page: How long users engage
  • Revenue Per Visitor: Economic impact
  • Form Completion Rate: For sign-ups, purchases
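
As a rough illustration of how a couple of these metrics are computed from raw counts (function names and numbers below are made up for the example):

```python
def conversion_rate(conversions: int, visitors: int) -> float:
    """Share of visitors who completed the goal."""
    return conversions / visitors if visitors else 0.0

def revenue_per_visitor(total_revenue: float, visitors: int) -> float:
    """Economic impact per visitor, useful when a variant changes spend, not just clicks."""
    return total_revenue / visitors if visitors else 0.0

# Illustrative numbers only
print(f"CR:  {conversion_rate(150, 1000):.1%}")           # 15.0%
print(f"RPV: ${revenue_per_visitor(4200.0, 1000):.2f}")   # $4.20
```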

Statistical Significance

Why it matters:

  • Need enough data to trust results
  • Usually need 95%+ confidence
  • Small sample = unreliable results
  • Larger differences need less traffic to prove

Example (small sample):

  • Version A: 100 visitors, 10 conversions (10%)
  • Version B: 100 visitors, 11 conversions (11%)
  • Not significant - need more data!

Example (larger sample):

  • Version A: 1,000 visitors, 100 conversions (10%)
  • Version B: 1,000 visitors, 150 conversions (15%)
  • Significant - B is clearly better!
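
To make the "need more data" intuition concrete, here is a sketch of a two-proportion z-test using only Python's standard library; it reproduces the two scenarios above (exact p-values depend on which statistical test you choose):

```python
from statistics import NormalDist

def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# 100 visitors each, 10% vs 11%: p ≈ 0.82 -- far from significant
print(two_proportion_p_value(10, 100, 11, 100))

# 1,000 visitors each, 10% vs 15%: p ≈ 0.0007 -- significant at the 95% level
print(two_proportion_p_value(100, 1000, 150, 1000))
```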

A/B Testing + Card Sorting

Use together for IA optimization:

Card Sorting First: Discover user mental models

  • What categories make sense?
  • How should content be organized?
  • What labels do users understand?

A/B Test Implementation: Validate in production

  • Test old vs new navigation
  • Compare conversion rates
  • Measure task completion

Example: Card sorting reveals that users prefer "Plans" over "Pricing"; a follow-up A/B test shows the "Plans" label converts 23% better.

Common Mistakes

❌ Testing too many things: Can't tell what worked
❌ Stopping too early: Need statistical significance
❌ Ignoring segments: Different users behave differently
❌ No clear hypothesis: Just changing randomly
❌ Testing tiny changes: Button shade won't move the needle
❌ Ignoring context: Seasonal effects, traffic sources

Multivariate Testing

A/B Testing: One element, two versions
Multivariate Testing (MVT): Multiple elements, multiple versions tested in combination

Example MVT:

  • Test headline (2 versions)
  • Test image (2 versions)
  • Test button (2 versions)
  • = 2 × 2 × 2 = 8 total combinations
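
A quick way to see where the 8 comes from (the element labels are placeholders):

```python
from itertools import product

headlines = ["Headline A", "Headline B"]
images = ["Image A", "Image B"]
buttons = ["Button A", "Button B"]

# Every element is combined with every other: 2 x 2 x 2 = 8 combinations,
# and each combination needs its own share of traffic to reach significance.
combinations = list(product(headlines, images, buttons))
print(len(combinations))  # 8
for combo in combinations:
    print(combo)
```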

When to use:

  • MVT: High-traffic sites (every combination needs its own sample)
  • A/B: Most situations (simpler, clearer results)

Tools for A/B Testing

  • Enterprise: Optimizely, VWO, Adobe Target
  • Mid-Market: Google Optimize (free, but discontinued in 2023), Unbounce
  • DIY: Custom code with analytics
  • E-commerce: Built into Shopify, BigCommerce

Sample Size Calculator

Factors that determine test duration:

  • Traffic: More traffic = faster results
  • Baseline Conversion: Lower conversion needs more traffic
  • Expected Lift: Bigger changes prove faster
  • Confidence Level: 95% is standard

Typical test duration: 1-4 weeks
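
For a back-of-the-envelope estimate, the standard two-proportion sample-size approximation can be sketched as below; the baseline, lift, and traffic figures are illustrative, and real calculators may differ slightly in their assumptions:

```python
from math import ceil
from statistics import NormalDist

def visitors_per_variant(baseline: float, relative_lift: float,
                         confidence: float = 0.95, power: float = 0.80) -> int:
    """Approximate visitors needed per variant for a two-proportion test."""
    p1 = baseline
    p2 = baseline * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # 1.96 at 95% confidence
    z_beta = NormalDist().inv_cdf(power)                      # 0.84 at 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# 5% baseline conversion, hoping to detect a 20% relative lift (5% -> 6%)
n = visitors_per_variant(0.05, 0.20)
print(n)  # ≈ 8,155 visitors per variant

# With, say, 7,000 visitors/week split across both variants:
print(f"≈ {2 * n / 7000:.1f} weeks")  # ≈ 2.3 weeks, inside the typical 1-4 week range
```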

Best Practices

✅ One clear goal: Don't optimize multiple metrics
✅ Test high-traffic pages: Need sufficient sample
✅ Run full weeks: Account for weekly patterns
✅ Document everything: Learnings for future tests
✅ Test big changes: Small tweaks rarely matter
✅ Have a hypothesis: Know why you're testing

When NOT to A/B Test

Don't test if:

  • Too little traffic (need 1000+ visitors/week minimum)
  • Can't reach significance in reasonable time
  • Change is obviously better (e.g., an accessibility fix)
  • Change is driven by a legal or compliance requirement
  • You're just guessing randomly

Better approaches:

  • Usability testing for qualitative insights
  • Card sorting for IA decisions
  • Analytics for behavior patterns

Real Examples

Obama Campaign 2008

  • Tested landing page variations
  • Winner increased sign-ups 40%
  • Generated $60M in additional donations

Booking.com

  • Tests everything constantly
  • "Only X rooms left!" messaging
  • Urgency increases bookings 12%

Amazon

  • Tested adding reviews
  • Increased conversions significantly
  • Now core to their strategy

A/B Test Your Navigation

After using card sorting to design navigation:

  1. Create control: Current navigation
  2. Create variant: Card sort-based navigation
  3. Define success: Task completion, conversions
  4. Run test: 2-4 weeks
  5. Measure impact: Data-driven decision
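
As a loose sketch of what such a test plan might look like in code (all names and thresholds here are illustrative, not a specific tool's configuration):

```python
from dataclasses import dataclass, field

@dataclass
class NavigationABTest:
    """Illustrative plan for validating a card sort-based navigation."""
    control: str = "current-navigation"
    variant: str = "card-sort-navigation"
    success_metrics: list = field(default_factory=lambda: ["task_completion_rate", "conversion_rate"])
    min_runtime_weeks: int = 2
    confidence: float = 0.95

plan = NavigationABTest()
print(f"Run '{plan.control}' vs '{plan.variant}' for at least {plan.min_runtime_weeks} "
      f"full weeks; declare a winner only at {plan.confidence:.0%} confidence "
      f"on {', '.join(plan.success_metrics)}.")
```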

Optimize your IA with card sorting first, then validate with A/B testing at freecardsort.com

