Time on Task

Time on task measures how long a user takes to complete a specific action, recorded in seconds or minutes. It's one of the most straightforward usability metrics you can collect — start the clock when a user begins a task, stop it when they finish (or give up). The number itself is simple. The interpretation is where it gets interesting.

Key Takeaways

  • Use median, not mean: A few confused participants will skew your average. Median represents the typical user experience
  • Compare, don't benchmark: Time on task is most useful as a before/after comparison or competitive benchmark, not as an absolute target
  • Faster isn't always better: Tasks like comparing insurance plans or reading safety instructions should take time. Optimize for appropriate pace, not raw speed
  • Pair with task success rate: Fast completion means nothing if users completed the wrong task. A user who confidently clicks the wrong category in 3 seconds isn't a success story
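The median-vs-mean point is easy to see with numbers. A minimal sketch with made-up time-on-task samples, where one confused participant drags the mean well above what anyone typically experienced:

```python
from statistics import mean, median

# Hypothetical time-on-task samples in seconds. One confused
# participant (180 s) skews the distribution to the right.
times = [14, 16, 17, 19, 21, 22, 25, 180]

print(mean(times))    # 39.25 — pulled up by the single outlier
print(median(times))  # 20.0  — the typical user experience
```

No participant took anywhere near 39 seconds, yet the mean suggests they did. The median sits inside the cluster of typical times.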

What Time on Task Tells You

Time on task acts as a proxy for navigation friction. When users take 34 seconds to find your returns policy and you redesign the IA based on card sort data and that drops to 12 seconds, you've quantified the impact of better information architecture. That 22-second improvement across thousands of daily visitors translates directly into reduced support tickets and higher task completion.

The metric becomes especially powerful in tree testing. Strip away visual design, give participants a text-only hierarchy, and measure how long it takes them to navigate to the right node. Slow times in tree tests isolate IA problems from visual design problems — if users are slow to find "Billing" in a tree test, the issue is your category structure, not your button colors.

How to Measure It

Record start and end times for each task. Most testing platforms handle this automatically, but if you're running moderated sessions, use a stopwatch or timestamp your recordings.

What counts as "start": The moment the participant makes their first action after reading the task prompt. Reading time doesn't count; start the clock at the first click, tap, or keystroke.

What counts as "done": When the participant reaches the correct destination, completes the action, or explicitly gives up. Set a maximum time limit (typically 2-5 minutes depending on task complexity) so you're not waiting indefinitely on participants who are stuck but too polite to say so.

Handle outliers carefully. A participant who takes 180 seconds when everyone else took 15-20 probably got distracted, misunderstood the task, or encountered a technical issue. Don't just delete outliers — investigate them. Sometimes the slowest participant reveals a legitimate problem that others navigated around by luck.

Time on Task and Information Architecture

After you've run a card sort and built a new IA, time on task is your primary validation metric. The workflow:

  1. Measure time on task with the current navigation (baseline)
  2. Run a card sort to understand user mental models
  3. Restructure your IA based on card sort results
  4. Measure time on task again with the new structure

If the same tasks take less time with the new IA, your card sort data translated into a genuinely better structure. If times stay flat or increase, something went wrong in the translation from card sort clusters to navigation categories — revisit your agreement rates and check whether you built around high-agreement groupings or forced an organizational structure onto ambiguous data.
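The before/after comparison in steps 1 and 4 can be sketched as a median comparison. The function and the sample numbers below are hypothetical; a real analysis would also look at task success rates and the spread of times, not just the midpoint:

```python
from statistics import median

def ia_improvement(baseline, redesign):
    """Compare median time on task before and after an IA change.

    Uses medians rather than means because time-on-task data is
    right-skewed. Returns both medians and the percent reduction.
    """
    before = median(baseline)
    after = median(redesign)
    pct_faster = (before - after) / before * 100
    return before, after, pct_faster

# Seconds to find the returns policy: old nav vs. new nav (made-up data,
# each sample includes one slow participant the median shrugs off)
baseline = [30, 34, 36, 41, 29, 33, 90, 35]
redesign = [11, 12, 14, 10, 13, 15, 12, 40]

before, after, pct = ia_improvement(baseline, redesign)
print(before, after)  # 34.5 12.5
```

A drop like this supports the new structure; flat or rising medians send you back to the card sort agreement rates, as described above.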

When Faster Isn't Better

Not every task should be fast. Product comparison pages, financial disclosures, medical information — these are contexts where quick task completion might mean users aren't reading carefully enough. If a user selects a health insurance plan in 8 seconds, they didn't compare options; they picked the first one.

For these tasks, track time on task alongside comprehension or confidence measures. The goal isn't minimum time; it's appropriate engagement. A well-designed comparison page might increase time on task while also increasing decision confidence and reducing post-purchase regret.

Frequently Asked Questions

What is a good time on task for usability testing? There is no universal benchmark because acceptable time on task depends entirely on the complexity of the task. Simple navigation tasks like finding a phone number should take under 15 seconds. Multi-step tasks like completing a checkout flow might reasonably take 2-3 minutes. The most useful approach is to compare time on task before and after design changes, or between your product and a competitor, rather than targeting an absolute number.

Should you use mean or median for time on task data? Use the median. Time on task data almost always has a right skew because a few participants take much longer than others due to confusion, distractions, or getting lost. The mean gets pulled upward by these outliers and misrepresents typical user experience. Median gives you the midpoint that better reflects what most users actually experience.

How does time on task relate to card sorting results? Card sorting builds the information architecture, and time on task measures whether that architecture works. After implementing navigation structures based on card sort results, run tree tests or usability tests and measure time on task. If users find items faster than they did with the old structure, the card sort data translated into effective IA. Persistent slow times point to categories or labels that need revisiting.
