Time on task measures how long a user takes to complete a specific action, recorded in seconds or minutes. It's one of the most straightforward usability metrics you can collect — start the clock when a user begins a task, stop it when they finish (or give up). The number itself is simple. The interpretation is where it gets interesting.
Time on task acts as a proxy for navigation friction. When users take 34 seconds to find your returns policy and you redesign the IA based on card sort data and that drops to 12 seconds, you've quantified the impact of better information architecture. That 22-second improvement, multiplied across thousands of daily visitors, adds up to fewer support tickets and higher task completion rates.
The metric becomes especially powerful in tree testing. Strip away visual design, give participants a text-only hierarchy, and measure how long it takes them to navigate to the right node. Slow times in tree tests isolate IA problems from visual design problems — if users are slow to find "Billing" in a tree test, the issue is your category structure, not your button colors.
Record start and end times for each task. Most testing platforms handle this automatically, but if you're running moderated sessions, use a stopwatch or timestamp your recordings.
What counts as "start": the participant's first action after reading the task prompt. Reading time shouldn't count against the task — start the clock at the first click, tap, or keystroke.
What counts as "done": When the participant reaches the correct destination, completes the action, or explicitly gives up. Set a maximum time limit (typically 2-5 minutes depending on task complexity) so you're not waiting indefinitely on participants who are stuck but too polite to say so.
Handle outliers carefully. A participant who takes 180 seconds when everyone else took 15-20 probably got distracted, misunderstood the task, or encountered a technical issue. Don't just delete outliers — investigate them. Sometimes the slowest participant reveals a legitimate problem that others navigated around by luck.
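The capping and outlier-flagging steps above can be sketched in a few lines. This is a minimal illustration, not a prescribed implementation: the session format, the 300-second cap, and the MAD threshold of 3 are all assumptions you'd tune to your own study.

```python
from statistics import median

def task_times(sessions, cap_seconds=300):
    """Compute per-participant time on task from (start, end) timestamps,
    capping at the study's maximum time limit (assumed 300 s here)."""
    return [min(end - start, cap_seconds) for start, end in sessions]

def flag_outliers(times, k=3.0):
    """Flag times more than k median absolute deviations above the median.
    Flagged participants should be investigated, not deleted."""
    med = median(times)
    mad = median(abs(t - med) for t in times) or 1e-9
    return [t for t in times if (t - med) / mad > k]

times = task_times([(0, 15), (0, 18), (0, 16), (0, 20), (0, 180)])
print(flag_outliers(times))  # → [180]: this participant warrants a closer look
```

Median absolute deviation is used instead of standard deviation because, like the median itself, it isn't distorted by the very outliers you're trying to find.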
After you've run a card sort and built a new IA, time on task is your primary validation metric. The workflow: measure time on task for your key tasks against the old structure, implement the new navigation based on your card sort data, then re-run the same tasks with fresh participants and compare the two sets of times.
If the same tasks take less time with the new IA, your card sort data translated into a genuinely better structure. If times stay flat or increase, something went wrong in the translation from card sort clusters to navigation categories — revisit your agreement rates and check whether you built around high-agreement groupings or forced an organizational structure onto ambiguous data.
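The before/after comparison can be reduced to a single number: the percent reduction in median time on task. A minimal sketch, assuming you've already collected per-participant times for both structures (the sample values below are invented for illustration):

```python
from statistics import median

def ia_improvement(old_times, new_times):
    """Percent reduction in median time on task after an IA change.
    Negative values mean the new structure is slower (a regression)."""
    old_med, new_med = median(old_times), median(new_times)
    return (old_med - new_med) / old_med * 100

old = [30, 32, 34, 41, 120]   # seconds per participant, old navigation
new = [10, 11, 12, 14, 45]    # same tasks, new IA
print(f"{ia_improvement(old, new):.0f}% faster")  # → "65% faster"
```

Medians are compared rather than means so that one lost participant in either sample can't mask or exaggerate the effect of the redesign.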
Not every task should be fast. Product comparison pages, financial disclosures, medical information — these are contexts where quick task completion might mean users aren't reading carefully enough. If a user selects a health insurance plan in 8 seconds, they didn't compare options; they picked the first one.
For these tasks, track time on task alongside comprehension or confidence measures. The goal isn't minimum time; it's appropriate engagement. A well-designed comparison page might increase time on task while also increasing decision confidence and reducing post-purchase regret.
What is a good time on task for usability testing? There is no universal benchmark because acceptable time on task depends entirely on the complexity of the task. Simple navigation tasks like finding a phone number should take under 15 seconds. Multi-step tasks like completing a checkout flow might reasonably take 2-3 minutes. The most useful approach is to compare time on task before and after design changes, or between your product and a competitor, rather than targeting an absolute number.
Should you use mean or median for time on task data? Use the median. Time on task data almost always has a right skew because a few participants take much longer than others due to confusion, distractions, or getting lost. The mean gets pulled upward by these outliers and misrepresents typical user experience. Median gives you the midpoint that better reflects what most users actually experience.
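The skew effect is easy to see with a small invented sample, where one lost participant drags the mean far above what any typical user experienced:

```python
from statistics import mean, median

# Right-skewed time-on-task sample: one stuck participant inflates the mean
times = [12, 14, 15, 16, 180]
print(mean(times))    # → 47.4 (pulled up by the 180 s outlier)
print(median(times))  # → 15 (closer to the typical experience)
```

Four of the five participants finished in 12-16 seconds, yet the mean reports 47.4; the median of 15 is the honest summary.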
How does time on task relate to card sorting results? Card sorting builds the information architecture, and time on task measures whether that architecture works. After implementing navigation structures based on card sort results, run tree tests or usability tests and measure time on task. If users find items faster than they did with the old structure, the card sort data translated into effective IA. Persistent slow times point to categories or labels that need revisiting.