Error rate is the percentage of users who take a wrong path, click the wrong category, or fail to complete a task correctly. In tree testing, it's the share of participants who navigate to the wrong endpoint. In usability testing, it includes wrong clicks, backtracking, and task failures. Error rate doesn't just tell you something is broken — it tells you exactly where.
Error rate exposes the gap between where users expect to find something and where you put it. If 40% of users click "Account Settings" when trying to find billing information, that's a 40% error rate telling you one of three things: the "Billing" label is buried or missing, "Account Settings" sounds like it should contain billing, or your IA splits a concept that users see as unified.
This is where error rate connects directly to card sorting. If your card sort showed 55% of participants putting "Billing" under "Account" and you overrode that data to create a separate "Payments" section, the 40% error rate in your tree test is the predictable consequence. Low agreement in the sort becomes high error in the navigation.
Binary errors: Did the participant reach the correct destination? Yes or no. This is the simplest and most common measurement, especially in tree testing. Divide wrong-destination participants by total participants.
Per-step errors: In multi-step tasks, track errors at each decision point. A participant might navigate correctly through three levels of hierarchy and make a wrong turn at the fourth. Per-step measurement reveals which specific branch of your IA is causing confusion, not just that confusion exists somewhere in the path.
First-click errors: The first click in a navigation task is a strong predictor of overall success. In published first-click studies, participants who got their first click right went on to complete the task roughly 87% of the time, while those whose first click was wrong succeeded less than half the time. Track first-click error rate separately; it's your earliest warning signal.
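The per-step and first-click measurements above can be sketched in a few lines. Everything here is illustrative: the task, the correct path, and the participant paths are hypothetical.

```python
# Hypothetical tree-test data: each participant's sequence of category
# clicks for one task, compared against the correct path.
correct_path = ["Account", "Billing", "Payment Methods"]

participant_paths = [
    ["Account", "Billing", "Payment Methods"],   # fully correct
    ["Settings", "Billing", "Payment Methods"],  # wrong first click
    ["Account", "Profile", "Payment Methods"],   # wrong turn at step 2
]

def per_step_error_rates(paths, correct):
    """Share of participants who clicked wrong at each decision point."""
    rates = []
    for step, expected in enumerate(correct):
        wrong = sum(1 for p in paths if step >= len(p) or p[step] != expected)
        rates.append(wrong / len(paths) * 100)
    return rates

rates = per_step_error_rates(participant_paths, correct_path)
first_click_error_rate = rates[0]  # earliest warning signal
```

Binary error rate falls out of the same data: a participant whose final click misses the correct endpoint counts as a wrong-destination error.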
Individual error rates are useful. Error patterns across tasks are where the real insights live.
If one task has a 45% error rate while everything else sits below 10%, you have a localized labeling or placement problem. Fix that one category and retest.
If multiple tasks in the same section of your IA show elevated error rates (25-40%), the problem is structural. The entire section's organization doesn't match user mental models, and you need to revisit the card sort data for that content cluster.
If error rates are uniformly high (30%+) across the entire navigation, your IA needs a fundamental rethink — not a label tweak. Go back to an open card sort and let participants build the structure from scratch rather than trying to patch the existing one.
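The three diagnostic patterns above can be expressed as a rough triage function. The task names and rates are invented, and the thresholds simply restate the ones from the text; tune them to your own data.

```python
# Hypothetical per-task error rates (percentages) from a tree test.
task_error_rates = {
    "find_billing": 45.0,
    "update_email": 8.0,
    "download_invoice": 6.0,
    "change_password": 9.0,
}

def triage(rates, high=30.0, low=10.0):
    """Classify the error pattern: systemic, localized, or structural."""
    values = list(rates.values())
    if all(v >= high for v in values):
        return "systemic: rethink the IA with an open card sort"
    outliers = [t for t, v in rates.items() if v >= high]
    if len(outliers) == 1 and all(
        v < low for t, v in rates.items() if t not in outliers
    ):
        return f"localized: fix and retest {outliers[0]}"
    return "structural: revisit card sort data for the affected section"

print(triage(task_error_rates))  # localized: fix and retest find_billing
```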
The most valuable use of error rate is measuring the before-and-after impact of an IA change driven by card sort data. Run a tree test on your current navigation, record error rates per task, then run the same tasks against the new structure built from card sort clusters.
Effective card sort-driven redesigns typically reduce error rates by 25-50% on previously problematic tasks. If you're not seeing that kind of improvement, check whether you implemented the high-agreement groupings faithfully or whether organizational politics watered down the card sort findings.
Watch for error migration: fixing one category's error rate sometimes increases errors on adjacent categories if the restructuring moved content in unexpected ways. Always test the full navigation, not just the sections you changed.
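A minimal sketch of that before-and-after comparison, including a check for error migration. The task names and rates are hypothetical, and the migration threshold is an arbitrary illustrative cutoff.

```python
# Hypothetical per-task error rates (%) before and after an IA change.
before = {"find_billing": 40.0, "update_email": 12.0, "cancel_plan": 18.0}
after  = {"find_billing": 15.0, "update_email": 22.0, "cancel_plan": 16.0}

def compare(before, after, migration_threshold=5.0):
    """Per-task change in error rate; a positive delta is an improvement.

    Returns the delta report plus any tasks that regressed by more than
    the threshold, which may indicate error migration from restructuring.
    """
    report = {task: before[task] - after[task] for task in before}
    migrated = [t for t, d in report.items() if d <= -migration_threshold]
    return report, migrated

report, migrated = compare(before, after)
# find_billing improved by 25 points; update_email worsened by 10 points,
# a possible case of error migration worth investigating.
```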
Error rate and task success rate are related but not redundant. Task success rate tells you the final outcome — did they get there? Error rate captures the journey — did they stumble along the way? A user who backtracks three times but eventually arrives at the right destination counts as a success in task success rate but reveals problems through error rate metrics.
Both metrics together expose a common pattern: users who succeed despite errors. These "struggled successes" frustrate users and generate support tickets even though the task was technically completed. If your task success rate looks healthy but your error rate is high, your IA is survivable but not intuitive.
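Flagging "struggled successes" takes only a filter over the two metrics. The per-task numbers and the cutoffs below are hypothetical.

```python
# Hypothetical per-task metrics: task success rate vs. error rate (%).
tasks = {
    "checkout": {"success": 92.0, "errors": 35.0},
    "search":   {"success": 95.0, "errors": 5.0},
}

def struggled_successes(tasks, success_floor=80.0, error_ceiling=20.0):
    """Tasks users complete despite stumbling: survivable, not intuitive."""
    return [
        name for name, m in tasks.items()
        if m["success"] >= success_floor and m["errors"] > error_ceiling
    ]

print(struggled_successes(tasks))  # ['checkout']
```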
What is a good error rate in usability testing? For simple navigation tasks, aim for error rates below 10%. For complex multi-step tasks, error rates under 20% are typical of well-designed interfaces. Any task with an error rate above 30% signals a design problem that needs attention. However, context matters — a 15% error rate on a rarely used admin feature is less urgent than a 15% error rate on your primary checkout flow.
How do you calculate error rate? Divide the number of participants who made an error by the total number of participants who attempted the task, then multiply by 100. For example, if 12 out of 30 participants clicked the wrong category when trying to find billing information, the error rate is 40%. You can also calculate error rate per step in multi-step tasks to pinpoint exactly where users go wrong.
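That calculation is trivial to encode. A minimal sketch, using the worked numbers from the example above:

```python
def error_rate(errors: int, attempts: int) -> float:
    """Percentage of participants who made an error on a task."""
    if attempts <= 0:
        raise ValueError("attempts must be positive")
    return errors / attempts * 100

# Worked example from the text: 12 of 30 participants clicked the
# wrong category when trying to find billing information.
print(error_rate(12, 30))  # 40.0
```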
What is the relationship between error rate and agreement rate in card sorting? High error rates in tree testing often trace directly back to low agreement rates in the card sort that informed the navigation structure. If only 45% of card sort participants agreed on where a content item belongs, it is predictable that users will navigate to the wrong place when that ambiguous placement is implemented. Comparing error rate data from tree tests with agreement rate data from card sorts reveals which IA decisions need revisiting.
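Cross-referencing the two datasets can be sketched as a simple join. The item names, rates, and thresholds below are hypothetical; the point is that items combining low sort agreement with high navigation errors rise to the top of the revisit list.

```python
# Hypothetical card-sort agreement rates and tree-test error rates (%)
# for the same content items.
agreement = {"Billing": 45.0, "Profile": 90.0}  # % of sorters who agreed
error = {"Billing": 40.0, "Profile": 6.0}       # % of wrong endpoints

def revisit_candidates(agreement, error, agree_floor=60.0, error_ceiling=20.0):
    """Items with low sort agreement that also show high navigation errors."""
    return [
        item for item in agreement
        if agreement[item] < agree_floor and error.get(item, 0.0) > error_ceiling
    ]

print(revisit_candidates(agreement, error))  # ['Billing']
```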