Using Closed Card-Sorting to Evaluate Information Architectures

Abstract
A technique for using closed card-sorting to evaluate candidate information architectures for a web site is described. Participants in an online card-sorting study are randomly directed to one of the architectures being evaluated. Every participant is shown the same cards, but the categories they sort them into depend on the architecture to which they were assigned.

The basic data collected are simply which cards each participant put into which groups. For any one architecture being tested, the data show what percentage of the participants put each card into each group. A better architecture is one on which participants were more consistent with one another about which group each card belongs in. The basic ‘score’ proposed for each card is the percentage associated with the ‘winning’ group (i.e., the group with the highest percentage); the higher that percentage, the better. A consistency score for each architecture tested can then be calculated by averaging these percentages across all the cards. A technique for correcting this score when the different architectures have different numbers of groups is also described.
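To make the scoring concrete, here is a minimal sketch in Python of the consistency calculation described above. The function name, the example data, and the chance-style correction for differing numbers of groups are illustrative assumptions; the article itself describes its own correction technique in the full PDF.

from collections import Counter

def consistency_score(sort_results, num_groups, correct_for_chance=True):
    # sort_results: {card name: list of the groups chosen by each participant}
    # num_groups:   number of categories offered in this closed sort
    card_scores = []
    for card, placements in sort_results.items():
        # Fraction of participants who chose the 'winning' (most popular) group.
        winner_count = Counter(placements).most_common(1)[0][1]
        card_scores.append(winner_count / len(placements))

    # Architecture-level score: mean of the per-card winning percentages.
    score = sum(card_scores) / len(card_scores)

    # Illustrative correction (an assumption, not necessarily the article's method):
    # rescale so that chance agreement (1/num_groups) maps to 0 and perfect
    # agreement maps to 1, making architectures with different numbers of
    # groups roughly comparable.
    if correct_for_chance:
        chance = 1.0 / num_groups
        score = (score - chance) / (1.0 - chance)

    return score

# Hypothetical example: 10 participants sorted two cards into one of three categories.
results = {
    "Pricing":  ["Products"] * 8 + ["About"] * 2,
    "Warranty": ["Support"] * 6 + ["Products"] * 4,
}
print(round(consistency_score(results, num_groups=3), 2))  # raw 0.70, corrected 0.55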

Using Closed Card-Sorting to Evaluate Information Architectures (PDF, 86 kb)


Comments

One response to “Using Closed Card-Sorting to Evaluate Information Architectures”

  1. Steffen

    I want to add another tool-based approach to evaluate card sorting results / navigation structures: C-Inspector (www.c-inspector.com).

    C-Inspector is a scenario-based application for testing sitemaps on their own. To perform a study, you upload your tree structure, define some scenarios (e.g., “Where would you look for today’s movie showtimes?”), and send the test link to your participants. By analyzing both quantitative and qualitative results, you can gain insight into the users’ mental models and identify possible issues with labeling or grouping.

    C-Inspector covers issues like…
    * Which categories have poor findability?
    * How much time do users spend on a specific task?
    * How high is the break-up (abandonment) rate?
    * On what navigation level do I lose most of my users?
    * When there are alternative paths, which one do users prefer? (converging branches)
    * What were the users’ choices (paths) before they found the right category?
    * How many attempts do the users need?

    Further questions? Just send me an email: contact@c-inspector.com.

    Cheers
    Steffen
