Auditing

The norms

Auditing in the UX world means testing a product against a set of norms or heuristics (the names are often used interchangeably). It shares some characteristics with design critique: both are forms of expert review, but while a design critique evaluates how well the product was designed, an audit determines whether specific requirements were met.

Before auditing, determine which norms you will audit against and what constitutes a pass. If you’re looking for an example, WCAG has a great “How to Meet WCAG” document, showing what counts as a pass or a failure for each accessibility guideline.
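
To make the pass/fail framing concrete, here is a minimal TypeScript sketch of how a norm and its pass criteria could be recorded. The type names and the normPasses helper are my own illustrative assumptions, not part of WCAG or any existing tool.

```ts
// Hypothetical shape of a norm with explicit pass criteria.
// None of these names come from WCAG or an existing tool; they only
// illustrate "decide what constitutes a pass before you audit".

type Verdict = "pass" | "fail" | "not-applicable";

interface PassCriterion {
  id: string;          // e.g. a WCAG success criterion number such as "1.4.3"
  description: string; // what exactly must be true to count as a pass
}

interface Norm {
  name: string;              // e.g. "Contrast (Minimum)"
  source: string;            // e.g. "WCAG 2.2, Level AA"
  criteria: PassCriterion[];
}

interface CheckResult {
  criterionId: string;
  verdict: Verdict;
  evidence?: string; // URL/path, screenshot reference, etc.
}

// A norm passes only when every criterion has a recorded result
// and none of those results is a failure.
function normPasses(norm: Norm, results: CheckResult[]): boolean {
  return norm.criteria.every((criterion) => {
    const result = results.find((r) => r.criterionId === criterion.id);
    return result !== undefined && result.verdict !== "fail";
  });
}
```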

Examples of norms you can use for the purpose of auditing:

Developing your own norms

The norms I’ve listed should cover the vast majority of auditing cases, but there may be specific situations where you’d want to develop your own set of UX norms. When considering it, start by asking yourself “do I really wanna do this?”. Sometimes it will be necessary. An example that comes to mind is testing whether content is written in domain-specific plain language, a requirement more specific than anything in the norms mentioned above.

Your new norms should be:

  • different from existing ones — that comes as no surprise
  • specific enough — some overlap between the elements of a set of norms is to be expected, but keep it to a minimum
  • tested — verify them before putting them to use

When you think about it, one can’t help but notice that norm development, in the UX world and outside of it, is strikingly similar to the development of psychometric tools. If you’d like to learn more, pick up Psychological Testing by Anne Anastasi and Susana Urbina.
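
As one illustration of that similarity, a common psychometric-style check is inter-rater agreement: have two evaluators apply a draft norm independently and see whether they agree beyond chance. Below is a minimal TypeScript sketch of Cohen’s kappa for that purpose; the function and the sample data are purely illustrative assumptions, not a prescribed verification procedure.

```ts
// Minimal Cohen's kappa sketch: do two evaluators, applying the same
// draft norm independently, agree beyond what chance would produce?
// Each array holds the verdicts one evaluator gave to the same items.

function cohensKappa(raterA: string[], raterB: string[]): number {
  if (raterA.length !== raterB.length || raterA.length === 0) {
    throw new Error("Both raters must label the same, non-empty set of items");
  }
  const n = raterA.length;
  const labels = Array.from(new Set([...raterA, ...raterB]));

  // Observed agreement: fraction of items both raters labelled identically.
  let agreements = 0;
  for (let i = 0; i < n; i++) {
    if (raterA[i] === raterB[i]) agreements++;
  }
  const po = agreements / n;

  // Expected agreement by chance, from each rater's label frequencies.
  let pe = 0;
  for (const label of labels) {
    const pa = raterA.filter((l) => l === label).length / n;
    const pb = raterB.filter((l) => l === label).length / n;
    pe += pa * pb;
  }

  return pe === 1 ? 1 : (po - pe) / (1 - pe);
}

// Example: two evaluators judging ten screens against a draft plain-language norm.
const evaluator1 = ["pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail", "pass", "pass"];
const evaluator2 = ["pass", "fail", "fail", "pass", "fail", "pass", "pass", "fail", "pass", "fail"];
console.log(cohensKappa(evaluator1, evaluator2).toFixed(2)); // prints "0.60" for this data
```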

Anatomy of an audit report

Like any other research product, an audit is meant for consumption both by people with technical expertise, like the researchers themselves, and by those who want actionable, plainly stated insights, like decision makers. That’s why, when I’m preparing a UX/A11Y audit, I structure it like this:

  • Mandatory elements
    • Executive summary — succinct and actionable. In rare cases (depending on the scope of the audited product) this part might exceed 500 words.
    • Heuristic evaluation — begin this section with a list of the norms used and a quantified summary of the presence and severity of the discovered issues (a minimal data-model sketch follows after this list). Next, include detailed descriptions of those issues, each with a URL/path, screenshots or video examples, and a severity score:
      • High — the user is blocked on their path to a goal, or the task gets completed in a manner incongruent with the user’s intent
      • Medium — the issue noticeably slows down progress, but is not a blocker
      • Low — the issue doesn’t count as a pass, but goes unnoticed by the user or causes only minor inconvenience
  • Good to have
    • Cognitive walkthrough — validation of whether the product flow matches the predicted mental model of its usage
    • Usability testing — it’s user-centered design after all; confirmatory usability testing will corroborate the audit’s results
    • Benchmarking — how well your product is doing compared to the competition shines a different light on the audit results
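
The sketch below is a hypothetical TypeScript data model for the heuristic-evaluation findings and the quantified severity summary mentioned above. The type names and example findings are my own assumptions for illustration, not part of any auditing standard.

```ts
// Hypothetical shape of a single audit finding; names are illustrative only.
type Severity = "high" | "medium" | "low";

interface Finding {
  norm: string;        // which norm/heuristic the issue fails
  severity: Severity;
  location: string;    // URL or path where the issue occurs
  description: string;
  evidence?: string[]; // screenshot or video references
}

// Quantify the presence and severity of discovered issues,
// the kind of summary that opens the heuristic-evaluation section.
function summarize(findings: Finding[]): Record<Severity, number> {
  const counts: Record<Severity, number> = { high: 0, medium: 0, low: 0 };
  for (const finding of findings) counts[finding.severity]++;
  return counts;
}

// Example usage with made-up findings
const findings: Finding[] = [
  {
    norm: "Visibility of system status",
    severity: "medium",
    location: "/checkout",
    description: "No progress indicator during payment processing",
  },
  {
    norm: "Contrast (Minimum)",
    severity: "high",
    location: "/signup",
    description: "Placeholder text fails the 4.5:1 contrast ratio",
  },
];
console.log(summarize(findings)); // { high: 1, medium: 1, low: 0 }
```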

One last important thing to consider: while norms or heuristics can guide the design process, a product can only be audited once it can stand on its own merit — after production deployment, or as an MVP at the earliest. Audits are meant to be confirmatory methods, not exploratory ones.
