Why Rubrics Fail as a Means of Measuring Documentation Quality

by Tom Johnson on Oct 5, 2011
categories: technical-writing

Alice Jane Emanuel has an interesting post that details her methods for measuring the quality of documentation. The post consists of notes from a webinar she gave on the subject. Alice writes,

... I have never seen anything like what I envisage in my head, which closes the argument by creating a weight or optimal rating for each necessary element in the technical communication being reviewed. When you start to consider necessary elements you can look for concrete things, gauge how much they are needed, and look at how well that need is met.

Some of the categories she assesses include:

  • Document structure
  • Reference and navigation
  • Graphics and other visual elements
  • Accuracy and grammar
  • Terminology
  • Consistency
  • Clarity
  • Task orientation
  • Completeness

She explains,

Depending on what your document requires, you set a weight for each element in each category, with 1 as low and 5 as high...

For the full talk, see Measuring quality — The talk — Comma Theory.
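
To make the weighting concrete, here's a minimal sketch (in Python) of how a weighted score like this might be computed. The categories, weights, and scores below are hypothetical placeholders I made up for illustration, not values from Alice's actual worksheet.

```python
# A minimal sketch of weighted rubric scoring. The categories, weights, and
# scores are hypothetical -- this is one reading of the approach, not
# Alice's actual worksheet.

categories = {
    # category: (weight 1-5, score 1-5)
    "Document structure":                 (4, 3),
    "Task orientation":                   (5, 4),
    "Accuracy and grammar":               (3, 5),
    "Graphics and other visual elements": (2, 2),
}

weighted_total = sum(weight * score for weight, score in categories.values())
max_possible = sum(weight * 5 for weight, _ in categories.values())

print(f"Rubric score: {weighted_total}/{max_possible} "
      f"({weighted_total / max_possible:.0%})")
```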

I think her method is a good example of a functioning rubric. And I'm not singling out her approach at all. It's just that her post made me think more about rubrics in general. I have mixed feelings about them.

Before I became a technical writer, I worked for four years as a composition instructor (two years as a graduate instructor and two years as regular faculty). In both positions, the rubric always reared its head because you had to have some set of criteria for evaluating student essays. Students wanted to know how they would be graded, and teachers wanted to avoid accusations of subjectivity.

For composition, the rubric usually assessed writing based on argument, organization, source integration, language, and a few other elements. Here you can see the rubric I put together when I was a composition teacher in Egypt.

Rubrics are popular in almost any event where judgments are made. For example, we're seeing a similar set of criteria in the deathmatch going on at MindTouch:

Here our judges will compare two competitors' product help and support communities against each other in the following criteria; User Experience, Social, Engagement and Findability. (See Death Match Round 1: Mozilla Versus IE.)

One of my issues with using rubrics to assess anything -- documentation, essays, support sites -- is that, at least for me, judgment is not so mechanical. Almost nothing can be broken down into a list of parts that, when properly assembled and in the right balance, create a perfect whole. I guess I lean more toward the "holistic rubric" camp, which uses more general categories, each with a number of subpoints that aren't evaluated and scored individually.

Rubrics may provide a good reminder for writers as they're creating documentation. For example, including visuals and other illustrations in help is probably a good idea, as is adding a glossary, an index, and a table of contents. It's also important to write simple sentences, avoid jargon, and articulate concepts clearly.

But if we want to measure the effectiveness of something, we should measure it against its goal. The goal of documentation is not to score perfectly on a rubric. The goal of documentation (and any writing) is to meet the needs of its audience. The questions we should be asking of documentation are as follows:

  • Did the documentation meet the information needs of the audience?
  • Were users able to find answers to their questions quickly?
  • Did the documentation reduce the number of support calls coming in?
  • Did the documentation increase usage of the product?
  • Were users pleased with the documentation?
  • How would users rate the documentation?

When weighed against this goal, the other criteria -- completeness, accuracy, grammar, terminology, clarity, etc. -- lose importance. The documentation might be full of spelling errors. It might be poorly organized. It might be ultimately incomplete and even lack visuals. But if it gets the job done and satisfies the users' needs, then the documentation has achieved its goal and should be rated higher than documentation that checks off a list of rubric categories.
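
If you could collect that kind of user data, the measurement itself would be simple. Here's a minimal sketch, again with hypothetical numbers -- survey ratings and support-call counts -- standing in for real user feedback.

```python
# A minimal sketch of measuring documentation against its goal rather than
# against a rubric. All numbers are hypothetical.

user_ratings = [4, 5, 3, 5, 4]      # survey responses on a 1-5 scale
support_calls_before = 120          # monthly support calls before the docs shipped
support_calls_after = 80            # monthly support calls after the docs shipped

avg_rating = sum(user_ratings) / len(user_ratings)
call_reduction = (support_calls_before - support_calls_after) / support_calls_before

print(f"Average user rating: {avg_rating:.1f} / 5")
print(f"Reduction in support calls: {call_reduction:.0%}")
```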

Granted, it's much harder to evaluate documentation based on how well it meets its end goal, because it requires you to somehow contact actual users. But I think that's the direction any rubric should go -- toward the user's perspective rather than an internal one.

This discussion leads to the larger problem of tech writing teams not having regular or close communication with users. If we did have more contact with users, our rubrics would naturally reflect more of a user's perspective. Since most of us lack this contact, the rubrics we create have a noticeable absence of user input.
