Technical Communication Metrics: What Should You Track?

In 2004, when I returned from a teaching stint in Egypt and began working as a copywriter for a health company in Clearwater, Florida, my manager insisted that I track something related to my writing. We decided that I would track word count, because this was the easiest thing to track.

Each week, I graphed the number of words I published, and during a weekly meeting, I held up my graph. If the number decreased for the week, I formatted the arrow red. If it increased, I formatted the arrow black.

My graphs regularly alternated between black and red arrows, and I found the whole exercise somewhat amusing and ridiculous. But I went along with it, because everyone was tracking something. We all had to create these little charts that we held up in weekly status report meetings.

Despite my cavalier attitude toward this word count tracking, I can tell you that I wrote far more words than my manager could process. After working there several months, I had built up such a mountain of content — press releases, radio pitches, product descriptions, newsletter articles, pamphlets, e-mail campaigns — that the sheer size of my output was undeniable. I do think that holding up those silly little graphs each week had some impact on my determination to write.

Lately I have been trying to figure out the right metrics for my role as a technical writer. A lot has been written about metrics in technical communication. Many technical writers have struggled to define meaningful metrics, whether because managers required them to or on their own initiative. One of the most common goals with metrics is to connect writing activities to financial figures, since this allows technical writers to establish value in a quantitative way that speaks to senior leaders.

A few possible metrics

Despite the need for these metrics, coming up with a sound way to measure the value of technical writing is a problem that remains as elusive as ever. The following are several possible ways to measure the value of technical writing:

Support costs. No group has more metrics associated with it than help desks. They meticulously track the number of calls coming in, the product the call is about, and the estimated cost of each call. If a software application receives 3,000 calls a month, and each support call costs $25, that application costs the company about $75,000 a month in support.

How documentation affects support costs can be tricky to estimate because a lot of factors come into play:

  • Support calls spike when a product is initially released.
  • Support calls drop over time as users become more familiar with the application.
  • Not all support calls are resolvable through help material.
  • It’s hard to determine the influence of help material without a similar application that has no help material.

Helpfulness ratings. Another technique might be to embed a “Was this topic helpful?” question in your documentation. Then count the number of people who indicated that the topics helped them. You could measure your own success based on the number of yes responses versus no responses. If you equated each “yes” response with the cost of a support call, you could make a case that documentation is saving the company that amount in support costs.

For example, if 100 people indicated that topics were helpful, and half of those people might have called the support center without the help ($25 a call), that means documentation contributed about $1,250 in avoided support costs.

Unfortunately, people are always more willing to engage in feedback when it’s negative rather than positive. We love to complain more than praise, so I’m not sure how accurate this method would be.

Page hits. Page hits to help material aren’t a direct indicator of success, but what if you’re publishing web articles that double as marketing collateral? If you take the tips and how-to’s from the help, you can publish these on your corporate blog. Hits to these articles can be quantified and converted into a financial figure.

For example, let’s say you publish an article that receives 10,000 hits. Google might charge 25 cents a hit in a pay-per-click campaign to generate an equal number of hits, so the financial worth of that article is at least $2,500 in equivalent ad spend, beyond the value of the content itself. If you publish 50 articles like this a year, you’re contributing a value of about $125,000.

Word count. You can also estimate the value of your contribution by measuring your word output and then multiplying that output by a cost figure. For example, let’s say that in one week you write 5,000 words. In the freelance world, a writer might charge about $75 to write 200 words, so you can estimate the value of your contributions at about $1,875 for the week. Of course, this method assumes that research, SME interviews, explorations of the system, and other non-writing activities are all folded into that word output.
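
If you want to play with these back-of-the-envelope numbers, here is a minimal Python sketch (my own illustration, not drawn from any source cited in this post) that rolls the monthly support-cost figure and the three estimates above into a few small functions. The dollar figures (a $25 support call, 25 cents per pay-per-click visit, and $75 per 200 freelance words) are just the assumptions from the examples; substitute your own.

    # Back-of-the-envelope documentation value, using the sample figures above.
    # Every constant here is an assumption; swap in your own numbers.
    SUPPORT_CALL_COST = 25.00              # assumed cost of one support call, in dollars
    PPC_COST_PER_HIT = 0.25                # assumed pay-per-click cost to buy one visit
    FREELANCE_RATE_PER_WORD = 75.00 / 200  # assumed freelance rate: $75 per 200 words

    def deflected_call_savings(helpful_votes, deflection_rate=0.5):
        """Value of 'Was this topic helpful?' yes votes, assuming some fraction
        of those readers would otherwise have called support."""
        return helpful_votes * deflection_rate * SUPPORT_CALL_COST

    def article_traffic_value(page_hits):
        """Value of article traffic, priced as the equivalent ad spend."""
        return page_hits * PPC_COST_PER_HIT

    def word_output_value(words_written):
        """Value of raw word output at a freelance per-word rate."""
        return words_written * FREELANCE_RATE_PER_WORD

    print(3000 * SUPPORT_CALL_COST)        # monthly support bill: 75000.0
    print(deflected_call_savings(100))     # 100 "yes" votes: 1250.0
    print(article_traffic_value(10000))    # 10,000 hits: 2500.0
    print(word_output_value(5000))         # 5,000 words: 1875.0

None of this captures quality, which is the bigger problem discussed later in this post; it simply makes the assumptions behind the dollar estimates explicit.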

Most of these measures connect with a financial value, but if establishing financial value isn’t important, you could still measure a great many things related to your role.

What you track, you focus on

One of the problems with metrics is that we technical writers are so detail-oriented that we often get discouraged by our inability to track our contribution to the bottom line with a fair degree of accuracy. Consequently, we often don’t track anything at all — while still insisting that our contributions make a significant impact.

If we dismiss metrics because they are slippery and inaccurate, we are selling ourselves short. Here’s the secret about metrics: what you track, you soon care deeply about. At work I started tracking the number of words I publish every week. As a result, I’ve noticed that I have become more focused on my output. During the day, if I haven’t published anything, I start to feel lazy and unproductive. I have to get something out there, something written and published. I also don’t let half-written things languish. I see them through to publication.

But here’s the problem with tracking word count: most of the non-writing activities I do lose their value, because I’m no longer tracking them. That matters when you consider that technical writers don’t actually write much of the time. I don’t know whether this is a consequence of meeting-filled corporate life or just the nature of the job. But take a look at this graphic that Mike Landry sent me last week:

What technical writers actually do

As you can see, technical writers spend very little time writing, maybe at most 10 percent of the day. Often, the more senior you are, the less you write. This graphic accurately describes my life as a technical writer. It’s easy to let my day fill up with non-writing activities, such as the following:

  • Attending scrum meetings
  • Meeting with developers to talk about how an application works
  • Discussing content that interns are writing
  • Figuring out if I should jailbreak my iPhone to record mobile app screencasts
  • Discussing IP omissions with released applications
  • Talking about social media strategies we don’t have (e.g., should we engage shallowly on multiple channels, or deeply on one?)
  • Working on team mission statement
  • Printing out latest Intercom magazine issue on information architecture
  • Figuring out how to add captions to linked images in Mediawiki
  • Reviewing the latest changes to the wiki
  • Gathering accomplishment highlights for previous month
  • Reviewing forum posts from volunteer testers for applications in beta
  • Editing existing help to include gotcha notes and known limitations clauses
  • Strategizing about upcoming projects and wondering how to get funding
  • Responding to various e-mail messages to clear out inbox

And before you know it, it’s 5:30 pm and the day is over without having written or produced anything substantial. However, if you have a metric you’re tracking, you suddenly become accountable. All those non-writing activities lose their value. If your goal is to publish 1,000 words a day, everything else you do is no longer the first priority. That’s what’s interesting about metrics: what you choose to track changes how you prioritize the activity. Therefore, you must think carefully about what you want to track.

The question, then, is not only what can you track, but what should you track. I am convinced that everyone should track something. Tracking can help you dramatically improve your performance in whatever you track. This year, reaching for some meaningful metric, I am tracking anything I can easily measure. Since I play both technical writing and marketing roles at work, I have more things available to track: words published, overall site traffic, individual article traffic, articles published, volunteer word count, RSS followers, Twitter followers, Facebook likes, Google+ joins, the number of social media updates, publication dates, and the number of volunteers who joined projects.

Of all these metrics, what is the most important to focus on? What is it that turns the wheel and makes all of these numbers move forward? What is the driving force that accelerates social media growth, new visitors, and subscribers? Published content plays a huge role in driving these numbers — good quality content that aligns with our readers’ and users’ interests. The more I publish, the more each of these numbers goes up. So are words published the most important thing to track?

The consensus is against word count

Despite the grueling focus that tracking word count provides, the literature on the subject is decidedly against word count as a metric in tech comm. In fact, I have not read a single article or blog post that recommends tracking word count at all. Here’s a bit of research on the subject that I’ve culled from Intercom, Technical Communication Journal, LinkedIn, and elsewhere:

Research indicates that no industry standards are available for technical writer productivity rates. Some practices, such as page counts, have proven to be counter-productive in our experience. If a writer is evaluated by number of pages, page counts may tend to increase to the detriment of quality. In many projects, reducing page count should be the goal. Page counts also do not take into account the varying complexity levels of different deliverables; realistically, it takes longer to produce a page of highly technical material compared to user help. (“Measuring Productivity,” by Pam Swanwick and Juliet Leckenby. Intercom. September/October 2010. p. 9. URL: http://intercom.stc.org/2010/09/measuring-productivity/)

One group sees productivity metrics as dangerous. They once tracked pages per year per writer, but found the algorithm of little value. They believe that focusing their efforts on customer satisfaction is more important than counting departmental pages-per-week throughput. (“Documentation and Training Productivity Benchmarks,” by John P. “Jack” Barr and Stephanie Rosenbaum. Technical Communication Journal. Volume 50, No. 4, Nov 2003. P. 470)

The obvious, easily measured metrics are generally not very useful. For instance, there’s a temptation to measure technical writers by the gross output or superficial productivity—pages per day or topics per hour, respectively. These metrics are seductive because they are easy to calculate. But it’s an axiom of management that people will focus on whatever is measured. If you judge people by page count, they will produce lots and lots of pages. (Many of us succumbed to the “make the font bigger” approach in high school to fill out required pages for writing assignments.) If you measure writers by the number of topics they produce, you can expect to see lots and lots of tiny topics. Furthermore, this raw measurement of productivity doesn’t measure document quality. (“Managing Technical Communicators in an XML Environment,” by Sarah O’Keefe.  Scriptorium.)

At Sabre Computer Reservation Company, for example, Blackwell (1995) shows how task analysis by a team of professional writers resulted in reduction of the number of pages in one manual from 100 to 20 – and the consequent savings in production costs of nearly $19,000, more than paying for the effort that went into the task analysis. Note too the importance of measuring writing productivity in units other than pages per unit of time – had such a measure been used here, writer productivity would appear to have plunged. In fact, the rest of the article makes clear, the shorter manual was a great improvement on the original, and resulted in improved customer acceptance of the product. (“Measuring the Value Added by Technical Documentation: A Review of Research and Practice,” by Jay Mead. Technical Communication Journal. Third Quarter 1998. pp. 361-362.)

A question posted on LinkedIn – What metrics do you use for technical writing? – drew a number of responses that also dismissed page count as a measure:

If writers are evaluated on the number of pages they produce, they’ll give you as many pages as they can churn out, not as many as the deliverable needs in order to be optimally useful to readers. Douglass H.

I would first agree that page count has no place in the discussion. There is frequently an inverse relationship between quantity and quality — at least when viewed in the context of two versions of the same document. Paul M. A.

Measuring “page-count” is ridiculous. Writers can churn out pages and pages of drivel and gobbledygook to generate “pagecount”. Editors can delete pages and pages of content to generate “pagecount”.  A better measure is the readers’ view of the documentation. Do they like it? Can they use it to solve their problems?  Dave G.

As you yourself (and several responders) have commented already, page count is a terrible way to measure productivity, especially as shorter often means better. For example, I spent several months *reducing* a user guide from 240 pages to 80, removing the repetition and waffle… The only useful metrics for documentation are those that measure comprehension. Julian M.

Finally, perhaps nothing is as persuasive against tracking word count as this simple Dilbert cartoon.

Metrics need to measure quality

Besides the points raised earlier, measuring word count has other drawbacks. If you’re focusing only on word count, you’re not focusing on the problem. You may just be publishing more and more text, without analyzing whether the text is solving a business need or problem.

If you’re going to measure word count (quantity), you need to factor some other important elements into your metrics. Jack Barr and Stephanie Rosenbaum define productivity with this equation:

(Quality × Quantity) / Time = Productivity

(“Documentation and Training Productivity Benchmarks,” by John P. “Jack” Barr and Stephanie Rosenbaum. Technical Communication Journal. Volume 50, No. 4, Nov 2003. p. 471.)

This is the problem with measuring only word count: without a measure of your word count against some sort of result, there’s no way to determine whether you’re being productive. You can only be productive if you make progress toward an intended result. Merely publishing more content isn’t necessarily a worthy goal in itself. The goal needs to tie to a larger objective, such as reducing support costs, increasing customer satisfaction, or improving user performance. These are all measures of quality.
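
To make the equation concrete, here is a small Python sketch of it. The sketch is my own illustration rather than anything from Barr and Rosenbaum, and the quality term is an assumed proxy (imagine the fraction of “yes” votes on a “Was this topic helpful?” survey).

    def productivity(quality, quantity, time_spent):
        """Barr and Rosenbaum's definition: (Quality x Quantity) / Time.
        quality    -- e.g., fraction of 'yes' helpfulness votes, 0.0 to 1.0 (assumed proxy)
        quantity   -- e.g., words or topics published
        time_spent -- e.g., hours worked"""
        return (quality * quantity) / time_spent

    # Two hypothetical writers over a 40-hour week:
    fast_but_unhelpful = productivity(quality=0.3, quantity=5000, time_spent=40)  # 37.5
    slower_but_helpful = productivity(quality=0.9, quantity=4000, time_spent=40)  # 90.0

The two hypothetical writers produce similar raw word counts, but the quality weighting pushes the more helpful writer well ahead, which is exactly why quality belongs in the measure.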

To judge quality, some connection with users must be factored into the measurement; otherwise the quality measure is hollow. (Technical writers could measure the quality of each other’s work, but it’s better to have actual users’ feedback.) The problem is that measuring the effect on users is hard to do, so we often skip it. Instead, we take it on faith that publishing content will affect users in a positive way.

The widespread omission of any kind of user testing with help content has been unfortunate. In our discipline, had we been conducting user tests with our help content all along, we would have probably abandoned many unproductive forms of documentation (for example, maybe the long manual) and sought other solutions earlier (such as multimedia instruction).

Saul Carliner notes that although testing for documentation usability is important for measuring quality, “the majority of those that did said that they test less than 10% of their products” (“What Do We Manage? A Survey of the Management Portfolios of Large Technical Communication Groups.” Technical Communication Journal, 51:1. Feb 2004. p. 52).

However you do it — testing content with users in usability labs, surveying satisfaction ratings among users, or embedding surveys into help topics — it’s important to measure quality through some kind of user interaction. Did our efforts reduce support calls? Were users pleased with the help material? Did it increase usage and adoption of the application? When you can introduce a quality measurement in the metric, it makes the tracking activity meaningful.

Can we still measure word count?

I wouldn’t dismiss measuring word count altogether. Instead, I would multiply word count by the weight of the deliverable. A quick reference guide might be multiplied by 10, and a video by 20 — or something similar. It wouldn’t be hard to draw up a list of deliverables and multiply them by an appropriate weight measure to render them into output units. I mentioned quick reference guides and videos, but I could equally include other activities, such as social media updates, branding of help platforms, or formulation of strategies.
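
A minimal sketch of that weighting idea might look like the following; the deliverable names and weights are hypothetical placeholders, not recommendations.

    # Hypothetical weights per deliverable type; choose values that reflect
    # the relative effort and value in your own shop.
    DELIVERABLE_WEIGHTS = {
        "help topic": 1,
        "quick reference guide": 10,
        "video": 20,
        "social media update": 2,
    }

    def weighted_output(deliverable_type, word_count):
        """Convert raw word count into weighted output units."""
        return word_count * DELIVERABLE_WEIGHTS.get(deliverable_type, 1)

    weekly_units = (weighted_output("help topic", 3000)
                    + weighted_output("quick reference guide", 400)
                    + weighted_output("video", 150))  # 3000 + 4000 + 3000 = 10000

Choosing the weights is the real work: they encode a judgment call about how much a video or a quick reference guide is worth relative to an ordinary help topic.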

There are plenty of metrics that incorporate complicated algorithms to define a unit of work. For example, Pam Swanwick and Juliet Leckenby use the following formula:

 (# topics or pages) × (complexity of deliverable) × (% of change)
 + (% time spent on special projects)
 × (job grade)

(“Measuring Technical Writer Productivity.” Feb 2011. Writing Assist, Inc. http://www.writingassist.com/newsroom/measuring-technical-writer-productivity/)
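
Read literally, that formula could be coded something like the sketch below. The grouping, and the scales for complexity and job grade, are my assumptions rather than the authors’ exact algorithm (Pam Swanwick notes in the comments that the full formulas live in a spreadsheet she is happy to share).

    def swanwick_leckenby_units(pages, complexity, pct_change,
                                pct_special_projects, job_grade):
        """One literal reading of the Swanwick/Leckenby productivity formula.
        pages                -- number of topics or pages touched
        complexity           -- weighting for the deliverable's difficulty (assumed scale)
        pct_change           -- fraction of the content that actually changed, 0.0 to 1.0
        pct_special_projects -- fraction of time spent on special projects, 0.0 to 1.0
        job_grade            -- writer's grade, used as a multiplier (assumed scale)
        The grouping below is an assumption; see the authors' spreadsheet for
        the exact calculation."""
        return ((pages * complexity * pct_change) + pct_special_projects) * job_grade

    example = swanwick_leckenby_units(pages=40, complexity=2, pct_change=0.25,
                                      pct_special_projects=0.10, job_grade=3)  # 60.3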

Converting any kind of output into a unit of work helps you avoid deprioritizing every non-writing task. (On the other hand, if you want to prioritize writing so that you don’t end up like the Mike Landry graphic shows — with writing occupying just a sliver of your day — then you might consider counting words alone. It all depends on what you want to prioritize.)

Since measuring quality is integral to a metrics analysis, I plan to embed surveys into help material to gather feedback from users. Our usability group uses Loop11 to conduct regular usability tests (such as this affinity diagramming study). If I could embed this Loop11 survey into a template on the wiki, and then insert the template into specific wiki help pages, this would help me assess the quality of the content. I could also bring in people off the street, so to speak, and ask them to evaluate the help material based on a list of questions, but my preference is for real users to assess the help in actual scenarios.

Conclusion

In most articles, metrics are used by managers to evaluate employee performance. Or they’re used by tech comm departments to justify hiring and budgets. Few approach metrics as a way for individual contributors to establish a meaningful measure of their productivity and success. Yet for me, this is perhaps the main motivation for tracking metrics. I know that what I track, I can improve. And this improvement can take me to the next level.

I am interested to hear what metrics you track, and the results of the metrics in your company.

By Tom Johnson

I'm a technical writer working for The 41st Parameter in San Jose, California. I'm primarily interested in topics related to technical writing, such as visual communication (video tutorials, illustrations), findability (organization, information architecture), API documentation (code examples, programming), and web publishing (web platforms, interactivity) -- pretty much everything related to technical writing. If you're trying to keep up to date about the field of technical communication, subscribe to my blog either by RSS or by email. To learn more about me, see my About page. You can also contact me if you have questions.

18 thoughts on “Technical Communication Metrics: What Should You Track?”

  1. Pam Swanwick

    Hey, Tom. You cited Juliet Wells Leckenby’s and my article on “Measuring Productivity” in this post. I agree whole-heartedly with your conclusion and the difficulty in identifying helpful metrics for technical writers. As we discovered in our own research, it was quality that really mattered. However, we needed to devise a way to report our progress to management and to objectively evaluate our writers’ productivity, which is why we used the productivity formula. We explicitly noted that our metrics cannot evaluate quality.

    By the way, our productivity metrics formulas are in a spreadsheet that we’d be happy to share with your readers. It looks complicated, but having the writers maintain the spreadsheet was relatively simple. Anyone who would like a copy of the spreadsheet can contact me at pam dot swanwick at mckesson dot com.

    Please, let us know if you discover any reasonable quality metrics–the holy grail of technical writing (okay, not really, but it would help).

  2. Fer O'Neil

    I kept expecting an abrupt end to the article and a “Tell me what you think” at the end, but you actually tell us everything—very detailed post, great job. It made me think about my own ‘writing’ and how most writers don’t have enough time to write, let alone write and measure their own writing. I did a quick count and I sent 38 work emails yesterday. I didn’t complete the two articles I really want to finish (i.e., I wrote zero words) but as you discuss in the post, I did write 1,876 words in my emails. I wrote questions for SMEs, edits to marketing content, edits to the web site, and even an email where I discuss the “burden of proof” as I explain my position on not changing an existing element in an article.

    So I agree that there are many ways to measure our work, and a ‘total satisfaction’ or some type of UX metric is probably the most useful for our profession.

  3. Larry Kunz

    You’re right, Tom. Good metrics for Tech Comm are notoriously hard to find. And word count is probably the worst metric of all — if for no other reason than the more words, the more it costs to translate them.

    What if we measure the number of tasks that our documentation enables the reader to do? One: physically install a router. Two: configure the router. Three: integrate the router with your network. Of course there are challenges with this approach, beginning with how do you count tasks? (You can increase your numbers really fast if you count defining the router to each and every device in the network as a separate task.)

    Still, if we can come up with a convention for defining a task — perhaps along the lines of the way user stories are defined in agile programming — we’ll have a promising metric.

    You mentioned reducing the number of support calls. That’s a great metric if we can persuasively show how our documentation affected the number of calls. It’s a great metric because it yields a bottom-line, dollars-and-cents value that we can use to build a business case.

    Finally, as documentation moves out of the publishing arena and into the collaborative arena, I hope that the work of companies like MindTouch will help guide us to finding good metrics like popularity and reader satisfaction. Over the next few years the topic of Tech Comm metrics is going to be fascinating — and it’ll probably lead us to some surprising results.

  4. Linda Oestreich

    Tom,
    I love the way you think. This article is a little longer than some of your posts, but it kept my interest all the way through. The topic has been bouncing around tech pub management circles for more than 20 years…no one has yet found a great way to measure value in techcomm. I think getting the details from the users is the only option…but we need to engage groups outside the pubs department to make that happen. Maybe we’ll finally get it figured out in the next 20 years…

  5. Steven Jong

    Very interesting article, Tom! I have studied documentation metrics for a long time, and I know you’ve made solid points. I would add a couple.

    First, it would be easy to measure our productivity if we worked on an assembly line, like Lucy decorating chocolates in that famous comedy sketch. The closest we come to that kind of work would be if we were grinding out repetitive work, such as a stream of procedures, API descriptions, or answers to FAQs. As you point out, our productivity is actually affected by ancillary tasks. Some are just valueless interruptions, like reading email and answering the phone, plus lunch, bathroom breaks, and any number of other things. But every white-collar worker can make the same point. So what? The boss’s rejoinder to ancillary tasks is “What am I paying you for? Get back to work!” What indeed are we paid for? That’s what we have to measure. As Watts Humphrey said of software engineering, measuring output by lines of code is an awful metric, but it’s the only one around in that domain. I think there’s no way around it in our domain either: we’re paid to produce information products.

    The other point is that grinding out procedures is low-end, commodity technical writing work. What I would call technical communication is at a higher level: planning and organizing work, researching the best practice and approach, ensuring consistency among products or workers, and validating correctness and consistency. That kind of activity is not interruption, it is itself work. And such work is more valuable and lucrative than the repetitive stuff in all professions, including ours.

    1. Larry Kunz

      Steve’s comments remind me of something: while it’s hard to find valid metrics for technical communication, it’s harder still to find valid metrics that apply to individual technical communicators. The best we can hope for is to measure the productivity of teams – where, for example, one member does the “higher level” work that Steve describes and another cranks out the topics. Both activities are necessary; both will yield very different results when metrics are applied to them.

  6. Mark Lewis

    Great article, Tom. STC Suncoast misses you. Yes, Juliet’s work on productivity metrics does a great job of taking writer capabilities into account. Another good source is Keith Schengili Roberts and the years of productivity metrics he has collected on his DITA projects at AMD. Keith is a great presenter and is the keynote at the 2012 DITA NA Con (http://www.cm-strategies.com/2012/abstracts.htm#keynote). I’ve collected links to their metrics papers and other relevant papers and discussions and stored them in a LinkedIn group called DITA Metrics (http://www.linkedin.com/groups?gid=2916249&trk=myg_ugrp_ovr). Originally the group was DITA specific, but I was finding a lot of non-XML-based metrics information that was still relevant, so I now store that too. You’ll also find my white papers on DITA metrics, warehouse topics, and translation savings. I don’t typically write about productivity, but rather focus on cost models that “predict” the cost of topics and projects. I encourage everyone to share their numbers and stories, but I know it’s challenging when proprietary information is involved. Please share if you can. If you are considering moving to a component content authoring system, you can find ROI in almost all of your processes if you take the time to analyze them: authoring, reviewing, editing, and publishing. Therefore, I encourage you to read the “Roles and processes” chapter in DITA 101. And if you want to prove to upper management the savings that are possible, a great place to start is the “ROI” chapter (http://www.lulu.com/product/ebook/dita-101—second-edition/17383486?productTrackingContext=search_results/search_shelf/center/1). Check it out.

    1. Mark Lewis

      Yes. Credit to Pam Swanwick as well on that awesome “Measuring Productivity” paper. My apologies Pam, I usually think of Juliet because her table was next to mine at the Best Practices Showcase. Thanks again for all your hard work and contributions.

  7. Mohan Ramanathan

    Metrics – to measure or not? Well, this is an ongoing debate. In my company, we have a few measures because it is important for me to measure and know the effort that goes in. When you have 200 technical writers and multiple documentation types, you need to know the org’s capability in order to estimate and quote.

    Metrics have different connotations and uses. The direct use is to measure and improve. There may be others, relating to work and productivity.

    But there is another category: how do you measure effectiveness? How do you measure the drop in service calls, or the increase in sales, that came from the write-up? These are intangible, and they need to be measured and documented even more than the individual’s productivity; otherwise management has no clue why the group is there in the first place. When we don’t have the evidence, we become the first victim of cost-cutting measures. HP did this. HP dismantled its entire KM team across the globe in one sweep because it did not see any direct contribution – and then watched the number of patents drop, the quality of services decline, and so on.

    If technical writing is contributing toward reducing the cost of sales, the cost of delivery, or the cost of support, or toward increasing innovation and ideas, then we need to show how – that is why we are employed!

    1. Justin Ober

      One of the metrics that we instituted in order to track effectiveness was a “Ticket Deflection” score. It works like this: when an end-user goes through our Support Portal to open a ticket, we added an intermediate step to grab the ‘Topic’ that they enter into the ticket field. We take that input and run it into the Search functionality of our Knowledge Base. Any hits that come up are presented to the end-user BEFORE they submit the ticket request. We can measure the number of “abandoned tickets” as well as the page hits on KB articles and derive some useful information about how well our Knowledge Base is matching up with our users’ needs. It’s also helpful in terms of justification; we can say with some certainty that the KB content is freeing up our Support staff to work on ‘thornier’ issues, because the stuff that can be solved by end-user self-service via the KB is being handled outside of the ticketing procedure.

  8. Dean Kaflowitz

    I find word count to be the most useless of measurements.

    “Essential truth spoken concisely is true eloquence.”
    -Chaitanya Caritamrta

    I spend more time taking words out than putting them in. If I am measured by the word, then I will write more words and communication will suffer.

  9. John Turnbull

    One more kick at quantitative measures: content never goes away.

    The problem for the reader is that similar items of information become indistinguishable, though only one may be correct. The cost of research and verification increases quickly.

    The problem for the owner is that every text needs maintenance, if only to determine whether it should no longer be maintained. The costs other than the cost of creation are extended into an indefinite future.

    So, as some have suggested, the best tech writer may be the most ruthless tech editor.

    As for measuring, see ‘How to Measure Anything’ (Hubbard). No measurement is perfect. Imperfect measurements can tell you much more than you now know. That can represent extraordinary value.

    1. Tom Johnson

      Thanks for the tip on the Hubbard book. I agree that imperfect measurements can still be very insightful. Re maintenance costs of documentation, this is something that is especially problematic with wikis, since there’s not a strong sense of content ownership. People create a page and then leave it, while others wonder whether it’s still relevant.

  10. Pingback: Some Thoughts on DITA-based Technical Writing Metrics
