Value arguments for docs and tech comm (Part II)
Value as financial ROI
In July 2017, easyDITA published an infographic on Measuring the Value of Technical Communication, based on a survey of 149 technical communicators.
This infographic kicked off some debate and helped me connect with academic research on the topic. In response, Bob Watson wrote a post, also titled Measuring the value of technical writing, evaluating the infographic's claims. Watson is an Assistant Professor of Technical Communication at Mercer University, and his research focuses on measuring and improving technical communication methods and practices, so measurement is a topic he's passionate about.
Watson says “in a data-driven world, no data is easily equated with no value” — hence the importance of measurement. Watson brings up research from Ginny Redish in a flagship article, “Adding Value as a Professional Technical Communicator” (Technical Communication, 1995).
Here I’ll jump into Redish’s article a bit (outside the scope of Watson’s summary) and expand on some of the details. Redish defines value in terms of measurable return on investment (ROI):
Managers are interested in the “bottom line”; in costs and benefits, in return on investment. We add value when we contribute to improving an organization’s return on its investment in what we do. In considering this balance, we can focus either on reducing the investment or on improving the return on the investment.
Redish's approach of measuring value in terms of financial impact set the general direction for the field for many years to come. Despite her focus on ROI, Redish readily acknowledges the difficulty of this approach. She notes that budgets within organizations are often independent, so savings in one group's budget might not matter if your ROI is calculated from another group's budget.
For example, Tech Docs probably saves Support a great deal of money (without Tech Docs, Support would have to write a lot more KB articles), but if Support and Tech Docs are driven by different budgets, grouped in separate divisions, making arguments about ROI might not resonate with these managers. Redish explains:
The manager who must get the manual to the printer on a certain schedule and within a certain cost is not responsible for whatever havoc the manual might cause later on …
Redish then explores the myriad ways you can measure the impact of documentation, dividing the measures between those that “increase benefits” and those that “reduce costs.” Some measurable benefits of documentation include the following:
- More sales
- Increased productivity
- More proposals won
- More awards won
- More documents (forms) returned
- More completed documents (forms) returned
And so on. Measures for reduced costs might include the following:
- Fewer support calls; lower support costs
- Less need for training; lower training costs
- Fewer requests for maintenance; lower repair costs
- Less time needed for translation; lower translation costs
- Less downtime for workers
- Less effort (time, lines of code, rework) needed when technical communicators are involved early than when they are not
And so on. (The lists are much longer in the article — I excerpted a selection only.)
Watson notes that tech writers have been unable to successfully follow through with these measurements, and so the issue of proving value wasn’t merely closed with this twenty-year-old article. Why is measurement so problematic? Watson says that one problem is that we aren’t starting with a dire situation where tech docs don’t exist. Instead, we’re already in a pretty good spot. He visually depicts this as follows:
He says that many of the measures (as reported by the easyDITA survey) which tech writers see as the value of tech comm — increased customer satisfaction, reduced support costs, and reduced training and development costs — position tech writing as "a service or cost center — one that takes value instead of adds value." The value position as a cost center is problematic because it leads managers to try to further reduce costs:
When you're a cost center, the only way to improve (or be recognized as improving) is by reducing costs — invariably YOUR cost. Is that what the company really wants, or do they really want the value you add (but don't realize it because you haven't made it visible)?
However, as managers attempt to reduce costs, they end up sacrificing the benefits of tech comm. If you reduce training costs, for example, you also reduce customer satisfaction.
In evaluating whether tech writers add value or simply reduce costs, he says adding value is a more attractive value proposition but more difficult to measure. He sums up the problem with this maxim:
The attraction of a cost-focus is that cost is easy to quantify, but as they say in statistics, not everything that can be counted, counts; and not everything that counts, can be counted [easily].
Value from knowledge creation
A number of other authors have undertaken similar efforts to study the value of tech comm, and articles and books are abundant on this topic. For example, in “Moving from Information Transfer to Knowledge Creation: A New Value Proposition for Technical Communicators” (Technical Communication, 2002), Michael Hughes sees the value proposition of tech comm not merely as information transfer but rather knowledge creation. Knowledge is information in action; it is an interpretation of the information or product within the user’s context (hence usually interpreted in the form of actionable tasks). Hughes depicts this knowledge creation as follows:
Hughes explains that much of the knowledge inside an organization is tacit, meaning so innate that the engineers assume everyone knows it; this tacit information "enables acts of unconscious competence." At the other extreme, users often don't even know what to ask. They are unaware of what they don't know — they have unconscious incompetence. It is in this space — between unconscious competence and unconscious incompetence — that technical communicators create new knowledge that provides tremendous value. Hughes writes,
technical communicators find themselves between experts who cannot articulate what needs to be known (or even that it exists to be known) and users who do not know to even ask.
The metaphor Hughes paints is "akin to a psychologist helping a patient unlock a hidden memory" (rather than a metaphor of a Nuremberg funnel, where knowledge is easily poured into users' heads). In other words, technical writers help articulate the knowledge assets in the first place; they don't merely transcribe them from engineers.
Hughes argues that regardless of whether external users read the documentation, the articulation of this knowledge provides a strong value since this knowledge is what gives companies a competitive advantage. These knowledge assets, which make a company smarter and more efficient, are what accelerate company growth. Hughes cites research from a book on organizational learning in which the head of planning for Royal Dutch/Shell asserts:
The ability to learn faster than your competitors may be the only sustainable competitive advantage.
Because knowledge is the lifeblood of an organization, the knowledge assets that technical communicators create have strong value, even if that value is hard to measure. For example, technical writers might promote this knowledge to inform product design, which then increases sales. The contributed knowledge assets may be even more important to the organization than they are to external users. Hughes explains:
This aspect of technical communicators’ contribution can be even more valuable than the documentation they provide end users. By expanding the technical communicator’s value proposition to include the creation of knowledge assets that improve the competitiveness of the enterprise, his or her value persists regardless of whether end users ever even look at the documentation.
This makes any detailed effort to measure clicks from users on doc pages a futile and misguided attempt to measure the value of documentation. It fundamentally misunderstands the value of knowledge creation that technical communicators contribute to an organization.
Hughes wrote his article in 2002 when the U.S. Bureau of Labor Statistics (BLS) Occupational Outlook Handbook defined technical writers as professionals who “take technical information and make it understandable to those that need it.” Hughes wanted to change the definition to describe technical writers as follows:
Technical communicators negotiate meaning within development communities and between those communities and user contexts, and they capture the resulting consensus as knowledge assets.
In 2010, the Bureau changed the definition to say that technical writers “communicate complex and technical information more easily. They also develop, gather, and disseminate technical information …” (Technical writers — Bureau of Labor Statistics). In other words, we’re not merely “information packagers”; we are “knowledge creators,” as Hughes argues. I’m sure Hughes and others helped transform this definition.
Value from the perceptions of others
An article by Saul Carliner et al., "What Measures of Productivity and Effectiveness Do Technical Communication Managers Track and Report?" (Technical Communication, 2014), explains the futility of trying to measure tech comm's impact. After undertaking a massive study around metrics and productivity measures, with surveys across a broad range of managers in different companies and industries, and then rigorously analyzing the results, the authors explain that measuring the financial value of tech comm quantitatively is "not done. And in the few instances when it is performed, the measures that have been developed are, at best, imprecise."
If measurement is done, it's often because upper management asks for it. Tech comm managers say informal reporting is more common and more effective. Carliner et al. write:
… because organizations develop individual communication products to achieve a unique set of objectives, quantifying the value added requires a unique methodology in each situation, one tailored to the specific value proposition of the communication product. As a result, although a general approach exists, no specific, standard methodology for calculating the value added really exists. When performed, these calculations of value require a significant data collection effort and a similarly complex calculation.
Even when organizations invest this effort, the results are only approximations, as accounting systems that track revenues and expenses can only track transactions that actually occurred. When technical communication products contain expenses, the costs that are saved were never incurred. So the accounting systems have nothing to track. The best that technical communicators can do in such situations is show trends in spending before and after publication of the content to suggest that the cost savings have, indeed, occurred (Carliner, 1998).
Furthermore, organizations often cannot unambiguously attribute that savings to the publication of the technical content. For the oft-cited benefit of technical communication of a reduction in calls to a help line, Spilka (2000) notes that several alternative explanations could exist, such as end users refusing to call because of previous bad experiences with the help desk or finding a helpful co-worker to replace both the help line and the manual (Kay, 2007). Another problem with demonstrating the value added by individual technical communication products and services is that the data can only be collected long after publication, sometimes as long as 6 months to 2 years.
After the authors describe the impracticality of measuring the ROI on tech comm, they look at what managers actually track and report on, such as pages written, projects completed, hours per page, hours of instruction, number of screens documented, and so on.
When it comes to measuring value, they conclude that —
The evidence from this study suggests that, instead of quantifiable measures, the most common means of assessing the productivity and effectiveness of technical communicators is word-of-mouth and service quality — that is, perceptions of the quality and responsiveness of the service provided by technical communicators to the people who hire them, not the users who ultimately benefit from the end products.
The idea that perceptions of quality by the people who hire technical writers (rather than by the end users who use the products) carry more weight is fascinating, and it's an idea I'll return to later.
Carliner et al. discourage any quantitative ROI measurement and instead suggest the following for practitioners:
When assessing how others in your organization perceive the value of the technical communication services provided by your group, primarily focus on the word-of-mouth flowing through the organization and the quality of the service provided.
Although Redish's article focuses on ways to determine ROI, she also suggests interacting with other groups to gather general perceptions (not actual metrics) around cost savings. If these groups have actual data, great, but even without data, Redish suggests asking them the following questions:
- How much documentation counts in decisions to buy
- How much more customers would pay for a useful document or for a product that had a useful manual or online help
- How much clients, subject-matter specialists, or managers believe they save by having technical communicators write, edit, or review documents instead of doing it themselves (in terms of hours or money)
- How much time and effort technical specialists and managers believe they save by having technical communicators on a project team from the beginning
Although Carliner et al. conclude that measuring the ROI of docs is impractical, the idea keeps resurfacing each time this discussion comes up. For example, at a recent STC dinner, I asked someone for their thoughts on the value issue, and their first response was to suggest that I start measuring ROI because that's the only language business managers speak.
Imagine an attempt to actually measure ROI of tech docs. In a perfect experiment, suppose we have a new complex project we’re releasing, and as part of our measurement experiment, we eliminate docs for the product for the first month. Do we start logging the number of support cases? What happens when support adds some KB content (a form of docs) — already we’ve started to distort our measures. How do you measure the customers who turned down the project due to poor docs in the first place? How do you measure the time lost from other internal engineers who had to set up extensive meetings to train others on the product? How do you measure the time spent by field engineers who have to hold lengthy client sessions where they configure and set up the product for the client, and then write their own version of instructional notes?
Now suppose you release docs and measure hits. Even if you're counting up metrics on doc usage, how do you measure the relative value of those hits based on customer impact? Maybe your one top-earning customer with the largest contract decides to skip the project due to its difficulty, and instead you spend more time with amateur hobby developers who flood the forums with questions.
In summary, attempting a quantitative approach to measure ROI will likely be unsuccessful; instead, we should focus on shaping the perceived value of docs in the minds of business leaders.
Value from focusing on complex (not secretarial) work
The most recent issue of Technical Communication explores the value question as well, in Emily January Petersen's "Articulating Value Amid Persistent Misconceptions About Technical and Professional Communication in the Workplace" (2017).
Petersen interviews 39 women in tech comm to see whether others in their companies value their roles and contributions as technical writers. She finds that many tech writers are essentially treated like secretaries, and their work is minimized as merely fixing grammar or making other cosmetic improvements. Petersen writes,
Overall, according to the interview data, TPC [Technical and Professional Communication] is misconceived as cosmetic, secretarial, unarticulated across disciplines, unnecessary, invisible, and unquantifiable, which means such workers often feel expendable.
By “unarticulated,” Petersen means the technical writer’s improvement or creation of the content isn’t noticed. Others often assume the tech writer merely got the content that way from engineers. Petersen explains,
Colleagues may engage with user guides or online help as much as customers do, but they might also think that Shirley’s [the tech writer’s] hand in creating that documentation was unnecessary and, therefore, she is adjunct to the real work of the company and product. She [Shirley] explained that this is frustrating because, “I sit in sales now, and I can hear them talking about my work every single day. That’s part of how they sell the product. … I know they use it.” She sees them using the documentation, but she has also heard that what she does is unnecessary.
Petersen's findings echo the contributions to knowledge that Hughes argued for. She concludes that tech writers have to make a constant case for their value, correcting stereotypes and misperceptions across disciplines to make their value known and understood. One way to correct these stereotypes is by focusing on areas of complexity and collaboration. Overall, Petersen says "articulation is and must be a continued concern of the field as a whole."
I’ve only scratched the surface on research done around value. As I said earlier, this topic has been the dominant theme in tech comm since the beginning. It is a question that largely remains unresolved — if tech writers provide such value to the organization, why is their value so regularly questioned? And how do we establish our value to business leaders in persuasive ways?
In the next sections, I’ll build on the research here to try to provide some answers.