What We Learn from Help Authoring Tool Surveys
Recently the HAT Matrix blog (aka the Mad Hatter) published the results of a tools survey that included 590 participants. Respondents were asked which authoring tools they used on a regular basis. They answered as follows:
- Flare: 40%
- FrameMaker: 33%
- RoboHelp: 27%
- Author-it: 13%
- Dreamweaver: 13%
- HTML Help Workshop: 12%
- MadPak: 12%
- WebWorks ePublisher: 11%
- Helpware FAR: 7%
- RoboHelp for Word: 7%
- Adobe Tech Comm Suite: 7%
I also recently published an informal survey in my blog's sidebar. I didn't intend for it to gather so many votes, but 987 people responded to the question about which authoring tool was right for them. They answered as follows:
- Flare: 46%
- RoboHelp: 18%
- FrameMaker: 18%
- Author-it: 14%
- Word: 14%
- Other: 12%
In 2007, WritersUA also conducted a tools survey. They asked participants to rate their satisfaction with various tools, and from those ratings they calculated tool usage. Tool usage among the 606 respondents was as follows:
- Acrobat: 93%
- SnagIt: 57%
- RoboHelp: 56%
- FrameMaker: 48%
- Paint Shop Pro: 44%
- Dreamweaver: 41%
- Photoshop: 40%
- Captivate: 31%
- HTML Help Workshop: 29%
- WebWorks Publisher: 27%
- MadCap Flare: 25%
What can we learn from these tool surveys? According to some, these surveys prove nothing. For example, John Daigle on the Yahoo HATT listserv writes:
Basically, the results [of these surveys] measure a self-sampled population. Whoever sees the announcement of the survey gets to decide whether they want to participate or not. There's no control whatsoever. (In fact, in most of the surveys I mentioned, you could vote more than once!!!) Understand. I am not criticizing any of these surveys. Surveys are fun. I just don't think folks should get hysterical and make of them more than they are. Even the use of the word "statistical" is a stretch. The thing I DON'T want to see is some crazy extrapolation from any of those 3 surveys that suggest the results have anything to do with MARKET SHARE.
John's main criticism of the surveys is the lack of representative sampling of those who participated. Many share the same opinion.
However, creating a survey with a representative sample of technical communicators across the globe is not only extremely difficult; it's next to impossible.
To draw a representative sample, you would first need to define the whole population so you could tell whether your sample reflected it. According to Tom Gorski, STC's Director of Communications and Marketing:
Neither STC nor the Bureau of Labor Statistics can determine with any accuracy what that number [of technical communication professions in the world] is. I've seen estimates in the neighborhood of a couple hundred thousand. Part of the problem, of course, is that Technical Communicator is not a widely recognized position or profession. We're working on convincing the BLS to change their definition of technical writer to the broader term that is more fitting for today's challenges, but that will take time.
In other words, no one knows how many technical communicators there are in the world. The problem stems partly from the slippery and widely varying names for the profession. Should editors, illustrators, information architects, usability analysts, instructional designers, web designers, managers, e-learning professionals, and others who don't literally call themselves technical writers or technical communicators be counted in the sample?
In short, because it's impossible to know the whole, it's also impossible to extract a representative sample of the whole. As such, any survey that attempts to gather a representative sampling of technical communicators is in trouble.
Beyond sampling errors, the survey questions themselves are prone to error. If you limit the possible answers to a finite set of choices, respondents may be forced into selections that don't represent their true answers. On the other hand, if you leave every question open-ended, the results are difficult to sort through and interpret.
Another challenge with surveys is avoiding assumptions. The question, "Which help authoring tool is best for you?" is different from the question, "What help authoring tool(s) do you use on a regular basis?" Preference doesn't require usage, and usage doesn't require preference. Hence, we shouldn't assume the answers to the two questions will be the same.
For example, I spent an entire year using RoboHelp at a company that required it, when I really wanted to try something else. Many writers use whatever tools their company purchases for them, requires them to use, or can afford. Using a tool doesn't mean the writer prefers it. Conversely, I may lust after a certain help authoring tool and know that it's right for my needs, yet lack the money, infrastructure, or time to implement it. So my preference doesn't imply usage. See how those two survey questions can look similar but yield different results?
Despite all the tricky errors inherent in surveys, I think tool surveys still provide value. The rate of error decreases as the number of respondents increases, because a larger sample comes closer to representing the whole (similar to how a poll of 10,000 random people is more reliable than a poll of 10). As long as you aren't targeting a small group from one sector of society, you can begin to see general trends and rough estimates.
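To put a rough number on that intuition, here's a minimal sketch (my own illustration, not part of any of the surveys) of the standard margin-of-error calculation for a proportion. It assumes a simple random sample, which none of these self-selected surveys actually are, so treat the output as a best-case bound rather than a real error rate. The 46% figure is the Flare share from my sidebar poll; the function name and the sample sizes are chosen only for illustration.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p
    measured from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Example: 46% of respondents chose Flare in the sidebar poll.
# How much does the margin shrink as the sample size grows?
for n in (10, 100, 987, 10_000):
    moe = margin_of_error(0.46, n)
    print(f"n = {n:>6}: 46% +/- {moe * 100:.1f} points")
```

Even with the random-sampling caveat, the pattern is the point: the margin shrinks roughly with the square root of the sample size, which is why a poll of 987 or 10,000 people tells you far more than a poll of 10.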
I wonder how people who demand more rigorous, unbiased survey sampling would interpret the value of informal usability tests. For practical reasons, usability tests are often done with a small group of four or five people, with a usability expert observing the users for an hour or so as they use an application. Despite the sampling errors and the limited scope of these tests, usability experts can gain about 80% of what they would normally obtain from a more comprehensive, expensive usability analysis involving eye-tracking devices, keystroke logging, and screen recording software.
In other words, surveys aren't exact, but perhaps they can give us a rough idea of what we're trying to measure.
With those disclaimers, here's what I'm taking away from the tools surveys:
- The breadth of tools in use shows how hard it is to rely on a single one -- people are using tons of them. Still, five main authoring tools dominate: Flare, RoboHelp, Author-it, FrameMaker, and Word.
- MadCap is not only a major competitor to RoboHelp, but it now seems to have an edge over it.
- Despite all the hype for it in the tool market, few people seem to be using DITA.
- MadCap seems heavily invested in how its tools are perceived. Judging from its marketing campaigns surrounding the surveys, the company must believe that survey results (and tool popularity) influence a writer's decision to buy a particular tool.
- People can't change their toolset overnight (due to legacy documentation, training, deadlines, and other variables), but they do seem to be gradually shifting in the direction of Flare.
- Because generating a printable PDF is the most common help authoring deliverable (according to another question in the HAT Matrix survey), tools that can produce this output will probably be in greater demand.
- You can use a variety of tools to get the same job done.
Finally, there is one observation that no one can deny: tools surveys are inflammatory among both vendors and users. This is no doubt because the surveys are influential, despite their flaws.
Are we done with surveys for a while? I think a good majority of people, especially those entering the field, will still ask which help authoring tool is best and which one they should learn. Links to these three surveys would be a good starting point for a response.