One of the forms of writing I like most is the essay, a form with strong roots in the sixteenth-century writer Michel de Montaigne. Montaigne saw the essay as a kind of attempt or test of an idea and his judgement about it (see Montaigne's Moment).
Since I like the essay format, it's not surprising that I also like to test content. In fact, testing content is one of the hallmarks of good technical writing. By testing content, we arrive at documentation that is not only more accurate but that also captures details of the process that get missed when a tech writer merely transcribes notes from a developer.
First, I should probably define what I mean by testing. In the context of technical writing, a test is a confirmation of how something works. Your task as a technical writer is mainly to explain how a technical product works in the context of a user's goals, so it should come as no surprise that testing your instructions against the software is a core activity.
Testing is so fundamental to good tech writing that you might think everyone already does it. Yet testing is not as easy as one might expect, particularly because software in development is rarely fully functional. That is, the software usually doesn't work as it should at the time you're writing the instructions.
Without fully functional software, the results of testing may be hard to analyze. Is the software broken, or is there a misunderstanding in how it works, or is the functionality not yet coded?
For example, suppose you're documenting how to create a gizmo. The engineering spec notes that a certain call with a specific configuration yields the gizmo. As a technical writer, do you merely copy over the engineering spec, assuming it's as accurate and detailed as it needs to be?
You could, and whether it works or not is something the user will find out. But you could also test it first. Testing may require you to set up a web development environment or some IDE where you can run code. Or it may involve just clicking buttons in a GUI and assessing the results. You usually have a development or staging environment of some kind where you can play around with sample data.
When you test something, I guarantee that much of the time it won't work. You'll get in there and find that the parameters are confusing or that the process requires prerequisites you didn't know about. As you actually go through the process, you'll find yourself asking all kinds of questions you didn't consider before. Most importantly, in testing -- in doing -- you get a first-hand perspective on what the user will experience. With this perspective, you can add notes that you wouldn't think to add without having done the test.
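To make the idea concrete, here's a minimal sketch of what testing documented steps can look like in code. Everything here -- the `create_gizmo` function, its parameters, the returned status -- is invented for illustration; the point is that you run the documented steps and verify the promised outcome instead of assuming it.

```python
# Hypothetical sketch: create_gizmo is an invented stand-in for
# whatever API or procedure your documentation describes.

def create_gizmo(name, size="medium"):
    """Stand-in for the documented call under test."""
    valid_sizes = ("small", "medium", "large")
    if size not in valid_sizes:
        # The kind of surprise testing uncovers: an undocumented
        # constraint on a parameter.
        raise ValueError(f"size must be one of {valid_sizes}, got {size!r}")
    return {"name": name, "size": size, "status": "created"}

# Follow the documented steps exactly as written, then check the
# outcome the docs promise rather than taking it on faith.
result = create_gizmo("demo-gizmo", size="large")
assert result["status"] == "created", "docs promise a 'created' status"
```

Even a toy check like this surfaces the questions a transcribed spec never answers: what sizes are valid, what the return value actually looks like, and what happens when a parameter is wrong.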
One reason I like testing is that I can usually answer my own questions about functionality. Rather than ask a developer for endless details, I can run tests and find the answers myself. Usually this works great, but sometimes it leads to open-ended deliberations. If something doesn't work, you're left with a few options.
Even if the software does work, you have to be careful of false positives. You may have managed to get the result you wanted, but through an unintended, inefficient route. This is especially true with code samples.
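Here's a small illustration of that trap, with invented sample data. The "wrong" route skips the documented sorting key, yet the tiny dataset happens to produce the right answer anyway -- a false positive that messier, more realistic data exposes immediately.

```python
# False-positive sketch: all data invented for illustration.

# Suppose the docs say to sort records by their numeric id (the
# second field). This sample data happens to be in the same order
# alphabetically, so a lazy default sort "works" too.
records = [("alice", 1), ("bob", 2), ("carol", 3)]

by_id_accidental = sorted(records)                      # skips the documented key
by_id_documented = sorted(records, key=lambda r: r[1])  # documented route

# Passes -- but only by luck of the sample data.
assert by_id_accidental == by_id_documented

# Less tidy data breaks the accidental route immediately.
records = [("zoe", 1), ("alice", 9), ("bob", 2)]
assert sorted(records) != sorted(records, key=lambda r: r[1])
```

The lesson: a passing result only validates your code sample if the test data could actually distinguish the right route from a lucky one.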
When something isn't working as it should, and you've triple-checked to make sure you're doing things correctly, you can either ask an engineer or log a bug about the problem (using a system like JIRA).
Before I log a bug, though, I usually confer with an engineer, because sometimes a quick conversation can clear up an issue. Logging a bug creates more overhead to deal with (more people look at the issue, and it then has to be managed through a JIRA life cycle).
After a brief conversation, if the engineer says it might be a bug, I prefer to log the bug myself. In the issue, be precise about the conditions that cause the error. Note the steps to reproduce it, and include details about the scenario, the browser, the data -- whatever is relevant. Detail is critical because an engineer may not otherwise be able to reproduce the error you're seeing.
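For what it's worth, here's the shape of a bug report I aim for. Every value below is invented -- the point is the level of detail that makes the problem reproducible:

```
Title: Gizmo creation fails when Size is set to "large"

Environment: staging build 2.4.1, Chrome on macOS
Steps to reproduce:
  1. Log in as a user with the "editor" role.
  2. Open the Gizmo dashboard and click "Create gizmo."
  3. Set Size to "large" and click Submit.
Expected: the gizmo appears in the list with status "created"
Actual: a 400 error, "unrecognized configuration"
Frequency: reproducible every time with the steps above
Attachments: screen recording of the failure
```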
If the engineer can't reproduce the error, then in the engineer's mind the bug doesn't really exist. Reproducibility is key to problem solving. In fact, in my experience, if an engineer can't reproduce a bug, he or she often dismisses it entirely. If this happens, consider creating a short video (using Jing) to show the bug. This way the engineer can replay the video to see the bug, and maybe he or she will spot something about your configuration that you missed.
Unfortunately, you can't always test the product. I once worked as a technical writer documenting a storage array system. There wasn't exactly any test storage array I could play around with. The storage array cost hundreds of thousands of dollars and included advanced RAID arrays that were beyond the scope of what I was documenting.
In that situation, to create a "run book" for the network administrator (my task), I had to interview a network engineer and later ask him to review what I wrote. It was kind of painful and tedious -- not my favorite job as a technical writer. But still interesting.
Another scenario where testing isn't possible is when the product is so complicated or requires so much setup that it's beyond the tech writer's scope to test. In some scenarios you may be reduced to editing something that engineers write. Think about a product that involves a nuclear reactor or airplane engines. In these cases, you'll need to make sure quality assurance signs off on the instructions and gives their certification.
Another scenario that's hard to test is a scenario involving real data. In your tests, you usually use simple data to make sure basic functionality is there. But what if your users will use data hundreds or thousands of times that size?
Well, you tested it, and it worked, but you didn't test it with the right data. Testing with more extensive data, however, falls more within the realm of the Quality Assurance (QA) team. A QA engineer can load up a system to simulate real data volumes and determine how the product behaves under that stress.
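As a toy illustration of that gap (all data invented), here's a sketch in which an approach passes a quick sanity check on small data but degrades badly as the data grows -- exactly the kind of behavior a tech writer's quick test won't catch but QA's load testing will:

```python
import time

def dedupe_quadratic(items):
    """Remove duplicates with a list membership test (O(n^2))."""
    seen = []
    for item in items:
        if item not in seen:
            seen.append(item)
    return seen

def dedupe_linear(items):
    """Same result with a set membership test (roughly O(n))."""
    seen = set()
    out = []
    for item in items:
        if item not in seen:
            seen.add(item)
            out.append(item)
    return out

small = list(range(10)) * 2   # the kind of data a quick doc test uses
big = list(range(5000)) * 2   # closer to a realistic volume

# Both approaches agree on the result...
assert dedupe_quadratic(small) == dedupe_linear(small)

# ...but only timing at scale reveals the difference.
start = time.perf_counter()
dedupe_linear(big)
fast = time.perf_counter() - start

start = time.perf_counter()
dedupe_quadratic(big)
slow = time.perf_counter() - start

print(f"linear: {fast:.4f}s, quadratic: {slow:.4f}s")
```

Both functions pass the small test equally well; nothing in a ten-item run hints that one of them will fall over in production.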
Another element I rarely test as a technical writer is regression. When a new feature is introduced, how does the new feature impact all the previous features? We often add a new section to our documentation to accommodate the new feature without going back to test all the previous instructions to make sure they still function as designed.
However, again, this kind of regression testing is usually beyond the scope and domain of technical writers. We don't have infinite bandwidth for testing, and if we did, we'd be doing the job of QA as well as tech pubs. Still, regression is good to be aware of. It's a best practice to test at least a few areas that might be affected by the new feature.
Testing and tech pubs have always had a close relationship. Testing often relies on specific test cases (steps to verify a feature) that look a lot like user instructions (steps to accomplish a task). People who generate documentation automatically do so by converting test scripts into end-user documentation.
I'm sure I'm not the only one who has noticed some striking parallels between test cases and end-user documentation, only to wonder if there's not some duplication of effort between the two. To what extent can tech writers leverage test case steps to facilitate documentation?
While I don't think automated documentation is practical, I'd say that when possible, read through the test cases from Quality Assurance for the sake of information gathering. If you can look through test cases and get a better feel for how a product is supposed to work, it will make it that much easier to write end-user documentation.
In looking at test cases, remember that a QA engineer writes test cases to confirm whether a feature works or not, and a tech writer writes instructions to help users achieve a goal. Theoretically a user's goals and a product's features should be in alignment -- but sometimes an application is general enough to allow for a wide variety of uses, so the two purposes aren't always closely aligned.
I've found that writing and testing documentation almost always leads me to logging bugs. At a former job, I once expressed some concern that our software had too many bugs in it, and as a result, we were losing the confidence of the end-users. End-users started to expect that problems and other defects were commonplace with the software we released.
A QA engineer explained that in agile, testing is often done by the users themselves. He said, "Why should I test something when I have thousands of end-users who will test the product for me when we release it?" In a waterfall methodology, this idea would be insane. But in an agile methodology, the idea is to act quickly on the feedback you receive from users (e.g., bug reports) and update your software in a much more immediate timeframe. In this paradigm, every user is a tester.
This also means that if users don't complain about a feature, there are no bugs associated with that feature -- or at least none worth fixing, from a developer's point of view.
For technical writers, this move toward agile end-user testing means that software may have many more bugs in it than we originally assume. Developers and QA departments anticipate and expect bug reports, and as product experts writing about every detail of the product, tech writers should expect to log a lot of bugs.
So when you find yourself spending a lot of time in JIRA, recognize that this is all part of the agile process. (By the way, it's incredibly satisfying to find bugs. It means you've identified an issue that developers, managers, and other engineers missed. You come to know the product in more intimate detail than others in the company!)
As you log bugs and point out gaps in products, you can help the team create better products. In return, you'll be seen by the engineers as a more valuable member of the team, not just an afterthought or footnote. When they respect you more, they'll review your content more readily, and the whole process improves from this rapport and interaction.
In sum, it's a best practice to test everything. Stand by what you write, and don't publish anything if you can't verify that your instructions produce the result you promised. I can't always follow this ideal, but it's written in my tech writing philosophy.
I'm a technical writer based in the San Francisco Bay area of California. Topics I write about on this blog include technical writing, authoring and publishing tools, API documentation, tech comm trends, visual communication, technical writing career advice, information architecture and findability, developer documentation, and more. You can learn more about me here.