Although I don’t work in road ecology or traffic engineering, the author somehow pulled me through 300 pages on this topic. He managed this not just through vivid language but by personally visiting the places he writes about and telling stories about the specific challenges that animals, “carers,” forest service workers, and others faced as freeways and highways bisected and dissected their environments.
To use an analogy, suppose you’re a barista making espresso. An AGI-capable robot trained as a barista can make every drink a regular barista can, but twice as fast. Further, the android barista can create exquisite espresso art in any shape that customers request, wowing them and making the experience novel. Soon the human barista is replaced. After all, the paying customer would rather pay $2.50 for a robot-made latte than $5.00 for a human-made one, especially when it tastes the same.
Most code samples in documentation are fairly basic, which is by design. Documentation tries to show the most common use of the API, not advanced scenarios for an enterprise-grade app whose complexity would easily overwhelm developers. (At that point, you end up with a sample app.)
With AI tools built directly into your authoring tool or IDE (such as VS Code), fixing simple doc bugs can become a mechanical, click-button task: point the AI at the bug report and the affected file, review the suggested change, and commit it.
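As a rough sketch of the same idea outside the IDE (assuming the OpenAI Python SDK; the model name, file path, bug report, and prompt are placeholders, not a prescribed setup):

```python
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical bug report and doc file -- illustrative only.
bug_report = "Step 3 references a 'Save' button that the UI renamed to 'Apply'."
doc_file = Path("docs/getting-started.md")

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a technical editor. Return the corrected file only."},
        {"role": "user", "content": f"Bug report: {bug_report}\n\nFile contents:\n{doc_file.read_text()}"},
    ],
)

# Review the diff before committing -- the goal is mechanical, not unsupervised.
doc_file.write_text(response.choices[0].message.content)
```

An IDE-integrated tool collapses this same loop into a button click: the context goes in, a suggested edit comes back, and you approve or reject it.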
(Note: The fact that I’m writing a book review on this topic might seem odd given that I usually focus on tech comm. However, I document APIs for getting map data into cars, so I sometimes read books in the automotive and transportation domain. I also run a book club at work focused on these books.)
During the past few weeks, I’ve felt like my brain’s RPMs have been in the red zone. Granted, the constant stream of chaotic political news hasn’t helped, but regardless of current events, I’m frequently checking the news, my email, and chat messages, operating in a scattered, distracted mode. Reading long-form books has become difficult. I run a book club at work focused on automotive and transportation books, and it took me two months to get through a single book (admittedly a 300-page, historically dense book, but still).
“Biohacking” might be a pretentious cyber term for what is otherwise a straightforward experiment. For 10 days, I tracked my food and exercise while also wearing a continuous glucose monitor (CGM) to track my glucose levels. I then used AI to pair the food and exercise log with the glucose readings, analyze what triggered glucose spikes, and recommend ways to avoid them.
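The pairing step itself is easy to sketch in code. Assuming two CSV exports with the columns shown (the file names, column names, and the 140 mg/dL spike threshold are all illustrative), something like this matches each glucose spike with the most recent food or exercise entry:

```python
import pandas as pd

# Hypothetical exports: CGM readings and a timestamped food/exercise log.
cgm = pd.read_csv("cgm_readings.csv", parse_dates=["timestamp"])       # timestamp, glucose_mg_dl
log = pd.read_csv("food_exercise_log.csv", parse_dates=["timestamp"])  # timestamp, entry

# Flag spikes: readings above 140 mg/dL (a common post-meal guideline).
spikes = cgm[cgm["glucose_mg_dl"] > 140]

# For each spike, find the most recent log entry within the prior 2 hours.
paired = pd.merge_asof(
    spikes.sort_values("timestamp"),
    log.sort_values("timestamp"),
    on="timestamp",
    direction="backward",
    tolerance=pd.Timedelta("2h"),
)
print(paired[["timestamp", "glucose_mg_dl", "entry"]])
```

In practice I handed this pairing work to the AI, but the point is the same: line up what I ate or did against what my glucose did next.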
I want you to act as my AI stream journal (similar to a bullet journal) for the day. In this chat session, I’ll log three kinds of notes: tasks, thoughts, and events. Tasks are to-do list items. Thoughts are random ideas or notes I have. Events consist of food eaten, exercise, or descriptions of my internal states. The point is to have an easy way to dump all the scattered information in my head into a central log that you organize and analyze on my behalf.
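If you wanted the same structure outside a chat session, the log reduces to a tiny data model. Here’s a minimal sketch; the field names and helper function are my own invention, not part of the prompt:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Literal

# The three note kinds from the prompt above.
Kind = Literal["task", "thought", "event"]

@dataclass
class Entry:
    kind: Kind
    text: str
    timestamp: datetime = field(default_factory=datetime.now)

log: list[Entry] = []

def add(kind: Kind, text: str) -> None:
    """Append a note to the day's stream, in the order it occurred."""
    log.append(Entry(kind, text))

add("event", "Oatmeal and coffee for breakfast")
add("task", "Review the API release notes")
add("thought", "Maybe the journal should tag entries by energy level")
```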
It’s hard to imagine that documentation that checks all the boxes in the quality checklist wouldn’t also score highly on user satisfaction surveys. Can you honestly see documentation that legitimately satisfies all of these criteria falling short with users?
And yet, to achieve this level of information quality, we didn’t have to rely on constant user surveys to gather feedback. By identifying best practices for content design (specifically for API/developer documentation), we can increase documentation quality in more self-sufficient, self-directed ways.
In my initial go-around with the quality checklist, I tried to move from qualitative to quantitative measurement by assigning a score and weight to each criterion and then dividing the points achieved by the total points possible.
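To make that concrete, the scoring amounted to a weighted checklist sum. The criteria and weights below are illustrative placeholders, not the checklist’s actual items or values:

```python
# Hypothetical criteria and weights -- the real checklist had many more items.
criteria = {
    "Has an overview": {"weight": 3, "achieved": True},
    "Reference material follows a consistent structure": {"weight": 2, "achieved": True},
    "Tasks are detailed in steps": {"weight": 2, "achieved": False},
    "Error messages are documented": {"weight": 1, "achieved": False},
}

earned = sum(c["weight"] for c in criteria.values() if c["achieved"])
total = sum(c["weight"] for c in criteria.values())
print(f"Quality score: {earned}/{total} = {earned / total:.1%}")
# -> Quality score: 5/8 = 62.5%
```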
However, in practice, I found that assigning scores for each section felt arbitrary and subject to personal whims. I don’t think others found the scores meaningful either. Instead, just having a checklist of criteria to consider was valuable enough. That’s why I stripped out the section on quantification in later revisions to the content: I want the advice I give here to be helpful in practice, not just in theory.
I’m not saying that no approach to quantifying documentation would work, just that mine didn’t. Also, recognize that the quality checklist has no official data to support it. These best practices come from industry experience and from patterns that I and others have observed within the realm of developer documentation.
This is likely the problem with my approach: who’s to say that documentation needs each of these criteria to succeed? It’s possible that documentation might still be findable, accurate, relevant, and clear without many of these more concrete components. I don’t have any user-based research showing that docs need these elements: an overview, reference material that follows a consistent structure, tasks detailed in steps, documented error messages, and so on. And honestly, I doubt any checklist can prove its objective value.
Remember that user surveys, which I criticized earlier as problematic, should both complement and confirm the criteria in the checklist. Docs that rate highly against the quality checklist should also rate more highly in satisfaction surveys than docs that score poorly. But again, establishing that kind of correlation through surveys relies on a host of factors (objective, unbiased, unambiguous survey questions posed to a large sample of a representative audience across domains), which is difficult to pull off on a regular basis.
Overall, I’m confident that few would object to most of the criteria in the checklist. Readers and writers would agree on enough of the criteria for the checklist to serve as a practical guide for improving documentation quality. The criteria should also be seen as a first draft: a starting point to be checked against industry standards, confirmed against docs that users love, and refined through ongoing feedback.