Warner writes, “Writing is thinking. Writing involves both the expression and exploration of an idea, meaning that even as we’re trying to capture the idea on the page, the idea may change based on our attempts to capture it. Removing thinking from writing renders an act not writing” (11). Because writing involves thinking and reflecting, our engagement in writing changes us. Warner says, “If writing is thinking — and it is — then it must be viewed as an act of our own becoming” (73).
Although I don’t work in road ecology or traffic engineering, the author somehow pulled me through 300 pages on this topic. He managed this not just through vivid language and diction, but by personally visiting places and telling stories about the specific challenges that animals, “carers,” forest service workers, and others faced as freeways and highways bisected and dissected their environments.
To use an analogy, suppose you're a barista making espresso. An AGI-capable robot trained as a barista can make every drink a regular barista can make, but twice as fast. Further, the android barista can create exquisite espresso art in any shape customers request, wowing them and making the experience novel. Soon the human barista is replaced. After all, the paying customer would rather pay $2.50 for a robot-made latte than $5.00 for a human-made one, especially when it tastes the same.
Most code samples in documentation are fairly basic, which is by design. Documentation tries to show the most common use of the API, not advanced scenarios for an enterprise-grade app whose complexity would easily overwhelm developers. (At that point, you end up with a sample app.)
With AI tools built directly into your authoring tool or IDE (such as VS Code), fixing simple doc bugs can become a mechanical, click-button task. Here’s the approach to fixing simple doc bugs:
(Note: The fact that I’m writing a book review on this topic might seem odd given that I usually focus on tech comm topics. However, I document APIs for getting map data into cars, so I sometimes read books related to the automotive and transportation domain. I also run a book club at work focused on these books.)
During the past few weeks, I’ve felt like my brain’s RPMs have been in the red zone. Granted, the constant stream of chaotic political news hasn’t helped—but regardless of current political events, I’m frequently checking the news, my email, and chat messages and operating in a mode that isn’t great. Reading long-form books has proven to be difficult. I run a book club at work focused on automotive and transportation books, and it took me two months to make it through a single book (granted, it was a 300-page historically dense book, but still).
“Biohacking” might be a pretentious cyber term for what is otherwise a straightforward experiment. For 10 days, I tracked my food and exercise levels while also wearing a continuous glucose monitor (CGM) to track my glucose levels. I then used AI to pair up the food + exercise with the glucose readings and perform an analysis about triggers for glucose spikes and recommendations to avoid them.
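The pairing-and-analysis step can be sketched in a few lines of code. This is a minimal, hypothetical version of what the AI did for me: the spike threshold, the look-back window, and the data shapes are all my own assumptions for illustration, not the actual analysis.

```python
from datetime import datetime, timedelta

SPIKE_RISE = 30              # mg/dL rise that counts as a spike (assumed threshold)
WINDOW = timedelta(hours=2)  # look-back window to attribute a spike to a meal

def find_spike_triggers(readings, meals):
    """Pair glucose spikes with the most recent meal eaten in the window.

    readings: list of (datetime, mg_dl) tuples, sorted by time
    meals:    list of (datetime, description) tuples, sorted by time
    """
    triggers = []
    for i in range(1, len(readings)):
        t, value = readings[i]
        _, prev = readings[i - 1]
        if value - prev >= SPIKE_RISE:  # sudden rise between readings = spike
            # latest meal eaten within the window before the spike
            candidates = [m for m in meals if t - WINDOW <= m[0] <= t]
            if candidates:
                triggers.append((t, value, candidates[-1][1]))
    return triggers
```

With a morning reading of 95 mg/dL, a bagel at 8:15, and a 140 mg/dL reading at 9:00, this pairs the spike with the bagel. The real value of the AI analysis was doing this across 10 days of messy, free-form logs rather than clean tuples.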
I want you to act as my AI stream journal (similar to a bullet journal), for the day. In this chat session, I’ll log 3 kinds of notes: tasks, thoughts, and events. Tasks are to-do list items. Thoughts are random ideas or notes I have. Events consist of food eaten, exercise, or descriptions of my internal states. The point is to have an easy way to dump all the scattered information in my head into a central log that you organize and analyze on my behalf.
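The structure I'm asking the AI to maintain could be sketched as a simple sorting routine. The `task:`/`thought:`/`event:` prefixes below are my own convention for illustration; in the real chat session, the AI infers the type from free-form text.

```python
from collections import defaultdict

def organize_log(lines):
    """Sort raw journal lines into tasks, thoughts, and events."""
    log = defaultdict(list)
    for line in lines:
        kind, _, note = line.partition(":")
        kind = kind.strip().lower()
        if kind in ("task", "thought", "event"):
            log[kind].append(note.strip())
        else:
            log["thought"].append(line.strip())  # unlabeled lines default to thoughts
    return dict(log)
```

The point isn't the code; it's that a central log with three buckets is trivial to organize, which is exactly why it works well as a brain dump.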
API doc survey: Do you test out the API calls used in your doc yourself?
One of the questions in my API doc survey was as follows:
Do you test out the API calls used in your doc yourself or rely mostly on QA validation and trust that the code works?
From 43 responses, here are the results:
This question wasn't such a good one because it could be interpreted in different ways. Some people said that QA handles the validation, to which I say: of course. If you work in a software shop that uses tech writers instead of QA to validate the code, then you're in trouble. I was mostly wondering whether tech writers tested out -- in an informal, minimal way -- all the calls that are in the documentation.
It's also important to keep in mind the API one is testing. It's easier to test out calls in a REST API if all of the endpoints are straightforward and somewhat independent. It's more difficult to test out classes in a Java API if the classes have interdependencies and workflows.
Additionally, many times you can't test a class or endpoint well without having sufficient data to return the right information, so there can be a lot of setup and preparation involved (during a period when time is running short).
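Even the informal, minimal testing I had in mind can be partly scripted. Here's one sketch: check that a documented example response still matches what the live endpoint returns. The endpoint fields and example are hypothetical, and this is only a field-presence check, not full validation.

```python
import json

# Hypothetical documented example for some endpoint in the reference docs.
DOCUMENTED_EXAMPLE = '{"id": 42, "status": "active", "name": "demo"}'

def missing_fields(response_text, documented_example):
    """Return documented fields that are absent from an actual response body."""
    documented = json.loads(documented_example)
    actual = json.loads(response_text)
    return sorted(k for k in documented if k not in actual)
```

In practice you'd feed it the body from a real call (for example, via `urllib.request.urlopen`) for each endpoint in the docs; an empty list means the documented example is still accurate.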
Here are a few of my favorite responses from this survey question:
Yes, I always test API calls -- at least the basic, and sometimes with some combination of parameters. I publish with request and response examples. If we support JSON and XML I try to include examples of both. If there are any issues I report them and follow up, and the doc doesn't go out, if there's something significant, until it's fixed. So if the doc says it works, it works.
Whenever possible I like to be able to run and test code and write examples for use in the documentation. However this is not always possible, in some roles there simply hasn't been time, in other roles developers or QA write longer examples and I try and work with those people.
I try to test if I have time but sometimes my involvement is late and limited, and sometimes the API is too complicated to be able to quickly put together tests.
Is testing the hallmark of a good technical writer?
I think one of the hallmarks of a good technical writer is the ability to test the code. It's almost always a best practice to walk through all the tasks and scenarios yourself.
However, besides the difficulty of setting up the right data for the scenarios to work, you also face challenges with the basic permutations of the ways various endpoints can be used together. When I worked at Badgeville, there was tremendous variety in the way you could use different endpoints. It wasn't possible to test all the different ways the API could be used. That's a challenge for API documentation overall. Sure, you design the API to accommodate some core tasks, but users can combine endpoints, parameters, and other options in far more ways than you can anticipate. This makes testing more difficult. You only test a fraction of the ways the API can be used.
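The combinatorial point is easy to make concrete. The endpoints and optional parameters below are made up, but even this tiny API produces more distinct calls than anyone would realistically test by hand:

```python
from itertools import product

# Made-up endpoints and optional parameters, just to show how quickly
# the space of possible calls grows.
endpoints = ["players", "rewards", "activities", "leaderboards"]
params = {
    "sort": [None, "asc", "desc"],
    "include_deleted": [None, True],
    "page_size": [None, 10, 50],
}

# Every endpoint crossed with every combination of optional parameter values.
combos = list(product(endpoints, *params.values()))
print(len(combos))  # 4 endpoints x 3 x 2 x 3 = 72 distinct calls
```

And that's before chaining endpoints together into workflows, which multiplies the space again.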
About Tom Johnson
I'm an API technical writer based in the Seattle area. On this blog, I write about topics related to technical writing and communication — such as software documentation, API documentation, AI, information architecture, content strategy, writing processes, plain language, tech comm careers, and more. Check out my API documentation course if you're looking for more info about documenting APIs. Or see my posts on AI and AI course section for more on the latest in AI and tech comm.
If you're a technical writer and want to keep on top of the latest trends in tech comm, be sure to subscribe to email updates below. You can also learn more about me or contact me. Finally, note that the opinions I express on my blog are my own points of view, not those of my employer.