In this tutorial, you'll learn how to use AI to populate documentation templates with the source material you've gathered. For example, API overviews often follow a highly structured template. This technique can be a quick way to get an initial draft of documentation, which you can then edit and review with SMEs.
One of the most successful strategies for using AI is to pass in an abundance of source material that can augment and inform the AI's responses. In this tutorial, I cover strategies for gathering this material, including what types of documents to look for, optimal ordering, pitfalls to watch for (such as outdated or slanted information), and more.
For AI tools to generate accurate information for documentation you're writing, you need to pass in source material. This usually means meeting with engineers and product managers to gather information about the product. In this tutorial, I share prompts for turning those meeting transcriptions into organized, readable meeting summaries. These cleaned-up summaries can then serve as input context for documentation-oriented prompts.
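As a rough illustration of the transcript-cleanup idea, here is a minimal sketch of how such a prompt might be assembled. The wording is hypothetical, not the post's actual prompt; adapt it to your own meetings and AI tool.

```python
def transcript_summary_prompt(transcript: str) -> str:
    """Wrap a raw meeting transcript in instructions that ask the AI
    to produce an organized, readable summary."""
    return (
        "The following is a raw meeting transcript between technical "
        "writers and engineers about a product. Clean it into an organized "
        "summary: group points by topic, remove filler words, keep "
        "technical terms exactly as spoken, and flag any statements that "
        "sounded uncertain or speculative.\n\n"
        f"Transcript:\n{transcript}"
    )
```

The resulting summary, rather than the raw transcript, is what you would then feed into later documentation-oriented prompts.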
Just as we need regular physical training to keep from physical decline, we also need regular training in our daily work. In this post, I reflect on the parallels between physical training and work training, resolving to find a regular rhythm for daily reflection and experimentation about work issues.
One of the advantages of recent Gen AI updates is the massive token input context. When you can pass in an entire set of documentation as input, you open the door to much more powerful prompts. In these prompts, the reference docs can serve as a key source of truth. User guide content can drift out of date, but a freshly generated reference doc should, for the most part, be accurate to the code base. From this source of truth, you can do all sorts of things, such as identify outdated content in the user guide, see what's new between outputs, add links to your release notes, and more. In this article, I share 8 quality control prompts you can use when passing in your entire reference docs.
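To make the source-of-truth idea concrete, here is a minimal sketch of assembling an entire reference doc set into one quality-control prompt. The directory layout, file format, and prompt wording are all assumptions for illustration, not the article's actual prompts.

```python
from pathlib import Path


def load_reference_docs(docs_dir: str) -> str:
    """Concatenate every Markdown reference file, labeled by filename,
    so the AI can cite which file a finding came from."""
    parts = []
    for path in sorted(Path(docs_dir).glob("*.md")):
        parts.append(f"<!-- {path.name} -->\n{path.read_text()}")
    return "\n\n".join(parts)


def quality_control_prompt(reference_docs: str, user_guide_page: str) -> str:
    """Ask the AI to check a user guide page against the reference docs."""
    return (
        "The reference docs below are freshly generated from the code base "
        "and serve as the source of truth. Identify any statements in the "
        "user guide page that contradict or lag behind the reference docs, "
        "citing the reference file for each finding.\n\n"
        f"Reference docs:\n{reference_docs}\n\n"
        f"User guide page:\n{user_guide_page}"
    )
```

Whether the whole doc set fits depends on your model's context window, so you may still need to trim or batch the files for very large doc sets.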
In this essay, I explore the idea of seeing the unseen aspects of things. I discuss several authors on this topic: Rob Walker, an art critic; Viktor Shklovsky, Russian formalist literary critic; and Robert Pirsig, author of Zen and the Art of Motorcycle Maintenance. My main point is to avoid predictable, conditioned thought by pausing to ask questions about our experiences and the environment around us. In a world where prediction algorithms constantly direct us toward the most likely next word, pushing back and embracing creative ways of seeing and interpreting the world can inject new ideas and perspectives in ways that rejuvenate us.
My previous prompt engineering technique focused on creating release notes using file diffs. In this article, I explain how to use AI to link the code elements referenced in release notes and other documentation to their appropriate reference pages. The technique basically involves providing your reference documentation in HTML form along with instructions to link each code element using Markdown syntax.
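A rough sketch of how this linking technique might work in practice is below. The HTML anchor structure, base URL, and prompt wording are hypothetical; your generated reference docs will use their own anchor conventions.

```python
import re


def extract_element_links(html: str, base_url: str) -> dict[str, str]:
    """Map code element names to reference URLs, assuming the reference
    HTML marks each element with an anchor like <a id="createUser">."""
    links = {}
    for match in re.finditer(r'<a id="([A-Za-z_][\w.]*)"', html):
        name = match.group(1)
        links[name] = f"{base_url}#{name}"
    return links


def build_linking_prompt(release_notes: str, links: dict[str, str]) -> str:
    """Ask the AI to link each code element using Markdown syntax."""
    link_list = "\n".join(f"- `{name}`: {url}" for name, url in links.items())
    return (
        "Link every code element mentioned in the release notes below to "
        "its reference page using Markdown link syntax. Only use links "
        "from this list:\n"
        f"{link_list}\n\n"
        f"Release notes:\n{release_notes}"
    )
```

Restricting the AI to an extracted link list, rather than letting it guess URLs, helps avoid hallucinated links.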
The following are interesting reads or listens related to tech comm. Topics include podcasts on RAG techniques for AI content development, OpenAPI reference guides, dead-end counterarguments, Lavacon in Portland, and AI cautiousness.
In this podcast episode, I talk with Keren Brown, VP of Marketing and Value at Zoomin Software, about strategies for technical writers to demonstrate their value within their organizations, especially in light of recent layoffs in the tech industry. We discuss aligning documentation work with high-priority initiatives, quantifying the impact of technical writing, and making this work visible to executive leaders. Keren also shares insights on the changing landscape of technical writing skills in the age of AI and the role of translation in modern documentation workflows. Overall, this podcast will show you how to establish yourself as a highly valuable resource within your company.
You can use AI prompts when creating biweekly release notes for APIs by leveraging file diffs from regenerated reference documentation. The file diffs from version control tools provide a reliable, precise information source about what's changed in the release. I also include a detailed prompt for using AI to analyze file diffs and streamline the release note creation process.
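Here is a minimal sketch of the file-diff approach, assuming the regenerated reference docs are committed to Git between releases. The repo layout, tag names, docs path, and prompt wording are illustrative assumptions, not the post's exact prompt.

```python
import subprocess


def get_reference_diff(repo_dir: str, base_tag: str, head_tag: str) -> str:
    """Return the git diff of the regenerated reference docs between two
    release tags (assumes docs live under docs/reference/)."""
    result = subprocess.run(
        ["git", "diff", base_tag, head_tag, "--", "docs/reference/"],
        cwd=repo_dir,
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout


def build_release_notes_prompt(diff_text: str) -> str:
    """Ask the AI to turn the diff into drafted release notes."""
    return (
        "You are helping write API release notes. The following git diff "
        "shows changes between the previous and current regenerated "
        "reference docs. List new endpoints, changed parameters, and "
        "deprecations, grouped by category, in Markdown.\n\n"
        + diff_text
    )
```

Because the diff comes from version control, it anchors the AI's draft in precisely what changed, rather than relying on memory of the release.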
Two years ago, I started an experiment to reduce my smartphone usage. While I've reverted to using my smartphone regularly, I've learned a lot along the way. I've realized there's an inverse relationship between book reading and phone usage, and I've made a conscious effort to prioritize reading more books, especially with the reinforcement of book clubs. I've also accepted that while smartphones are necessary, it's the constant notifications that contribute to anxiety. By removing most social media and news apps, I've switched to a pull model for information, reducing my stress levels.
In response to Sarah Moir's post, 'Docs as code is a broken promise', I agree that Git's complexity can be a major hurdle for writers, especially when generating diffs for review. Simpler Git workflows and tools with visual interfaces for merging and diffs are essential for making the process smoother. Despite its challenges, I still prefer docs-as-code over proprietary tools because of its advantages, like using Markdown and generating diffs for review.
In this guest post, Diana Cheung explores how to learn AI through deliberate practice, applying it to enhance her prompting skills. She emphasizes intentional, systematic practice rather than mindless repetition, similar to how one would learn coding or other skills. She shares her attempts at using Claude.ai to work through editorial improvements to a GitHub project's API documentation.
Documentation scripts automate tasks such as building reference documentation and other repeated doc processes. This tutorial builds on the conceptual content in Use cases for AI: Develop build and publishing scripts. In this tutorial, I get more specific with prompting strategies and techniques, walking through a prompt to build a script for generating reference docs.
Zoomin recently released a Technical Content Benchmark Report for 2024. This is the company's second benchmark report on documentation metrics, analyzing data from 97.6 million user sessions across 136 countries. The report provides insights into average metrics like page views, bounce rates, time on page, GPT search usage, and more. In this Q&A with Rita Khait from Zoomin, she discusses how to interpret and use these benchmarks to set goals, improve content findability and performance, and demonstrate documentation's value to stakeholders and the business.