NotebookLM can automatically generate podcasts about any content you provide. In exploring these podcasts this week, I think they might fill in a piece that was missing from the GenAI picture for tech comm: these podcasts can help you understand and learn the content you're working on, even when other AI techniques might shortcut that learning.
This podcast explores GenAI in technical documentation scenarios, highlighting the AI features and capabilities provided in Document360. I talk with Saravana Kumar, CEO of Kovai.co, which makes Document360, about how AI is changing search functionality and reducing support costs in knowledge bases. We discuss practical applications of AI for technical writers, including automated tagging, SEO optimization, and glossary creation. Saravana also shares his thoughts on AI agent workflows, conversational search experiences, automating screenshot captures, and much more.
In this article, I provide a strategy for one of the most challenging topics in prompt engineering with docs: using AI to help with code samples. At one extreme, this could mean using AI to generate code samples from documentation; at the other, it could mean describing code that's already been written. My approach is more of a hybrid between the two. After describing a typical scenario, I explore the difference between test code and application code and why engineers are reluctant to provide code samples in documentation.
The following is a Q&A post with Paul Maass of Zoomin about Unified Knowledge. Paul says Unified Knowledge integrates diverse content sources to improve AI-powered service interfaces and customer support. He explains the challenges of content management, the role of AI in technical documentation, and gives insights into preparing for knowledge systems driven by AI.
After publishing a day-in-the-life guest post, I've been thinking about what a day in my life might look like. The last time I wrote a post like this must have been a decade ago. But my brain has been turning over these day-in-the-life details for some reason, so I decided to jot down some notes about my days. In this post, I note the main activities of two days of my week (a Wednesday and a Friday). I then follow this up with a detailed analysis of how I spend my time and why it's so challenging to get AI to accelerate doc work. Looking over my days, I see that they're constantly fragmented by microtasks and unnecessary overhead rather than devoted to immersive, deep-focus work.
The following is a guest post by Bonnie Denham. In this post, she responds to one of the most common questions people ask about technical writing careers: what does a typical day look like for a technical writer? I find it interesting to see how day-to-day tasks differ not only by company and industry, but also by the software, specializations, and work environments involved.
I've always been keen on setting goals, but only a few ever stick and become normalized as part of my daily activities. This past month, however, I came across the idea of habit stacking: the practice of attaching a new habit to an existing habit or routine, using the existing one as the trigger for the new one. Based on my recent experience, it seems to be working.
This tutorial will help you understand task decomposition by guiding you through the process of creating a complex tree diagram that's too sophisticated for an AI tool to generate in one pass. Whether or not you work with tree diagrams doesn't matter; the diagram is just an example of how to break down complex information into smaller chunks and pass them to AI.
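To make the decomposition pattern concrete, here's a minimal sketch (my own illustration, not code from the post) of generating a large Mermaid tree diagram one branch at a time. The `ask_model` stub and the branch names are placeholders for whatever LLM and subject matter you're working with.

```python
# Minimal task-decomposition sketch: generate a complex diagram branch by branch.
# ask_model() is a placeholder; swap in a call to your preferred LLM API.

def ask_model(prompt: str) -> str:
    return f"[model response for prompt starting: {prompt[:40]}...]"

# Step 1: describe each branch separately instead of asking for the whole tree at once.
branches = {
    "Authentication": "OAuth flow, API keys, token refresh",
    "Endpoints": "users, orders, webhooks",
    "Errors": "4xx client errors, 5xx server errors, retry guidance",
}

# Step 2: request one small subtree per branch.
subtrees = [
    ask_model(
        f"Create a Mermaid flowchart fragment for the '{name}' branch of an API "
        f"overview diagram. Cover only: {details}. Output nodes and edges only."
    )
    for name, details in branches.items()
]

# Step 3: ask the model to merge the fragments under a single root node.
merged = ask_model(
    "Combine these Mermaid fragments into one flowchart with a root node "
    "called 'API'. Keep every node and edge:\n\n" + "\n\n".join(subtrees)
)
print(merged)
```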
One of the advantages of recent GenAI updates is the massive token input context. When you can pass in an entire set of documentation as input, you open up the possibility for much more powerful prompts. In this tutorial, I share some quality-control prompts that take entire doc sets as input, and I explain some of the challenges of passing in an entire doc set.
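As a rough illustration of what "passing in an entire doc set" can look like in practice, here's a sketch (my own, with assumed file paths and a crude four-characters-per-token estimate) that bundles a folder of Markdown files into a single labeled prompt:

```python
# Sketch: bundle an entire Markdown doc set into one prompt input.
# The docs/ path and the ~4-characters-per-token estimate are assumptions.
from pathlib import Path

docs_dir = Path("docs")
parts = []
for md_file in sorted(docs_dir.rglob("*.md")):
    # Label each file so the model can cite which page a finding came from.
    parts.append(f"=== FILE: {md_file} ===\n{md_file.read_text(encoding='utf-8')}")

doc_set = "\n\n".join(parts)
print(f"{len(parts)} files, roughly {len(doc_set) // 4:,} tokens")

prompt = (
    "You are reviewing a documentation set for quality. Using the files below, "
    "list pages that contradict each other, citing the FILE labels.\n\n" + doc_set
)
# Send `prompt` to a long-context model of your choice.
```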
You can use AI prompts to create release notes for APIs by leveraging file diffs from regenerated reference documentation. The file diffs from version control tools provide a reliable, precise source of information about what changed in the release.
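Here's a rough sketch of that workflow, assuming the regenerated reference docs live in a git repo under a reference/ folder and the previous release is tagged last-release (both assumptions on my part):

```python
# Sketch: turn the diff of regenerated reference docs into draft release notes.
# The reference/ path and the last-release tag are assumptions for illustration.
import subprocess

diff = subprocess.run(
    ["git", "diff", "last-release", "--", "reference/"],
    capture_output=True, text=True, check=True,
).stdout

prompt = (
    "Below is a diff of regenerated API reference documentation. Summarize the "
    "changes as draft release notes grouped into 'New', 'Changed', and "
    "'Deprecated'. Use endpoint and parameter names exactly as they appear "
    "in the diff.\n\n" + diff
)
# Send `prompt` to your LLM of choice and edit the draft it returns.
```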
In this tutorial, you'll learn how to use AI to populate documentation templates with the source material you've gathered. For example, API overviews often follow a highly structured template. This technique can be a quick way to get an initial draft of documentation, which you can then edit and review with SMEs.
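A minimal sketch of the template-population idea, assuming your gathered source material sits in a gathered_notes.md file (the template headings here are illustrative, not the template from the post):

```python
# Sketch: ask a model to fill a structured API overview template from gathered notes.
from pathlib import Path

template = """# API overview
## What it does
## Who it's for
## Core concepts
## Typical workflow
## Prerequisites
"""

source_material = Path("gathered_notes.md").read_text(encoding="utf-8")

prompt = (
    "Fill in the documentation template below using only the source material "
    "provided. If a section isn't covered by the source material, write "
    "'TODO: needs SME input' instead of inventing content.\n\n"
    f"TEMPLATE:\n{template}\nSOURCE MATERIAL:\n{source_material}"
)
# Send `prompt` to your LLM of choice, then edit and review the draft with SMEs.
```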
One of the most successful strategies for using AI is to pass in an abundance of source material that can augment and inform the AI's responses. In this tutorial, I cover strategies for gathering this material, including what types of documents to look for, how to order them optimally, and pitfalls to watch out for, such as outdated or slanted information.
For AI tools to generate accurate information for the documentation you're writing, you need to pass in source material. This usually means meeting with engineers and product managers to gather information about the product. In this tutorial, I share prompts for turning those meeting transcriptions into organized, readable meeting summaries. These cleaned-up summaries can then function as input context for documentation-oriented prompts.
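For illustration, here's one way such a prompt might be structured, with an assumed transcript file name (this is my sketch, not the exact prompts from the tutorial):

```python
# Sketch: clean up a raw engineering meeting transcript into a structured summary
# that can later serve as input context for documentation prompts.
from pathlib import Path

transcript = Path("feature_sync_transcript.txt").read_text(encoding="utf-8")

prompt = (
    "Clean up and organize this meeting transcript. Produce: (1) a short summary "
    "of the feature discussed, (2) key technical details and terminology, "
    "(3) decisions made, and (4) open questions for follow-up. Correct obvious "
    "transcription errors, but don't add information that isn't in the "
    "transcript.\n\n" + transcript
)
# Send `prompt` to your LLM of choice; save the result as input for later doc prompts.
```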
Just as we need regular physical training to stave off physical decline, we also need regular training in our daily work. In this post, I reflect on the parallels between physical training and work training, resolving to find a regular rhythm for daily reflection and experimentation about work issues.
One of the advantages of recent GenAI updates is the massive token input context. When you can pass in an entire set of documentation as input, you open up the possibility for much more powerful prompts. In these prompts, the reference docs can serve as a key source of truth. User guide content can drift out of date, but a freshly generated reference doc should be accurate to the code base, for the most part. From this source of truth, you can do all sorts of things, such as identify outdated content in the user guide, see what's new between generated outputs, add links to your release notes, and more. In this article, I share eight quality-control prompts you can use when passing in your entire reference docs.
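As one example of the source-of-truth idea, here's a sketch (mine, with assumed file paths) of a prompt that checks a user guide against freshly generated reference docs:

```python
# Sketch: use freshly generated reference docs as the source of truth to flag
# outdated statements in the user guide. File paths are assumptions.
from pathlib import Path

reference = Path("reference/api.md").read_text(encoding="utf-8")
user_guide = Path("docs/user-guide.md").read_text(encoding="utf-8")

prompt = (
    "The REFERENCE section below was freshly generated from the code base and is "
    "the source of truth. Compare the USER GUIDE against it and list statements "
    "in the user guide that contradict the reference (renamed parameters, removed "
    "endpoints, changed defaults). Quote both the user guide sentence and the "
    "relevant reference line.\n\n"
    f"REFERENCE:\n{reference}\n\nUSER GUIDE:\n{user_guide}"
)
# Send `prompt` to a long-context model; repeat with variations for other checks.
```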