The isolation and loneliness of tech writing may get worse as AI accelerates
Isolation is something I've been thinking about lately. Although I have an abundant professional network and support probably 100+ engineers, PMs, and others, at times I do experience a sense of isolation in my role. I'm not sure if it's the holidays, or because now that I'm 50, I'm apparently at the bottom of the "U-shaped happiness curve," but I'm trying to understand how to navigate a world where my relationships with colleagues are increasingly transactional (purely information-based) and lack social depth. There are several reasons for this isolation, and good reason to believe that AI will only deepen it. Read more »
Documentation theater and the acceleration paradox -- podcast episode 3 with Fabrizio Ferri-Benedetti
In this episode, Fabrizio (from passo.uno) and I discuss the concept of documentation theater with auto-generated wikis, why visual IDEs like Antigravity beat CLIs for writing, and the liberation vs. acceleration paradox, where AI speeds up work but creates review bottlenecks. We also explore the dilemmas of labeling AI usage, why AI needs a good base of existing docs to function well, and how technical writers can stop doing plumbing work and start focusing on high-value strategic initiatives (efforts that might push the limits of what AI can even do). This post also contains short clips and segments from the episode, along with article links and a transcript. Read more »
Recording, transcript, and notes for AI Book Club discussion of Yuval Noah Harari's Nexus
This is a recording of our AI Book Club discussion of Nexus: A Brief History of Information Networks from the Stone Age to AI by Yuval Noah Harari, held Nov 16, 2025. Our discussion touches on a variety of topics, including self-correcting mechanisms, alien intelligence, corporate surveillance, algorithms, doomerism, stories and lists, democracy, the printing press, alignment, the dictator's dilemma, and more. This post also provides discussion questions, a transcript, and terms and definitions from the book. Read more »
Troubleshooting build processes with a fix-it mindset
One of the things I'm doing this week, which has thrown me off my content productivity track, is trying to fix some errors in the build logs for my reference docs. My SDK has eight proto-based APIs, each with its own reference documentation. The build script I run to generate the reference docs takes about 20 minutes and creates the HTML reference documentation for each API. The only problem is that I recently realized the build script has some errors. Read more »
Review of Yuval Noah Harari's "Nexus" — and why we don't need self-correcting mechanisms for "alien intelligence"
I just finished Yuval Noah Harari's Nexus: A Brief History of Information Networks from the Stone Age to AI. The book provides a high-level analysis of information systems throughout history, with warnings about the dangers AI poses to today's systems. It's a remarkable book with many historical insights and interpretations that made history click for me. But the book's central idea focuses on self-correcting mechanisms (SCMs) and how these SCMs are the linchpin of thriving democracies, so that's what I'll focus on in my review. The book also argues that AI is a form of alien intelligence that might incorrectly execute goals we don't want it to follow. Read more »
The difficulty of tracking and interpreting AI usage labels
Tracking and communicating AI usage in docs turns out to be not only technically challenging but also full of interpretive pitfalls. There's a double-edged sword at work. On the one hand, we want to track the degree to which AI is used in doc work so we can quantify, measure, and evaluate its impact. On the other hand, if a tech writer notes that they used AI for a documentation changelist, it might falsely create the impression that AI did all the work, reducing the perceived value of including the human at all. In this post, I'll explore these dilemmas. Read more »
Why long-running tasks autonomously carried out by agentic AI aren't the future of doc work, and might just be an illusion
As AI agents become more capable, there's growing eagerness to develop long-running tasks that operate autonomously with minimal human intervention. However, my experience suggests this fully autonomous mode doesn't apply to most documentation work. Most of my doc tasks, when I engage with AI, require constant iterative decision-making, course corrections, and collaborative problem-solving—more like a winding conversation with a thought partner than a straight-line prompt-to-result process. This human-in-the-loop requirement is why AI augments rather than replaces technical writers. Read more »
Guest post: Generative AI, technical writing, and evolving thoughts on future horizons, by Jeremy Rosselot-Merritt
In this thoughtful guest post, Jeremy Rosselot-Merritt, an assistant professor at James Madison University, wrestles with generative AI and its impact on the technical writing profession. Jeremy examines risks such as decisions being made by leaders who don't understand the variety and complexity of the tech writer role, or the perceived slowness of output from human writers compared to the scale of output from LLMs. Overall, Jeremy argues that Gen AI is another point on a long timeline of tech writers adapting to evolving tools and strategies (possibly now emphasizing context engineering), and he's confident tech writers will also adapt and continue as a profession. Read more »