Review of Yuval Noah Harari's "Nexus" — and why we don't need self-correcting mechanisms for "alien intelligence"

I just finished Yuval Noah Harari's Nexus: A Brief History of Information Networks from the Stone Age to AI. The book provides a high-level analysis of information systems throughout history, along with warnings about the dangers AI poses to today's systems. It's a remarkable book, full of historical insights and interpretations that made history click for me. But the central idea of the book is that self-correcting mechanisms (SCMs) are the linchpin of thriving democracies, so that's what I'll focus on in my review. The book also argues that AI is a form of alien intelligence that might pursue goals we never intended it to follow.

The difficulty of tracking and interpreting AI usage labels

Tracking and communicating AI usage in docs turns out to be not only technically challenging but also full of interpretive pitfalls. There's a double-edged sword at work. On the one hand, we want to track the degree to which AI is being used in doc work so we can quantify, measure, and evaluate its impact. On the other hand, if a tech writer calls out that they used AI for a documentation changelist, it might falsely create the impression that AI did all the work, reducing the perceived value of including the human at all. In this post, I'll explore these dilemmas.

Why long-running tasks autonomously carried out by agentic AI aren't the future of doc work, and might just be an illusion

As AI agents become more capable, there's growing eagerness to develop long-running tasks that operate autonomously with minimal human intervention. However, my experience suggests this fully autonomous mode doesn't apply to most documentation work. Most of my doc tasks, when I engage with AI, require constant iterative decision-making, course corrections, and collaborative problem-solving—more like a winding conversation with a thought partner than a straight-line prompt-to-result process. This human-in-the-loop requirement is why AI augments rather than replaces technical writers.

Guest post: Generative AI, technical writing, and evolving thoughts on future horizons, by Jeremy Rosselot-Merritt

In this thoughtful guest post, Jeremy Rosselot-Merritt, an assistant professor at James Madison University, wrestles with generative AI and its impact on the technical writing profession. Jeremy examines risks such as decisions being made by leaders who don't understand the variety and complexity of the tech writer role, or the perceived slowness of output from human writers compared to the scale of output from LLMs. Overall, Jeremy argues that Gen AI is another point on a long timeline of tech writers adapting to evolving tools and strategies (possibly now emphasizing context engineering), and he's confident tech writers will also adapt and continue as a profession.

Changing the AI narrative from liberation to acceleration

The most frequent story told about AI is that it will free us up from mundane tasks and allow us to focus on more impactful, strategic work. But the liberation part of that story might be misleading. In this post, I argue that AI's true effect is to accelerate the entire competitive landscape, increasing the pace of work for everyone. In this new, sped-up world, companies that replace human workers with AI for short-term gains, assuming that the pace of change is static, may find themselves falling behind in the long term.

Medium CEO explains how AI is changing writing

I recently listened to How AI Is Changing Writing — with Tony Stubblebine from the Big Technology podcast, hosted by Alex Kantrowitz. This was one of the more interesting and relevant episodes for me. I embedded the interview below and also added my own summary of the important points and my analysis.

Making it easy for people to review your changelists (Doc bug zero series)

The basic idea of doc bug zero, as I explained in Defining bug zero, is to clear out all the tickets in the doc issue queue, essentially to finish all your documentation work. Doing so would be the ultimate statement about the productivity gains from AI. Despite my attempts to get to bug zero, it still eludes me. I'm realizing that there's an art to working through a bug queue, and AI can only take me so far. Good project skills are also needed. One of those skills, which I'll address in this post, is making it easy for people to review the changelists, or pull requests. (The terminology used in my area of doc work is changelists, or CLs, so that's how I'll refer to them here.)

MCP servers and the role tech writers can play in shaping AI capabilities and outcomes -- podcast with Fabrizio Ferri Benedetti and Anandi Knuppel

In this podcast episode, Fabrizio Ferri Benedetti and I chat with guest Anandi Knuppel about MCP servers and the role that technical writers can play in shaping AI capabilities and outcomes. Anandi shares insights on how writers can optimize documentation for LLM performance and expands on opportunities to collaborate with developers around AI tools. Our discussion also touches on ways to automate style consistency in docs, and the future directions of technical writing given the abundance of AI tools, MCP servers, and the central role that language plays in it all.

Recording of AI book club session of 'Hands-On Large Language Models: Language Understanding and Generation', by Jay Alammar and Maarten Grootendorst

This is a recording of our AI book club discussion of Hands-On Large Language Models: Language Understanding and Generation by Jay Alammar and Maarten Grootendorst, held Oct 19, 2025. The book differs from others in the series in that it's a more technical exploration of how LLMs work, without any ethics discussions. It's less narrative and more engineering-oriented. Our discussion focuses on the book's conceptual details and on whether, to use an analogy, understanding the plane's engine helps pilots fly the airplane better.

Switching from Commento to LinkedIn for Blog Comments

After using the Commento commenting service on my blog for about 5 years, I've decided to remove it. The vast majority of comments were already happening on LinkedIn, so I'm embracing that platform for discussions going forward.

Podcast: How AI is changing the role of technical writers to context curators and content directors

In this conversational podcast, Fabrizio Ferri Benedetti (Passo.uno) and I talk about the impact of AI on the technical writing profession. We tackle the community's anxiety about job security, seen and felt almost everywhere but especially on Reddit, and analyze the evolution of the technical writer's role into a more strategic context curator or content director. We also cover practical applications of AI, such as using agents.md files to guide language models (with style overrides or API reference contexts), and the role documentation plays in improving AI's outputs (Fabri's phrase: "AI must RTFM").

Two strategies to succeed when AI seems to be eroding jobs around you

This past year in the tech comm community, there's been a lot of angst about job security with AI. In this post, I argue that our roles are shifting from writers to content directors. In this new role, the skills we need for success are twofold: I propose that we focus on developing (1) deep subject matter expertise and (2) tools expertise. I also share my optimistic view about why technical writers will remain essential in a future with ever-expanding technology. The TL;DR of that argument: even as AI might remove some jobs, the exponential growth of tech will create more opportunities and needs for documentation. Additionally, the accuracy of AI tools depends heavily on the quality of the documentation.

Book review of 'Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI' by Karen Hao

In my AI Book Club, we recently read Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, by Karen Hao. In this post, I'll briefly share some of my reactions to the book. The main focus of my review is Hao's treatment of the mission-driven ideology around AGI, which explains many of the motivations of the workers at OpenAI and similar AI companies.

Recording of AI Book Club discussion of Karen Hao's Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI

This is a recording of the AI Book Club discussion about Karen Hao's Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI. It's an informal, casual conversation among about half a dozen people online through Google Meet. You can also read a transcript and other details about the book here.

Defining bug zero and two obstacles: Reducing review time and gathering context

In my previous post about achieving bug zero, I introduced the goal and some motivations for it, but I didn't fully articulate the whole connection to AI. I also didn't explain much of what a doc bug queue is in my context, or why it even matters. In this post, I'll define doc bugs in more depth and explore two major obstacles to accelerating documentation work: review time and context gathering.