The cost of speaking up: Thoughts on "The War on Words" by Greg Lukianoff and Nadine Strossen

I recently read The War on Words: 10 Arguments Against Free Speech—And Why They Fail by Greg Lukianoff and Nadine Strossen, as part of the Seattle Intellectual Book Club. The book rebuts common arguments against free speech, making the case that speech should be protected except in clearly harmful scenarios such as defamation, fraud, or immediate physical harm. In this post, I reflect on the costs of speaking openly, especially in the age of cancel culture, where you might not run afoul of the law but can still lose your job or face intense online animosity. Read more »

The isolation and loneliness of tech writing may get worse as AI accelerates

Isolation is something I've been thinking about lately. Although I have an extensive professional network and support probably 100+ engineers, PMs, and others, at times I experience a sense of isolation in my role. I'm not sure if it's the holidays, or because now that I'm 50, I'm apparently at the bottom of the "U-shaped happiness curve," but I'm trying to understand how to navigate a world where my relationships with colleagues are increasingly transactional (purely information-based) and lack a social dimension. There are several reasons for this isolation, and good reason to believe that AI will only deepen it. Read more »

Documentation theater and the acceleration paradox — podcast episode 3 with Fabrizio Ferri-Benedetti

In this episode, Fabrizio (from passo.uno) and I discuss the concept of documentation theater with auto-generated wikis, why visual IDEs like Antigravity beat CLIs for writing, and the liberation vs. acceleration paradox, where AI speeds up work but creates review bottlenecks. We also explore the dilemmas of labeling AI usage, why AI needs a good base of existing docs to function well, and how technical writers can stop doing plumbing work and focus instead on higher-value strategic initiatives (efforts that might push the limits of what AI can even do). The post also includes short clips and segments from the episode, along with article links and a transcript. Read more »

Recording, transcript, and notes for AI Book Club discussion of Yuval Noah Harari's Nexus

This is a recording of our AI Book Club discussion of Nexus: A Brief History of Information Networks from the Stone Age to AI by Yuval Noah Harari, held Nov 16, 2025. Our discussion touches on a variety of topics, including self-correcting mechanisms, alien intelligence, corporate surveillance, algorithms, doomerism, stories and lists, democracy, the printing press, alignment, the dictator's dilemma, and more. This post also provides discussion questions, a transcript, and terms and definitions from the book. Read more »

Troubleshooting build processes with a fix-it mindset

One of the things I'm doing this week, which has thrown me off my content productivity track, is fixing errors that surfaced in the build logs for my reference docs. I have an SDK with 8 different proto-based APIs, each with its own reference documentation. The build script I run takes about 20 minutes and generates the HTML reference docs for each API. The only problem is that I recently realized the build script has some errors. Read more »
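
To give a sense of the triage involved, here's a minimal sketch in Python of the kind of loop I mean: run the doc build per API and flag any error lines in each log. The script name (build_reference_docs.sh), API names, and log paths are all hypothetical stand-ins, not the actual build setup from the post.

    # Minimal triage sketch: build reference docs per API and scan each
    # log for errors. All names here are hypothetical stand-ins, not the
    # real build setup.
    import subprocess
    from pathlib import Path

    APIS = [f"api_{n}" for n in range(1, 9)]  # stand-ins for the 8 proto-based APIs

    Path("build-logs").mkdir(exist_ok=True)

    for api in APIS:
        log = Path("build-logs") / f"{api}.log"
        with log.open("w") as f:
            # Capture stdout and stderr together so the log has everything.
            subprocess.run(["./build_reference_docs.sh", api],
                           stdout=f, stderr=subprocess.STDOUT)
        error_lines = [ln for ln in log.read_text().splitlines()
                       if "error" in ln.lower()]
        if error_lines:
            print(f"{api}: {len(error_lines)} suspect line(s) in {log}")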

Review of Yuval Noah Harari's "Nexus" — and why we don't need self-correcting mechanisms for "alien intelligence"

I just finished Yuval Noah Harari's Nexus: A Brief History of Information Networks from the Stone Age to AI. The book offers a high-level analysis of information systems throughout history, with warnings about the dangers AI poses to today's systems. It's a remarkable book with many historical insights and interpretations that made history click for me. But the book's central idea is that self-correcting mechanisms (SCMs) are the linchpin of thriving democracies, so that's what I'll focus on in my review. The book also argues that AI is a form of alien intelligence that might pursue goals in ways we never intended. Read more »

The difficulty of tracking and interpreting AI usage labels

Tracking and communicating AI usage in docs turns out to be not only technically challenging but also full of interpretive pitfalls. There's a double-edged sword at work. On the one hand, we want to track the degree to which AI is used in doc work so we can quantify, measure, and evaluate its impact. On the other hand, if a tech writer notes that they used AI for a documentation changelist, it might create the false impression that AI did all the work, reducing the perceived value of including the human at all. In this post, I explore these dilemmas. Read more »

Why long-running tasks autonomously carried out by agentic AI aren't the future of doc work, and might just be an illusion

As AI agents become more capable, there's growing eagerness to hand them long-running tasks that they carry out autonomously with minimal human intervention. However, my experience suggests this fully autonomous mode doesn't fit most documentation work. Most of my doc tasks, when I engage with AI, require constant iterative decision-making, course corrections, and collaborative problem-solving—more like a winding conversation with a thought partner than a straight-line prompt-to-result process. This human-in-the-loop requirement is why AI augments rather than replaces technical writers. Read more »

Content from idratherbewriting.com.