Use cases for AI: Create summaries
Providing summaries of content is one of the most useful and powerful capabilities of AI chatbots powered by large language models (LLMs), such as ChatGPT, Bard, and Claude. These chatbots can help tech writers with a variety of documentation tasks: generating summaries at the top of each document, writing product overviews that summarize features and capabilities, and processing long articles, bug threads, meeting notes, and other documents more quickly.
- Summary patterns in tech docs
- Rich input sources make good summaries possible
- Speed reading
- Adjusting the simplicity levels of the explanation
- Experimenting with summaries
- The verdict? Claude’s responses are better than mine, but also problematic
- The summary of summaries
- Conclusion
Summary patterns in tech docs
At every level in tech docs, summaries provide distilled information about the level below them. These summaries help users navigate information. Specifically, good tech docs follow summary patterns like this:
- Product overviews provide summaries of the chapters/sections they contain.
- Chapter/section overviews provide summaries of the topics they contain.
- Topics provide summaries of the subheadings they contain. They also provide a table of contents that outlines the content below.
- Subheading titles summarize the paragraphs they contain.
- Topic sentences provide summaries of the paragraphs they introduce.
When information follows this pattern, the docs become easy to skim and read. You can read through a product overview before deciding whether the product is relevant to your needs. You can skim through a chapter overview to decide if the chapter is relevant. You can skim through subheadings to decide whether the sections are worth reading, and so on.
In short, summaries facilitate information scanning and retrieval in key ways. Tech docs that lack summaries are hard to read. That’s why the ability to reduce and distill vast amounts of information down to its essence is a key principle for simplifying complexity.
Summarizing even a 2,000-word topic into 100 words can be mentally taxing, as summaries require writers to comprehend the whole and then distill the details into a concise and accurate representation (like making a poem out of a novel).
To create summaries, I often make mini-outlines of the topic’s key points and then try to integrate the various points into smooth-flowing sentences. It’s not an easy task.
Rich input sources make good summaries possible
Can AI chatbots help with summaries? Yes. AI tools excel at summaries partly because the input source (the content) is provided to the LLM, which makes them less likely to hallucinate. In contrast, when the AI tool has to draw on patterns from its training data across the internet to generate information, the chances of making bogus or spurious connections are much higher.
Additionally, when writers are familiar with the topic’s content (for example, when they wrote the content that they want AI to summarize), they can easily assess whether the summary is on target.
With Claude.ai, the input source length can be much larger than with other LLMs — about 100k tokens, or 60,000 words. This allows you to paste in nearly any type of content for Claude to summarize.
There’s even a workaround if you have longer content requirements. To summarize lengthier material (> 60k words), chunk it up into pieces, then generate summaries of the pieces, then generate a summary of the summaries.
For example, the comprehensive PDF of my API course (containing all chapters) is too long for Claude to process, as it’s nearly 1,000 pages. Because of this, I’ve broken the content down into individual chapters, each under 10 MB (Claude’s maximum PDF size). If you wanted to summarize all 1,000 pages, you could summarize each individual chapter, compile the chapter summaries, and then ask an AI tool to summarize the chapter summaries. (More on this later in the article.)
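Here’s a minimal sketch of that chunk-and-summarize workflow in Python. The llm_summarize() helper and the chunk size are hypothetical placeholders rather than any specific product’s API; wire the helper up to whatever chatbot or SDK you use.

```python
# Sketch of the "summary of summaries" workaround. llm_summarize() is a
# hypothetical placeholder for a call to your chatbot of choice.

def llm_summarize(text: str, instruction: str = "Summarize this content in about 100 words.") -> str:
    """Send the instruction plus the text to your LLM and return its reply."""
    raise NotImplementedError("Connect this to your own LLM client.")

def chunk_text(text: str, max_words: int = 50_000) -> list[str]:
    """Split text into chunks small enough to fit the model's context window."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

def summarize_long_document(text: str) -> str:
    chunks = chunk_text(text)
    if len(chunks) == 1:
        return llm_summarize(chunks[0])
    # Summarize each chunk first, then summarize the combined chunk summaries.
    chunk_summaries = [llm_summarize(chunk) for chunk in chunks]
    return llm_summarize(
        "\n\n".join(chunk_summaries),
        "Write one summary that synthesizes these section summaries.",
    )
```

The same pattern works manually, too: paste each chapter in separately, collect the summaries, and then paste the collected summaries back in for a final pass.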
Speed reading
Besides authoring scenarios, another way tech writers can use AI summaries is in consuming content. In “How Should I Use A.I. Chatbots Like ChatGPT?”, New York Times columnist Kevin Roose says AI can provide SparkNotes for everything:
One of the most powerful abilities of A.I. language models is quickly summarizing large amounts of text.
I’ve used A.I. programs to “read” long articles for me, and summarize dense academic papers. Usually they get the broad strokes right, even if some of the details are missing.
This power really shines when you combine it with other A.I. abilities, like transcribing audio and video recordings. Suddenly it becomes possible to condense a two-hour podcast into bullet points, or get a concise summary of a work meeting while you’re still in the meeting.
If you’re bored of reading this newsletter, you could ask ChatGPT or Bing to summarize the rest of it for you, or to rewrite it as a jaunty limerick. I promise I won’t be offended.
Asking AI chatbots for summaries of long articles is how I frequently use AI as well. Here are a few scenarios where I ask AI to summarize content:
- Summarize a Reddit or Hacker News thread that has a large number of comments.
- Summarize a long New York Times article that takes too long to get to the point.
- Summarize a journal article from a publication such as Technical Communication Journal (getting to the conclusion and argument more quickly).
- Summarize the comments thread on a lengthy bug.
Adjusting the simplicity levels of the explanation
Beyond merely providing summaries, AI tools are excellent at simplifying complex concepts, making content understandable at, say, a second-grade reading level. For example, suppose you paste in an academic journal article and ask for a summary, but the summary is still opaque and hard to understand. You could ask the AI to explain it like you’re 5 years old, to explain it like you’re a high schooler, or just to provide a simpler explanation. AI tools often do an amazing job at simplification.
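If you find yourself doing this often, you can script the layered prompts. The snippet below is only a sketch of prompt construction; the reading levels and wording are illustrative, and the placeholder string stands in for whatever content you paste in.

```python
# Build progressively simpler prompts for the same source text. Send each prompt
# to your chatbot of choice; the article text here is a placeholder.

article = "...paste the academic article or dense summary here..."

prompts = [
    f"Summarize this article in about 150 words:\n\n{article}",
    f"Explain this article like I'm a high schooler:\n\n{article}",
    f"Explain this article like I'm 5 years old, with a concrete example:\n\n{article}",
]

for prompt in prompts:
    # Each prompt would be sent as its own request; print just the instruction line here.
    print(prompt.splitlines()[0])
```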
Kevin Roose explains how he uses ChatGPT to simplify concepts:
For the last few months, I’ve been using ChatGPT as my personal tutor every time I get curious about a new topic. If it’s, say, fractional reserve banking, I’ll start by asking ChatGPT to “explain fractional reserve banking at a high school level.” (Or, if I’m still confused, “at a middle school level.”)
ChatGPT and other A.I. chatbots do well with these types of prompts, using their skills at natural language processing, as long as you don’t ask about very recent events or extremely obscure topics. Calling on an infinitely patient, zero-cost tutor to guide me through new subjects feels like a superpower. Used properly, they can be amazing teaching tools. (How Should I Use A.I. Chatbots Like ChatGPT?)
Kevin Roose and Casey Newton have a tech podcast called Hard Fork, which covers AI and other tech topics. It’s my favorite podcast to listen to.
Additionally, I often ask AI tools to provide concrete examples as a way of clarifying concepts. Reading 10 concrete examples of any concept usually clears it up.
Experimenting with summaries
To experiment with summaries, I decided to compare Claude’s summaries against the chapter summaries in my API course. I already had descriptions written for the 18 chapters in the course; these descriptions preface each chapter. I uploaded each chapter PDF to Claude and asked it to write a short description of the chapter.
Here are the results. My original description is “Tom” and the AI-written description is “Claude.” Which summaries are better?
Chapter 1: Introduction to REST APIs
Tom: REST APIs are flourishing in the marketplace, and the web is becoming a mashup of interconnected APIs. REST APIs consist of requests to and responses from a web server. Job prospects are hot for technical writers who can write developer documentation. This course will help you break into API documentation, especially if you complete the many portfolio-building activities.
Claude:
Chapter 2: Using an API like a developer
Tom: Playing a brief role as a developer will help you understand developer needs better, as well as what developers typically look for in API documentation. Developers often use tools such as Postman or curl to make calls. They look at the structure of the response, and they dynamically integrate the needed information into web pages and other applications.
Claude:
Chapter 3: Documenting API endpoints
Tom: Reference documentation for API endpoints consists of five general sections: resource descriptions, endpoints and methods, parameters, sample requests, and sample responses and schemas. To document the reference endpoints of an API, provide detailed information for each of these sections.
Claude:
Chapter 4: OpenAPI spec and generated reference docs
Tom: The OpenAPI specification provides a formal way of describing your REST API and includes all the reference sections mentioned in the previous section, Documenting API endpoints. Display frameworks such as Swagger UI can parse the OpenAPI specification and generate interactive documentation that lets users try out endpoints while learning about the API.
Claude:
Chapter 5: Step-by-step OpenAPI code tutorial
Tom: If you want to build the OpenAPI spec by hand in a text editor, this tutorial will help you get started. Even though documentation isn’t necessarily linear, I break down the various sections in the OpenAPI YAML file and go through them line by line. Approaching the OpenAPI file manually requires a more advanced understanding of the spec, YAML or JSON syntax, and how to approach strategies specific to your API.
Claude:
Chapter 6: Testing API docs
Tom: Testing your documentation is critical to providing accurate, thorough information. With API and developer docs, due to the high level of complexity and engineering requirements, technical writers might be inclined to simply take information that engineers give them and incorporate it wholesale, without personally testing it. Merely playing an editorial/publishing function, however, can reduce your role to that of an engineer’s secretary.
Claude:
Chapter 7: Conceptual topics in API docs
Tom: While reference topics in APIs generally receive the most attention, the conceptual topics, such as getting started tutorials, information about authorization, rate limiting, status and error codes, quick reference guides, and other topics constitute about half of the documentation. These topics are usually handled by technical writers more than engineers. You can evaluate the quality of API documentation in part by looking to see whether it includes these conceptual topics.
Claude:
Chapter 8: Code tutorials
Tom: Documenting code poses one of the most challenging tasks for technical writers, especially if the writers don’t have an engineering background or familiarity with the code. Even without deeper engineering expertise, you can still document code — everything from code snippets to full-fledged sample apps. This section explores techniques and strategies for documenting code.
Claude:
Chapter 9: The writing process
Tom: The writing process refers to the way you write and develop content. There are five main steps to the writing process: planning, information gathering, writing, reviewing, and publishing. Although these aspects of the writing process are general to almost every type of writing, in this section I tailor the information to be specific to writing technical documentation.
Claude:
Chapter 10: Publishing API docs
Tom: API documentation often follows a docs-as-code workflow, where the tools to author and publish documentation align closely with the same tools developers use to write, manage, build, and deploy code. Docs-as-code involves using lightweight formats such as Markdown, collaborating through Git or other version control, building your doc site with a static site generator, and deploying it through a continuous build model, where the build happens on the server when you push commits to a particular branch.
Claude:
Chapter 11: Publishing tools
Tom: This section focuses on various publishing tools and tutorials related to those tools. The chapter continues similar themes of publishing from the previous chapter but dives into the details of several tools, a publishing workflow, a case study, and more.
Claude:
Chapter 12: Thriving in the API doc space
Tom: Getting an API documentation job and thriving requires you to demonstrate your technical aptitude through a writing portfolio. The portfolio should include samples of documentation written for developers. One way to build this portfolio is by working on an open-source project. You also need to live in a tech hub where API documentation jobs are available, such as California, Texas, New York, or Virginia. Overall, thriving in the developer documentation space requires you to continually learn a healthy dose of code, which can be challenging.
Claude:
Chapter 13: Native library APIs
Tom: Native library APIs refer to Java, C++, or other programming-specific APIs. In this model, rather than making requests across the web for the information, you download a library of code and integrate it into your project. The library is compiled directly into your application’s build (rather than accessed through web protocols as with REST APIs). Although this type of API is less common, I include it here in part to clarify what makes REST APIs so different from native library APIs.
Claude:
Chapter 14: Processes and methodology
Tom: The process for managing developer portal documentation includes a number of tasks outside of content development. A DX (developer experience) content strategist plays a key role in defining workflows, standards, user flows, processes, and other aspects of the site, including workflows for different types of documentation. In this section, I cover how to manage the doc work, including processes for fixing doc bugs versus tackling larger doc projects. I also explain processes for releasing SDKs, managing external contributions, and more.
Claude:
Chapter 15: Metrics and measurement
Tom: Metrics and measurement addresses ways to measure API documentation quality and how to track your progress on improvement. You can use the quality checklist here to review essential components of documentation and decide how your API docs measure up. The checklist can be a way to investigate, analyze, and interrogate your documentation from another perspective and discover ways to improve it.
Claude:
Chapter 16: AI and API documentation
Tom: AI tools can help speed up authoring, demystify code, automate publishing tasks, populate templates, and more. This section looks at how to use AI tools within the context of API documentation.
Claude:
Chapter 17: Glossary
Tom: This section contains definitions of glossary terms. It also contains term groups so you can learn all the key terms within a specific subject area.
Claude:
Chapter 18: Additional resources
Tom: This section contains additional exercises and information, such as more activities for calling APIs, or more info about alternative specifications. I also archive topics here that don’t seem to fit well anywhere else.
Claude:
The verdict? Claude’s responses are better than mine, but also problematic
Most of the time, I like Claude’s descriptions more than mine. This is partly because summaries are hard to write, and I dislike writing them. I would gladly hand the pen to Claude to craft a summary from a chapter PDF that I’ve written anyway. AI tools just seem to do a better job at distilling large amounts of information into summaries than I do — at least for the time I’m willing to put into the summary task. I’d rather save my cognitive bandwidth for content creation and development.
That said, Claude’s responses (which are similar to other AI chatbot responses) are problematic: they describe content from an external perspective, with phrases such as “This chapter discusses …” or “The chapter includes …” Remember back in high school when nearly every essay started out with an introductory paragraph that described what the author was going to say instead of just saying it? For example,
“In this essay, I explore widgets. I argue that widgets are great, and that everyone needs widgets. I list the main reasons why people fail to adopt widgets, and then how we can overcome these reasons. Finally, I provide some examples of widgets in real use cases.”
When I later became a composition teacher, I learned to despise these constructions. I would regularly write, “Don’t tell me what you’re going to say, just say it!” Unsurprisingly, this is Claude’s problem too. Claude describes the chapters from a third-party perspective, as if it’s a robot analyzing what someone else has written rather than making the points directly.
Additionally, when you come to recognize what AI-written summaries look like, you can more easily identify them. Someone recently sent me a newsletter that had all the hallmarks of AI-written summaries, and I could barely get through the paragraphs without having this same composition-teacher reaction.
Although I liked Claude’s summaries, I ended up rewriting them in a more direct, active way as much as possible. I couldn’t entirely omit the “This chapter discusses…” convention, but I tried to. It was helpful to have Claude’s summary as a starting point. I also tried to train ChatGPT to shift the perspective of these passages toward the approach I wanted, but it usually failed, and I had to take the wheel. At any rate, to read my fixes for the summaries, see the Course organization section in the introduction.
The summary of summaries
I mentioned earlier that a workaround for Claude’s input limits is to paste in the various summaries and then ask Claude to generate a summary of those summaries. Here’s the outcome with all 18 chapter summaries pasted into Claude:
This book provides a comprehensive guide to creating effective documentation for REST APIs. It approaches the topic from the perspective of a developer first using an API, then as a technical writer documenting the API. The book covers foundational concepts like making API calls, analyzing responses, and displaying data dynamically. Core skills for API documentation are then addressed, including documenting API endpoints, parameters, requests, and responses. Best practices are provided for reference docs, conceptual topics, tutorials, code samples, and testing.
The book explores tools and specifications like OpenAPI that power interactive documentation through validated machine-readable specs. It gives tutorials for creating OpenAPI files manually and with visual editors. Workflows for publishing API docs are covered in detail, including docs-as-code tools like static site generators, version control with Git, continuous deployment, and reviewing with pull requests.
Additional topics include documenting native library APIs like Javadoc, evaluating docs through quality checklists, leveraging AI for documentation tasks, creating glossaries, and managing documentation processes. The book aims to provide both strategic advice and practical skills for technical writers creating developer documentation for REST APIs. It combines big picture concepts with realistic tutorials and activities centered around sample APIs. Readers will gain the knowledge needed to create clear, usable API documentation that empowers developers to use APIs effectively.
I could definitely see this copy on a book jacket. Again, the style follows the convention of a third party analyzing the content, which I dislike, but my pet peeve about this style might not be entirely reasonable.
Conclusion
In the case of writing summaries, AI tools work best when starting with an input source. If I instead ask Claude to write a chapter based on a summary, the results are much less compelling. For example, I took the summary of chapter 2 and asked Claude to write the actual chapter content. Here’s my prompt.
Prompt:
Here’s Claude’s response:
Chapter 4: Exploring a REST API as a Developer
Introduction
Understanding the developer experience is crucial for writing effective API documentation. This chapter provides hands-on practice using a sample REST API to see firsthand how developers interact with APIs. We will go through the end-to-end workflow of getting API keys, making requests, processing responses, and displaying data. These activities will build empathy for developer needs and requirements. In the next chapter, we'll switch roles and begin documenting this API, bringing hard-won lessons from the developer perspective.
Getting Set Up
Let's first get set up to start making requests. For this exercise, we will use the SpaceX API (https://api.spacex.com), a free REST API with data on SpaceX launches, rockets, and more.
To begin, you'll need to register for an API key. The SpaceX API uses token authentication, with the key passed in an Authorization header. Registering provides a test token we can use.
Now we're ready to start making requests. We'll use Postman, a popular GUI tool for testing APIs. Postman has built-in support for authorization. Install Postman and import the SpaceX API from the app library to get started.
Making GET Requests
Let's start with a simple GET request to retrieve all SpaceX launches. The endpoint is https://api.spacex.com/v4/launches
.
Enter this URL in Postman. Then click Params and enter your API key in the authorization field. Send the request.
You should get a 200 OK response with launch data in JSON format. The response body contains an array of launch objects. Each object provides details like launch date, rocket name, success indicator, and more.
We can pass parameters to filter the response. Let's add launch_success=true to get only successful launches. Update the URL:
https://api.spacex.com/v4/launches?launch_success=true
Now the response only contains successful launches. We could also filter by other fields like launch year or rocket ID.
Making POST/PATCH Requests
So far we've only done basic GET requests. Let's try a POST to create a resource. The /launches endpoint supports POST to add a new launch.
Switch to the POST tab in Postman. Set the request body to JSON for a new launch. Provide launch details like name, date, and rocket ID.
Send the request. You should get a 201 Created response with the ID of the new launch. We can GET that launch ID to confirm it was added.
For updating resources, we use PATCH requests. The /launches endpoint supports PATCH to modify launches.
Let's change the launch date we created. Switch the request to PATCH and set the ID path param. Provide the new date in the body. Send the request to update the launch.
Processing JSON Responses
Until now we've looked at JSON responses in Postman. But in an actual app, how do we work with the JSON programmatically?
Let's use a little JavaScript code to parse a launch response. We'll access fields from the JSON object.
First make a GET request using curl and save the response:
curl -H "Authorization: Bearer <your_key>" https://api.spacex.com/v4/launches/latest > response.json
Now open response.json and load it into a JavaScript variable:
const response = require('./response.json');
This parses the JSON into a JavaScript object. We can access fields directly:
const launchName = response.name;
const launchDate = response.date_local;
We could build UI or pass data to other functions. Processing JSON is critical for working with API data.
Displaying API Responses
Let's combine what we've learned to display launch data on a webpage. We'll:
- Make API call
- Process response
- Insert data into HTML
First, we set up a simple HTML page with placeholders for the data:
<html>
  <h1>Latest SpaceX Launch</h1>
  <h2>Launch: <span id="name"></span></h2>
  <p>Date: <span id="date"></span></p>
</html>
Next, we use fetch() in JavaScript to GET the latest launch and process the response:
fetch('https://api.spacex.com/v4/launches/latest', {
  headers: { 'Authorization': 'Bearer <your_key>' }
})
  .then(response => response.json())
  .then(data => {
    document.getElementById('name').textContent = data.name;
    document.getElementById('date').textContent = data.date_local;
  });
This makes the API call, parses the response to JSON, and inserts launch details into the page. The result is a simple page dynamically populated from API data!
Conclusion
This chapter walked through hands-on API exploration from authentication to displaying data. Key lessons for writing documentation include:
- Guide developers through all steps from getting credentials to processing responses
- Provide examples developers can easily adapt like Postman requests
- Explain how to parse and work with JSON programmatically
- Include sample code for using API data in apps and sites
Approaching documentation from this developer perspective will ensure you provide what developers need to use the API successfully. In the next chapter, we'll apply these lessons as we move to the other side and begin documenting the API.
As you can see, this isn’t actual content; it’s generic filler with invented details. This is why I haven’t recommended using AI tools to write content from scratch like this.
Also, note that although Claude can accept a large input, its outputs tend to be brief. If you really want to try generating content based on an outline, it’s better to approach it as I described in Step 2: Expansion of detail in the thematic analysis chapter. That approach involves asking for a high-level outline and then proceeding through each of the points with detailed requests rather than requesting the entire content in one response.
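If you wanted to script that expansion-of-detail approach, it might look something like the sketch below. The ask_llm() helper, the topic string, and the prompt wording are hypothetical placeholders, not the exact prompts from that chapter.

```python
# Sketch of the expansion-of-detail approach: request an outline first, then
# expand each outline point in its own request. ask_llm() is a hypothetical
# placeholder for a call to your chatbot of choice.

def ask_llm(prompt: str) -> str:
    """Send the prompt to your LLM and return its reply."""
    raise NotImplementedError("Connect this to your own LLM client.")

topic = "Using an API like a developer"

# Step 1: get a high-level outline.
outline = ask_llm(f"Create a high-level outline (5-7 bullet points) for a chapter on: {topic}")

# Step 2: expand each point separately, carrying the outline along as context.
sections = []
for line in outline.splitlines():
    point = line.strip("-* ").strip()
    if not point:
        continue
    sections.append(ask_llm(
        f"Here is the chapter outline:\n{outline}\n\n"
        f"Write a detailed section for this point only: {point}"
    ))

draft = "\n\n".join(sections)
```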
Despite how poor the content generation is here, starting a draft this way can give you a lot of ideas and something to work with. But don’t expect much of the AI-written chapter. In sum, AI chatbots are decent at summarizing content you provide them, but not so good at creating original content.