Quality checklist for API docs (simplified HTML) -- comprehensive version

FINDABILITY

Findable in search

The content is indexed in a general search engine and findable through searches that combine the product name with key tasks.

Assessment:

Release notes present

Release notes track changes to the product and documentation.

Assessment:

Site-specific search available

The doc site has a site-specific search that lets users search within the documentation site itself.

Assessment:

Developer portal hub context

If there's a large body of documentation relevant to users, it's organized in a developer portal of some kind.

Assessment:

UI pages link to docs

If there's a UI (such as with an administrative console), links within the UI connect users to the relevant documentation.

Assessment:

Main organization isn't an FAQ

The content doesn't have an endless FAQ with information that should have been integrated into more logical places in the document.

Assessment:

Version selection is available

If content has multiple versions, the versions are called out visibly in the topic, and a selector or link might allow users to navigate to the other versions.

Assessment:

Easy path to top 5 pages

There’s an easy path for users to find the top 5 most-visited pages on the site. This requires you to look at metrics to determine these pages, and then assess the flow to those pages.

Assessment:

Topics have article-like length

Each topic consolidates much of the necessary information about the subject in a standalone, modular way. The topics don't fragment the subject into too many small topics that require the user to bounce around, losing context. At the same time, entire guides aren't shoved into single topics, making them interminably long. The length of a topic should be roughly that of a web article (for example, 500-3,000 words).
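Auditing topic length can be partially automated with a small script that counts words and flags topics outside the suggested band. A minimal sketch, assuming plain Markdown/HTML-ish source files (the 500-3,000 word band comes from the guideline above; everything else is illustrative):

```python
import re

def word_count(text: str) -> int:
    """Count whitespace-delimited words, stripping simple tags first."""
    stripped = re.sub(r"<[^>]+>", " ", text)
    return len(stripped.split())

def check_topic_length(text: str, minimum: int = 500, maximum: int = 3000) -> str:
    """Return a verdict for one topic: 'too short', 'ok', or 'too long'."""
    n = word_count(text)
    if n < minimum:
        return "too short"
    if n > maximum:
        return "too long"
    return "ok"
```

Run over a docs folder, this gives a quick list of candidates for consolidation or splitting.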

Assessment:

Content follows task-based organization

Content is organized primarily by task and from a user's perspective rather than solely by doc type. The exception is that reference content is often consolidated on its own in an auto-generated output.

Assessment:

Developer portal has IA flow

The information architecture (IA) in the developer portal organizes content within a larger context or story, even if it's just a list of links under various categories.

Assessment:

ACCURACY

Steps are accurate

The steps in the tasks accurately lead to the results promised by the task, without missing any details. For example, if the instructions say to click a button name, the button is named the same way in the interface. If the instructions say to use a class, the class is spelled as it appears in the code library, etc.

Assessment:

Code samples work

Code samples that can be copied and pasted actually work.

Assessment:

Content reviewed within past year

Content has been reviewed by a subject matter expert within the past year. Ideally, each topic should include metadata such as the last-reviewed timestamp, last author, and the group that owns the content.

Assessment:

Timestamps are visible

The documentation provides a visible timestamp of the last time it was edited so that users can gauge how current the documentation is.

Assessment:

No broken links

Links point to correct pages or are appropriately handled by redirects to equivalent pages.
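Link audits can be partially scripted: extract the hrefs from each page, then request each URL and record anything that doesn't resolve. A sketch of the extraction step using only the standard library (the fetching step, which would use an HTTP client and follow redirects, is omitted):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect href values from <a> tags so they can be checked later."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_links(html: str) -> list:
    """Return every href found in an HTML string, in document order."""
    parser = LinkCollector()
    parser.feed(html)
    return parser.links
```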

Assessment:

Instructions are consistent

Information isn't repeated in confusing, redundant, or inconsistent ways. For example, the documentation doesn't explain how to do a task one way in Topic A but then a different way in Topic B. If content is reused, the reuse is usually single-sourced to reduce inconsistency.

Assessment:

Screenshots are up to date

Screenshots and other visuals are up to date with the state of the interface (if there is one).

Assessment:

Deprecated features are noted

Features that are no longer supported (or which have been deprecated) are clearly noted as such in the documentation. Preferably, if a feature has been deprecated, a migration path to an alternative solution is provided.

Assessment:

App code matches doc code

Code in sample apps matches the code described in the documentation. The sample app hasn't evolved in ways that no longer match the documentation.

Assessment:

RELEVANCE

Key use cases are documented

The documentation doesn't just provide reference information (e.g., auto-generated API documentation) but also explains how to use the API with tutorials guiding users through common use cases and journeys. The content should address the *most common* use cases intended for the product.

Assessment:

Code samples exist

Code samples showing sample ways to use the API (or similar tools) are provided. Ideally, the code samples are available in the user's target language. This might mean providing multiple code samples.

Assessment:

Support options noted

Options for contact or support are provided, even if the support merely involves posting to a peer-monitored forum.

Assessment:

Reference material is interactive

The docs allow users to interact with the functions, API calls, or other commands, often using their own custom data. Typically, with REST APIs, an API explorer allows users to try out requests using their own data. Other sites might use Jupyter Notebooks to run code that can be customized. Seeing responses with your own data can make the API seem more relevant.

Assessment:

Feedback options available

A feedback option is included in the docs so readers can relay feedback about their experience. When users provide feedback, they get the sense that someone is listening, which makes the docs feel more relevant to them.

Assessment:

Docs are published when features released

Documentation is available at the same time the feature is released and made available to users, not after the fact.

Assessment:

GitHub is actively monitored

If the product has a code repository on GitHub (or similar), this repository is actively monitored when users log issues or submit pull requests. Users aren't simply ignored.

Assessment:

CLARITY

Product overview page answers "wh" questions

The overview explains the big picture and describes the problem that the tool or service addresses. Common who/what/where/why questions are answered here.

Assessment:

Access and authorization explained

Details about how to get access, permissions, and authorization to use the API are provided. For example, this topic might cover how to authorize an API call with API keys.
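An authorization topic typically includes a short snippet showing where the API key goes. A hedged sketch (the `X-API-Key` header name and the endpoint URL are invented for illustration; many APIs use `Authorization: Bearer ...` instead):

```python
import urllib.request

def build_authorized_request(url: str, api_key: str) -> urllib.request.Request:
    """Attach an API key header to a request.

    The X-API-Key header name is illustrative only; check the API's
    reference docs for the exact scheme (e.g., Authorization: Bearer).
    """
    request = urllib.request.Request(url)
    request.add_header("X-API-Key", api_key)
    return request

# Usage (no network call is made here):
req = build_authorized_request("https://api.example.com/v1/items", "YOUR_API_KEY")
```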

Assessment:

Overview addresses use cases

The overview provides a high-level description of the main use cases or business objectives of the product. This allows users to get a sense of what the API is all about.

Assessment:

Overview has architectural diagram and explanation

The overview has a diagram of the main components and how they interact. This provides users with a glimpse of the whole.

Assessment:

Overview has index of assets that the product offers

If there's an SDK or developer kit that users can download, the contents of this download are described. This is similar to product instructions that start by identifying all parts that should have arrived in a package.

Assessment:

Subsystems have their own overview pages

For larger systems that might have multiple subsystems (e.g., groups of APIs for different scenarios), these subsystems have their own landing pages that resemble the higher-level overview (with use cases, diagrams, getting started links) but scoped to that specific subsystem.

Assessment:

Getting started tutorial exists

A getting started tutorial is provided for users to get started in an end-to-end way with the product, producing a sample output that builds their confidence. This topic might provide info on how to sign up, register, get API keys or permissions, and start using the API. (This topic might link to the authorization topic but is more comprehensive in scope. The purpose of this topic is frictionless onboarding.)

Assessment:

Sandbox environment available

A sandbox environment is available for users to try out the tool/system/API on their own and get a better understanding of how it works.

Assessment:

README file exists in code repo

If the product has a code repository, the repository contains a README file that provides information similar to the overview (but abbreviated), including information on how to contribute if applicable. (The README might simply link to the documentation overview but also include unique sections such as how to contribute in the repo.)

Assessment:

Tasks have prerequisites sections

Each task has a "Prerequisites" section that explains knowledge requirements, tool requirements, essential concepts, etc., necessary for completing the task.

Assessment:

Tasks have numbered steps or sections

Tasks are articulated through a series of specific, usually numbered steps or sections that reduce the complexity of the task. More complex tasks might require more granular and numerous steps or sections. (This is the primary way that technical writers simplify complexity.)

Assessment:

Sample app available

In more robust docs, a sample app might serve as a reference implementation for users.

Assessment:

Code sample has inline comments

Code samples are appropriately commented, either with inline comments, comments that preface or follow the sample, or both. Long code samples with no explanation aren't just dropped in.

Assessment:

Any role-based permissions are noted

If there are different roles and permission levels that correspond to varying capabilities, these roles are noted in all places they apply, such as within the individual API reference. For example, if only admin roles can use a certain endpoint, this is noted in the endpoint's reference and in any places that use the endpoint.

Assessment:

Steps are detailed rather than general

Specific steps for completing tasks are described in detail rather than glossed over in general ways — unless the task is something the audience is already familiar with. Overall, the documentation strikes a balance between providing enough detail for users to get the job done but not including too much detail that the reader is inundated with extraneous information.

Assessment:

Workflow maps connect topics in larger flows

Larger tasks that span multiple topics have a workflow map (or similar mechanism) that connects the topics together.

Assessment:

Visuals aid in areas of confusion

In places that are particularly confusing, appropriate visuals or additional examples have been added to reduce confusion.

Assessment:

Support cases are minimal for the product

Users can follow the documentation by themselves without relying on peer or customer support. A large number of support cases or forum threads containing questions that could have been answered in the documentation would indicate that the users aren't self-sufficient through the docs.

Assessment:

Technical level is appropriate to audience

The documentation's technical level is appropriate to the *target audience* but might not serve every possible audience (for example, total newbies to a programming language might struggle with documentation intended for developers already experienced in that language). Usually, general concepts in a programming language that you assume the audience knows are not explained in the documentation. Instead, your company's product, configuration, and usage are covered in the context of the programming language. One exception is when the implementation requires a non-standard process or workflow that merits some explanation.

Assessment:

Experiential learning paths are available

The documentation provides opportunities for experiential/opportunistic users to start learning immediately through code and trial/error, and for more systematic users to learn by reading concepts first.

Assessment:

Docs recommend the simplest path when multiple options exist

If there are multiple paths to a solution, the documentation focuses on the simplest path (though other possibilities might be briefly mentioned).

Assessment:

Docs call out relevant sections in a sample app

In cases where a sample app complements the documentation as a reference implementation, the documentation should refer to different aspects of the sample app.

Assessment:

COMPLETENESS

Reference docs follow industry standards

For native library APIs (or other API types), reference docs (auto-generated from source code comments) are available. This might mean Javadoc, Doxygen, OpenAPI outputs (such as Swagger UI), or other reference docs specific to the library. The reference docs should be populated and generally follow tagging standards.
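In Python, for example, following tagging standards means writing complete docstrings that tools like Sphinx or pydoc can extract. A minimal illustration (the `get_user` function is hypothetical):

```python
def get_user(user_id: int, include_inactive: bool = False) -> dict:
    """Retrieve a user record by ID.

    Args:
        user_id: Unique numeric identifier of the user. Must be positive.
        include_inactive: If True, deactivated accounts are also returned.

    Returns:
        A dict with the user's profile fields.

    Raises:
        ValueError: If user_id is not positive.
    """
    if user_id <= 0:
        raise ValueError("user_id must be positive")
    return {"id": user_id, "include_inactive": include_inactive}
```

Because the parameters, return value, and exceptions are all tagged, a doc generator can render this function's reference page with no extra authoring work.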

Assessment:

Parameter docs have complete info

Parameter documentation typically includes a description, data type, min/max values, sample values, and optional/required usage.
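As a concrete illustration, an OpenAPI-style parameter entry carries exactly these fields, which also makes completeness checkable. A sketch using a plain dict (the field names mirror OpenAPI/JSON Schema conventions; the `limit` parameter is hypothetical):

```python
# An OpenAPI-style description of a hypothetical query parameter.
limit_param = {
    "name": "limit",
    "description": "Maximum number of results to return per page.",
    "required": False,
    "schema": {
        "type": "integer",
        "minimum": 1,
        "maximum": 100,
        "example": 25,
    },
}

REQUIRED_DOC_FIELDS = {"name", "description", "required", "schema"}

def missing_doc_fields(param: dict) -> set:
    """Return the documentation fields a parameter entry is missing."""
    return REQUIRED_DOC_FIELDS - set(param)
```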

Assessment:

Reference content has consistent structure

Reference material such as APIs follow a common structure within each topic, mostly following a request-response type structure. Typical sections include descriptions, parameters, sample requests or usage, and sample responses.

Assessment:

Error messages are documented

Error messages that users can encounter are documented and discoverable through search. This supports the opportunistic/experiential user behavior.
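One practical pattern is to keep error documentation in a machine-readable catalog so the docs, the search index, and the API itself draw from one source. A sketch (the codes and messages are invented):

```python
# Hypothetical error catalog; in practice this might be generated
# from the same source the API uses to raise errors.
ERROR_CATALOG = {
    "AUTH_001": "API key is missing or malformed. Pass it in the request header.",
    "RATE_001": "Rate limit exceeded. Retry after the interval in the Retry-After header.",
    "REQ_400": "Request body failed validation. See the 'details' field for specifics.",
}

def explain_error(code: str) -> str:
    """Look up the documented explanation for an error code."""
    return ERROR_CATALOG.get(code, f"Undocumented error code: {code}")
```

A lookup that falls through to "Undocumented error code" also doubles as a coverage check for the catalog itself.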

Assessment:

Responses include both a sample and schema (REST APIs)

The response documentation for REST APIs provides both a sample response and schema. The response provides an example of what might be returned, while the schema defines all possible elements that might be returned and describes attributes such as data types and whether the elements are required or optional in the response.
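The difference between the two is easy to show in code. Below, a hand-rolled check confirms a sample response matches a minimal JSON-Schema-style definition (both the sample and the schema are invented; real projects would use a full schema validator):

```python
def conforms(sample: dict, schema: dict) -> bool:
    """Check a sample object against a minimal JSON-Schema-like dict.

    Supports only 'required' and per-property 'type', for illustration.
    """
    type_map = {"string": str, "integer": int, "boolean": bool}
    for field in schema.get("required", []):
        if field not in sample:
            return False
    for field, spec in schema.get("properties", {}).items():
        if field in sample and not isinstance(sample[field], type_map[spec["type"]]):
            return False
    return True

# A hypothetical documented schema and a matching sample response.
user_schema = {
    "required": ["id", "name"],
    "properties": {
        "id": {"type": "integer"},
        "name": {"type": "string"},
        "verified": {"type": "boolean"},  # optional in the response
    },
}
sample_response = {"id": 42, "name": "Ada", "verified": True}
```

The sample shows one possible payload; the schema is what tells the reader that `verified` may be absent and that `id` is always an integer.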

Assessment:

Troubleshooting section exists

The documentation has a troubleshooting section (either standalone or included within the section/topic it relates to) that provides information on how to solve common problems. The troubleshooting information indicates where things might go wrong and how to fix them. In other words, the documentation doesn't just describe the happy path.

Assessment:

Limitations/thresholds documented

If there are limitations and thresholds (for example, max number of API calls, max file sizes, max string lengths, max/min parameter values, max server load), these limitations are called out in the docs.
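Documented limits can also be pre-checked on the client side, which is a common pattern in sample code. A sketch (the 5 MB cap is an invented example of a documented threshold):

```python
MAX_PAYLOAD_BYTES = 5 * 1024 * 1024  # hypothetical documented limit: 5 MB

def check_payload_size(payload: bytes) -> None:
    """Fail fast, citing the documented limit, before calling the API."""
    if len(payload) > MAX_PAYLOAD_BYTES:
        raise ValueError(
            f"Payload is {len(payload)} bytes; the API accepts at most "
            f"{MAX_PAYLOAD_BYTES} bytes per request."
        )
```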

Assessment:

Locale limitations noted

If a feature is available only in certain contexts (locales, languages, platforms, roles, versions), that information is noted clearly in the feature. For example, an API that is only available for enterprise versions might have a label that says "Enterprise Version Only," or if only available for a particular platform, might say "Linux Only" or the equivalent.

Assessment:

Unhappy paths are documented

If there are pitfalls or other traps, gaps, and gotchas to avoid, these are noted in the documentation rather than hidden from the user. A section called Known Limitations often contains this information. The documentation doesn't lie or mislead the user but rather is transparent, honest, and helpful even if it means exposing the product's warts and revealing problems users will likely encounter.

Assessment:

READABILITY

Grammar isn't distracting

Sentences are grammatically correct and read well, without distracting the user or calling attention to the language.

Assessment:

Placeholder text in code is visually apparent

In code samples, placeholder text that needs to be customized is clearly indicated to the user. There's no confusion about what is literal code and what needs to be replaced, such as `APIKEY`.
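One common convention is ALL_CAPS placeholders that obviously aren't runnable values, paired with a comment telling the reader what to substitute. An illustration (the endpoint pattern is hypothetical):

```python
# Replace YOUR_API_KEY and YOUR_PROJECT_ID with your own values
# before running this sample.
API_KEY = "YOUR_API_KEY"
PROJECT_ID = "YOUR_PROJECT_ID"

ENDPOINT = f"https://api.example.com/v1/projects/{PROJECT_ID}/items"
```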

Assessment:

Sidebar nav has consumable organization at a glance

The sidebar navigation lets users take in a sense of the whole while also allowing users to expand more details as desired. The sidebar isn't a massive list of seemingly endless scrolling and expansion + expansion + expansion but rather divides up doc sets into logical groups, like chapters in a book. For systems with large numbers of topics, progressive disclosure techniques might be implemented across primary, secondary, and tertiary levels of information.

Assessment:

Sidebar nav highlights the current topic

As the user navigates each topic, the sidebar navigation makes it clear where the user is in the navigation (for example, the topic highlights clearly and the navigation sticks open at that level). Breadcrumbs might also help establish site context.

Assessment:

Context remains consistent when navigating

When a user clicks topics in the navigation, the UI doesn't shift context in jarring ways, such as unexpectedly taking the user to another doc set or changing stable navigation areas like the sidebar and header (which should be consistent for every page). This jarring navigation often happens when sidebar entries point to topics in other doc sites. If this is the case, the external links have an icon indicating the link takes them to another site.

Assessment:

Doc types have consistent names across product docs

Common topics have similar names across doc sets in the developer portal. For example, the Overview, Getting Started, Troubleshooting, Glossary, Release Notes, and Reference are named consistently to help users understand how to navigate the site. One doc set shouldn't call topics "Latest updates" and "First steps" while another uses "What's new" and "Quickstart."

Assessment:

Subheadings are descriptive

The subheadings are descriptive enough to allow users to get a gist of the topic by merely reading the subheadings. The subheadings also follow a parallel structure so they can be more easily scanned and read.

Assessment:

Topics start with summaries

Each topic usually has a short summary below the title that encapsulates the main purpose of the topic. This helps users get a sense of the topic at a glance.

Assessment:

Glossary exists

Unfamiliar words and jargon are defined in a glossary. At times, the glossary terms are linked to their glossary definitions.

Assessment:

Glossary entries match the actual terms used in the content

Glossary terms (as defined in the glossary) are actually used consistently across the documentation. For example, one doc set doesn't use a certain term while another uses a synonym of the term, with the admin UI using yet another term. If the glossary lists a term for a particular concept, the documentation content consistently uses that term.

Assessment:

Code samples have proper formatting and highlighting

The formatting in code samples follows standard white spacing, line breaks, and other syntax for the language. Code syntax highlighting appropriate to the language has been applied to increase the code's readability.

Assessment:

Responsive view presents content in a readable way

The content can be read on a mobile device (e.g., iPhone) in a usable way. For example, the responsive view allows users to navigate the sidebar links and view code samples.

Assessment:

Navigation mechanisms are consistent across docs

Navigation mechanisms work consistently across all docs in the developer portal. For example, in one set of docs, if top-level folders expand to show child items rather than opening to their own page, the same behavior is found in other docs.

Assessment:

Sentences and paragraphs are somewhat short

Sentences are somewhat short, paragraphs are relatively small, and subheadings are frequent. A readability score would place the content at the high-school level, not the college level.
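Average sentence length is a rough but easy proxy to measure, and full readability formulas (Flesch-Kincaid and similar) build on it. A minimal sketch, assuming plain prose input (the splitting on `.`, `!`, and `?` is a deliberate simplification):

```python
import re

def average_sentence_length(text: str) -> float:
    """Average words per sentence, splitting on ., !, and ?."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if not sentences:
        return 0.0
    words = sum(len(s.split()) for s in sentences)
    return words / len(sentences)
```

Averages creeping past roughly 20-25 words per sentence are a reasonable trigger to review a topic, though that threshold is a rule of thumb, not from this checklist.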

Assessment:

Language uses active voice

The language uses active voice (where warranted) with clear subjects and verbs positioned closely together.

Assessment: