Second-level documentation quality assessment

FINDABILITY

Topics have article-like length

Each topic consolidates most of the necessary information about its subject in a standalone, modular way. The topics don't fragment the subject into so many small topics that users have to bounce around and lose context. At the same time, entire guides aren't shoved into single topics that become interminably long. Topics should be roughly the length of a web article (for example, 500-3,000 words).

Assessment:

Score: 0-5

Content follows task-based organization

Content is organized primarily by task and from a user's perspective rather than solely by doc type. The exception is that reference content is often consolidated on its own in an auto-generated output.

Assessment:

Score: 0-5

Developer portal has IA flow

The information architecture (IA) in the developer portal organizes content within a larger context or story, even if it's just a list of links under various categories.

Assessment:

Score: 0-5

ACCURACY

Steps are accurate

The steps in tasks accurately lead to the results promised by the task, without missing any details. For example, if the instructions say to click a button, the button is named the same way in the interface. If the instructions say to use a class, the class name is spelled as it appears in the code library, and so on.

Assessment:

Score: 0-5

Instructions are consistent

Information isn't repeated in confusing, redundant, or inconsistent ways. For example, the documentation doesn't explain how to do a task one way in Topic A but then a different way in Topic B. If content is reused, the reuse is usually single-sourced to reduce inconsistency.

Assessment:

Score: 0-5

Screenshots are up to date

Screenshots and other visuals are up to date with the state of the interface (if there is one).

Assessment:

Score: 0-5

Deprecated features are noted

Features that are no longer supported (or that have been deprecated) are clearly noted as such in the documentation. Preferably, if a feature has been deprecated, the documentation provides a migration path to an alternative solution.
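
As one possible pattern, deprecation can be surfaced both in the docs and in the code itself. A minimal Python sketch (the function names and version numbers are hypothetical) that emits a deprecation warning and points to the replacement:

```python
import warnings

def create_session(api_key):
    """Deprecated: use ``create_client`` instead.

    This function is planned for removal in a future release. See the
    migration guide in the docs for the equivalent ``create_client`` call.
    """
    warnings.warn(
        "create_session() is deprecated; use create_client() instead.",
        DeprecationWarning,
        stacklevel=2,
    )
    return create_client(api_key)  # delegate to the supported API

def create_client(api_key):
    """Supported replacement for create_session()."""
    return {"api_key": api_key}
```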

Assessment:

Score: 0-5

Functional code samples

Code samples that users can copy and paste actually work as written.
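
One way to keep copy-pasteable samples working is to make them executable and test them. A minimal sketch using Python's built-in doctest module (the function itself is a hypothetical example):

```python
def greet(name):
    """Return a greeting, exactly as shown in the docs.

    >>> greet("Ada")
    'Hello, Ada!'
    """
    return f"Hello, {name}!"

if __name__ == "__main__":
    # Running this file verifies that the documented example still works.
    import doctest
    doctest.testmod(verbose=True)
```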

Assessment:

Score: 0-5

App code matches doc code

Code in sample apps matches the code described in the documentation. The sample app hasn't evolved in ways that no longer match the documentation.
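
A lightweight way to catch drift is to check that every fenced snippet in a doc still appears verbatim in the sample app. A rough sketch, assuming Markdown docs and a local checkout of the sample app (the paths are placeholders, and whitespace-only differences would need extra handling):

```python
import re
from pathlib import Path

DOC = Path("docs/getting-started.md")   # placeholder path
APP_DIR = Path("sample-app/")           # placeholder path

# Extract fenced code blocks from the Markdown source.
blocks = re.findall(r"`{3}[\w+-]*\n(.*?)`{3}", DOC.read_text(), flags=re.S)

# Concatenate the sample app's source so each snippet can be looked up.
app_source = "\n".join(p.read_text() for p in APP_DIR.rglob("*.py"))

for i, block in enumerate(blocks, start=1):
    status = "OK" if block.strip() in app_source else "DRIFTED"
    print(f"snippet {i}: {status}")
```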

Assessment:

Score: 0-5

RELEVANCE

Docs are published when features are released

Documentation is available at the same time the feature is released and made available to users, not after the fact.

Assessment:

Score: 0-5

GitHub is actively monitored

If the product has a code repository on GitHub (or similar), this repository is actively monitored when users log issues or submit pull requests. Users aren't simply ignored.
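
Responsiveness can be spot-checked with the GitHub REST API, for example by listing open issues that have never received a comment. A rough sketch using the requests package (the owner and repo names are placeholders):

```python
import requests

OWNER, REPO = "example-org", "example-sdk"   # placeholders
url = f"https://api.github.com/repos/{OWNER}/{REPO}/issues"

# Open issues, oldest first; pull requests also appear in this endpoint.
resp = requests.get(url, params={"state": "open", "sort": "created", "direction": "asc"})
resp.raise_for_status()

for issue in resp.json():
    if "pull_request" in issue:
        continue  # skip PRs, which the issues endpoint also returns
    if issue["comments"] == 0:
        print(f'#{issue["number"]} unanswered since {issue["created_at"]}: {issue["title"]}')
```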

Assessment:

Score: 0-5

CLARITY

Any role-based permissions are noted

If there are different roles and permission levels that correspond to varying capabilities, these roles are noted in all places they apply, such as within the individual API reference. For example, if only admin roles can use a certain endpoint, this is noted in the endpoint's reference and in any places that use the endpoint.
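
In an API or SDK reference, the required role can be stated directly on the operation. A hedged sketch of what that might look like in a Python SDK docstring (the class, method, and endpoint are hypothetical):

```python
class UsersClient:
    """Client for the /users endpoints of a hypothetical admin API."""

    def delete_user(self, user_id: str) -> None:
        """Delete a user account.

        Required role: admin. Calls made with a non-admin API key
        receive ``403 Forbidden``; see the "Roles and permissions"
        topic for how roles are assigned.
        """
        raise NotImplementedError("illustrative stub only")
```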

Assessment:

Score: 0-5

Steps are detailed rather than general

Specific steps for completing tasks are described in detail rather than glossed over in general ways, unless the task is something the audience is already familiar with. Overall, the documentation strikes a balance: enough detail for users to get the job done, but not so much that the reader is inundated with extraneous information.

Assessment:

Score: 0-5

Workflow maps connect topics in larger flows

Larger tasks that span multiple topics have a workflow map (or similar mechanism) that connects the topics together.

Assessment:

Score: 0-5

Visuals aid in areas of confusion

In places that are particularly confusing, appropriate visuals or additional examples have been added to reduce confusion.

Assessment:

Score: 0-5

Support cases are minimal for the product

Users can follow the documentation by themselves without relying on peers or customer support. A large number of support cases or forum threads asking questions that the documentation could have answered would indicate that users aren't self-sufficient with the docs.

Assessment:

Score: 0-5

Technical level is appropriate to audience

The documentation's technical level is appropriate to the *target audience* but might not serve every possible audience (for example, total newcomers to a programming language might struggle with documentation intended for developers already experienced in that language). Usually, general programming-language concepts that the audience is assumed to know are not explained in the documentation. Instead, the documentation covers your company's product, configuration, and usage in the context of that language. One exception is when the implementation requires a non-standard process or workflow that merits some explanation.

Assessment:

Score: 0-5

Experiential learning paths

The documentation provides opportunities for experiential/opportunistic users to start learning immediately through code and trial/error, and for more systematic users to learn by reading concepts first.

Assessment:

Score: 0-5

Docs favor simplest path

If there are multiple paths to a solution, the documentation focuses on the simplest path (though other possibilities might be briefly mentioned).

Assessment:

Score: 0-5

Docs call out relevant sections in sample app

In cases where a sample app complements the documentation as a reference implementation, the documentation refers to the relevant parts of the sample app.

Assessment:

Score: 0-5

COMPLETENESS

Error messages are helpful for troubleshooting

Error messages help users course-correct by providing helpful hints for addressing the error. This supports opportunistic/experiential user behavior.
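
Helpful messages say what went wrong and what to do next. A small Python illustration of the difference (the function and field names are invented for the example):

```python
def load_config(path, config):
    # Unhelpful: states the failure but gives the user nothing to act on.
    # raise ValueError("Invalid config")

    # Helpful: names the field, the file, and the corrective action.
    if "region" not in config:
        raise ValueError(
            f"Missing required field 'region' in {path}. "
            "Add a 'region' key (for example, 'us-east-1') and retry."
        )
    return config
```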

Assessment:

Score: 0-5

Unhappy path documented

If there are pitfalls, traps, gaps, or gotchas to avoid, these are noted in the documentation rather than hidden from the user. A section called Known Limitations often contains this information. The documentation doesn't lie to or mislead the user but rather is transparent, honest, and helpful, even if that means exposing the product's warts and revealing problems users will likely encounter.

Assessment:

Score: 0-5

READABILITY

Placeholder text in code is visually apparent

In code samples, placeholder text that the user needs to customize is clearly indicated. It's clear what is literal code and what needs to be replaced, such as a placeholder like `APIKEY`.
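
For example, a consistent ALL_CAPS convention, reinforced by a comment, makes placeholders stand out from literal code. A minimal sketch; the endpoint and parameter names are illustrative only:

```python
import requests

# Replace the ALL_CAPS placeholders with your own values before running.
API_KEY = "YOUR_API_KEY"          # from the developer dashboard
PROJECT_ID = "YOUR_PROJECT_ID"    # shown on the project settings page

response = requests.get(
    f"https://api.example.com/v1/projects/{PROJECT_ID}/items",
    headers={"Authorization": f"Bearer {API_KEY}"},
)
print(response.status_code)
```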

Assessment:

Score: 0-5

Navigation mechanisms consistent across docs

Navigation mechanisms work consistently across all docs in the developer portal. For example, in one set of docs, if top-level folders expand to show child items rather than opening to their own page, the same behavior is found in other docs.

Assessment:

Score: 0-5

Sentences and paragraphs are somewhat short

Sentences are somewhat short, paragraphs are relatively small, and subheadings are frequent. A readability score will place the content at the high-school level, not college.
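
Readability level can be checked programmatically. A rough sketch, assuming the third-party textstat package is installed and the doc path is a placeholder:

```python
import textstat

# Read one doc as plain text (placeholder path).
text = open("docs/getting-started.md").read()

# A Flesch-Kincaid grade of roughly 9-12 corresponds to high-school level.
grade = textstat.flesch_kincaid_grade(text)
print(f"Flesch-Kincaid grade level: {grade:.1f}")
```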

Assessment:

Score: 0-5

Glossary entries match terms used

Glossary terms are used consistently across the documentation. For example, one doc set doesn't use a particular term while another doc set uses a synonym for it and the admin UI uses yet another term.
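
Term drift can be caught with a simple check that flags synonyms the glossary has ruled out. A rough sketch; the term map and doc paths are placeholders:

```python
import re
from pathlib import Path

# Preferred glossary term -> synonyms that should not appear in the docs.
DISALLOWED = {
    "workspace": ["project space", "sandbox"],
    "API key": ["access token", "app secret"],
}

for doc in Path("docs/").rglob("*.md"):           # placeholder doc root
    text = doc.read_text()
    for preferred, synonyms in DISALLOWED.items():
        for synonym in synonyms:
            if re.search(rf"\b{re.escape(synonym)}\b", text, flags=re.I):
                print(f"{doc}: found '{synonym}'; glossary term is '{preferred}'")
```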

Assessment:

Score: 0-5

Language uses active voice

The language uses active voice (where warranted) with clear subjects and verbs positioned close together.

Assessment:

Score: 0-5

Grammar isn't distracting

Sentences are grammatically correct and read well, without distracting the user or calling attention to the language.

Assessment:

Score: 0-5