Each topic consolidates much of the necessary information about the subject in a standalone, modular way. The topics don't fragment the subject into so many small topics that the user has to bounce around and lose context. At the same time, entire guides aren't shoved into single topics, making them excessively long. A topic's length should be roughly that of a web article (for example, 500 to 3,000 words).
Assessment:
Score: 0-5
Content is organized primarily by task and from a user's perspective rather than solely by doc type. The exception is that reference content is often consolidated on its own in an auto-generated output.
Assessment:
Score: 0-5
The information architecture (IA) in the developer portal organizes content within a larger context or story, even if it's just a list of links under various categories.
Assessment:
Score: 0-5
The steps in the tasks accurately lead to the results promised by the task, without missing any details. For example, if the instructions say to click a button, the button is named the same way in the interface. If the instructions say to use a class, the class is spelled as it appears in the code library, and so on.
Assessment:
Score: 0-5
Information isn't repeated in confusing, redundant, or inconsistent ways. For example, the documentation doesn't explain how to do a task one way in Topic A but then a different way in Topic B. If content is re-used, the re-use is usually single-sourced to reduce inconsistency.
Assessment:
Score: 0-5
Screenshots and other visuals are up to date with the state of the interface (if there is one).
Assessment:
Score: 0-5
Features that are no longer supported (or which have been deprecated) are clearly noted as such in the documentation. Preferably, if a feature has been deprecated, a migration path to an alternative solution is provided.
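For illustration, a deprecation notice might be surfaced both in the docs and in the code itself, with a pointer to the replacement. Below is a minimal Python sketch; the function names (`fetch_report`, `get_report`) are hypothetical and not tied to any particular product.

```python
import warnings


def get_report(report_id):
    """Return the report with the given ID (replacement for fetch_report)."""
    # Placeholder implementation for the sketch.
    return {"id": report_id}


def fetch_report(report_id):
    """Return the report with the given ID.

    Deprecated: use get_report() instead.
    """
    # The runtime warning mirrors the note in the reference entry, so users
    # see the migration path whether they read the docs or run the code.
    warnings.warn(
        "fetch_report() is deprecated; migrate to get_report().",
        DeprecationWarning,
        stacklevel=2,
    )
    return get_report(report_id)
```

The same wording (the deprecation plus the migration target) would then appear in the feature's documentation so readers don't discover the change only at runtime.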
Assessment:
Score: 0-5
Code samples that can be copied and pasted actually work.
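For example, a sample that is genuinely copy-and-paste ready is self-contained rather than a fragment: imports are present, placeholders are marked, and the expected output is noted so the reader can confirm it worked. A minimal sketch in Python, assuming a hypothetical `https://api.example.com` endpoint:

```python
# A complete, runnable sample: imports included, the placeholder clearly
# marked, and the expected output noted so readers can verify the result.
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # replace with the key from your account dashboard
URL = "https://api.example.com/v1/status"  # hypothetical endpoint for this sketch

request = urllib.request.Request(URL, headers={"Authorization": f"Bearer {API_KEY}"})
with urllib.request.urlopen(request) as response:
    payload = json.load(response)

print(payload)  # e.g. {"status": "ok"}
```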
Assessment:
Score: 0-5
Code in sample apps matches the code described in the documentation. The sample app hasn't evolved in ways that no longer match the documentation.
Assessment:
Score: 0-5
Documentation is available at the same time the feature is released and made available to users, not after the fact.
Assessment:
Score: 0-5
If the product has a code repository on GitHub (or similar), this repository is actively monitored when users log issues or submit pull requests. Users aren't simply ignored.
Assessment:
Score: 0-5
If there are different roles and permission levels that correspond to varying capabilities, these roles are noted in all places they apply, such as within the individual API reference. For example, if only admin roles can use a certain endpoint, this is noted in the endpoint's reference and in any places that use the endpoint.
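As a sketch of what this can look like, the role requirement is stated in the endpoint's reference entry itself (here rendered as a Python docstring) rather than only on a separate permissions page. The endpoint and role names are hypothetical.

```python
def delete_user(user_id):
    """DELETE /v1/users/{user_id}

    Permanently removes a user account.

    Required role: admin. Calls made with a non-admin token return
    HTTP 403 Forbidden.
    """
    # Reference stub only; no implementation in this sketch.
    ...
```

Any task topic that calls this endpoint would repeat the same "Required role: admin" note so users aren't surprised mid-task.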
Assessment:
Score: 0-5
Specific steps for completing tasks are described in detail rather than glossed over in general ways, unless the task is something the audience is already familiar with. Overall, the documentation strikes a balance: it provides enough detail for users to get the job done without inundating the reader with extraneous information.
Assessment:
Score: 0-5
Larger tasks that span multiple topics have a workflow map (or similar mechanism) that connects the topics together.
Assessment:
Score: 0-5
In places that are particularly confusing, appropriate visuals or additional examples have been added to reduce confusion.
Assessment:
Score: 0-5
Users can follow the documentation by themselves without relying on peer or customer support. A large number of support cases or forum threads containing questions that could have been answered in the documentation would indicate that the users aren't self-sufficient through the docs.
Assessment:
Score: 0-5
The documentation's technical level is appropriate to the *target audience* but might not serve every possible audience (for example, total newbies to a programming language might struggle with documentation intended for developers already experienced in that language). Usually, general concepts in a programming language that you assume the audience knows are not explained in the documentation. Instead, your company's product, configuration, and usage are covered in the context of the programming language. One exception is when the implementation requires a non-standard process or workflow that merits some explanation.
Assessment:
Score: 0-5
The documentation provides opportunities for experiential/opportunistic users to start learning immediately through code and trial/error, and for more systematic users to learn by reading concepts first.
Assessment:
Score: 0-5
If there are multiple paths to a solution, the documentation focuses on the simplest path (though other possibilities might be briefly mentioned).
Assessment:
Score: 0-5
In cases where a sample app complements the documentation as a reference implementation, the documentation refers to the relevant parts of the sample app.
Assessment:
Score: 0-5
Error messages help users course correct by providing helpful hints for addressing the error. This supports the opportunistic/experiential user behavior.
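For example, an error message that supports course correction names the bad value, states the constraint, and points to the fix. A minimal Python sketch with a hypothetical `set_polling_interval` parameter:

```python
def set_polling_interval(seconds):
    # A helpful error: names the value, states the constraint, suggests the fix.
    if seconds < 5:
        raise ValueError(
            f"polling interval was {seconds}s, but the minimum is 5s. "
            "Increase the 'seconds' argument, or omit it to use the default (30s)."
        )
    # An unhelpful alternative would leave the user guessing:
    #   raise ValueError("invalid value")
    ...
```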
Assessment:
Score: 0-5
If there are pitfalls or other traps, gaps, and gotchas to avoid, these are noted in the documentation rather than hidden from the user. A section called Known Limitations often contains this information. The documentation doesn't lie or mislead the user but rather is transparent, honest, and helpful even if it means exposing the product's warts and revealing problems users will likely encounter.
Assessment:
Score: 0-5
In code samples, placeholder text that needs to be customized is clearly indicated to the user. It's clear which text is literal code and which is a placeholder that needs to be changed, such as `APIKEY`.
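For example, placeholders can be set apart by an obvious naming convention and a comment that says where to find the real value, leaving no doubt about what is literal code. A small Python sketch; the setting names are illustrative only:

```python
# Placeholders are uppercase YOUR_* values with a comment saying where to find
# the real value; everything else is literal code that can be left as-is.
API_KEY = "YOUR_API_KEY"        # replace: Settings > API keys in the console
PROJECT_ID = "YOUR_PROJECT_ID"  # replace: shown on the project's Overview page
BASE_URL = "https://api.example.com/v1"  # literal; leave as-is

print(f"{BASE_URL}/projects/{PROJECT_ID}?key={API_KEY}")
```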
Assessment:
Score: 0-5
Navigation mechanisms work consistently across all docs in the developer portal. For example, in one set of docs, if top-level folders expand to show child items rather than opening to their own page, the same behavior is found in other docs.
Assessment:
Score: 0-5
Sentences are somewhat short, paragraphs are relatively small, and subheadings are frequent. A readability score will place the content at the high-school level, not college.
Assessment:
Score: 0-5
Glossary terms are used consistently across the documentation. For example, one doc set doesn't refer to a concept with one term while another doc set uses a synonym for the same concept and the admin UI uses yet another term.
Assessment:
Score: 0-5
The language uses active voice (where warranted) with clear subjects and verbs positioned closely together.
Assessment:
Score: 0-5
Sentences are grammatically correct and read well, without distracting the user or calling attention to the language.
Assessment:
Score: 0-5