Using an API like a developer

1.0 Documenting REST APIs

Documenting REST APIs course

In this course on writing documentation for REST APIs, instead of just talking about abstract concepts, I contextualize REST APIs with a direct, hands-on approach.

You’ll learn about API documentation in the context of using some simple weather APIs to put a weather forecast on your site.

As you use the API, you’ll learn about endpoints, parameters, data types, authentication, cURL, JSON, the command line, Chrome’s Developer Console, JavaScript, and other details associated with REST APIs.

The idea is that rather than learning about these concepts independent of any context, you learn them by immersing yourself in a real scenario while using an API. This makes these tools more meaningful.

After you use the API as a developer, you’ll then shift perspectives and “become a technical writer” tasked with documenting a new endpoint that has been added to an API.

As a technical writer, you’ll tackle each element of a reference topic in REST API documentation:

Diving into these sections will give you a solid understanding of how to document REST APIs.

Finally, you’ll dive into different ways to publish REST API documentation, exploring tools and specifications such as API Blueprint, Swagger, RAML, readme.io, Jekyll, and more.

You’ll learn how to leverage templates, build interactive API consoles so users can try out requests and see responses, and learn different ways to host and publish your documentation.

In summary, this course is divided into three main topics:

Learn with a real example and context

Because the purpose of the course is to help you learn, there are many activities that require hands-on coding and other exercises. Along with the learning activities, there are also conceptual deep dives, but the focus is always on learning by doing.

Note that this course is intended to be sequential, walking you through a series of concepts and activities that build on each other. But you can skip around as you want if you’re already familiar with a concept.

No programming skills required

As for the needed technical background, you don’t need any programming experience or other prerequisites for this course, but it will help to know some basic HTML, CSS, and JavaScript.

If you do have some familiarity with programming concepts, you might speed through some of the sections and jump ahead to the topics you want to learn more about. This course assumes you’re a beginner, though.

Note that some of the code samples in this course use JavaScript. JavaScript may or may not be a language that you actually use when you document REST APIs, but most likely there will be some programming language or platform that becomes important to know.

JavaScript is one of the most useful and easy languages to become familiar with, so it works well in code samples for this introduction to REST API documentation.

What you’ll need

Here are a few things you’ll need in this course:

Stay updated

If you’re taking this course, you most likely want to learn more about APIs. I publish regular articles that talk about APIs and strategies for documenting them. You can stay updated about these posts by subscribing to my free newsletter at https://tinyletter.com/tomjohnson1492.

1.1 The market for REST API documentation

Course focus is on REST APIs

The API landscape is diverse. To get a taste of the various types of APIs out there, check out Sarah Maddox’s post about API types.

API Types

Despite the wide variety, most technical writers interact with just two main types of APIs:

With native library APIs, you deliver a library of classes or functions to users, and they incorporate this library into their projects. They can then call those classes or functions directly in their code, because the library has become part of their code.

With REST APIs, you don’t deliver a library of files to users. Instead, the users make requests for the resources on a web server, and the server returns responses containing the information.

REST APIs follow the same protocol as the web. When you open a browser and type a website URL (such as http://idratherbewriting.com), you’re actually making a GET request for a resource on a server. The server responds with the content and the browser makes the content visible.

This course focuses mostly on REST APIs because they’re more accessible to technical writers, as well as more popular and in demand. You don’t need to know programming to document REST APIs. And REST is becoming the most common type of API anyway.

Programmableweb API survey rates doc #1 factor in APIs

Before we get into the nuts and bolts of documenting REST APIs, let me provide some context about the popularity of the REST API documentation market in general.

In a 2013 survey by Programmableweb.com (which is a site that tracks and lists web APIs), about 250 developers were asked to rank the most important factors in an API. “Complete and accurate documentation” ranked as #1.

Programmableweb survey

John Musser, one of the founders of Programmableweb.com, has also emphasized the importance of documentation in some of his presentations. In “10 reasons why developers hate your API,” he says the number one reason developers hate your API is because “Your documentation sucks.”

Your API documentation sucks

Since 2005, REST APIs are taking off in a huge way

If REST APIs were an uncommon software product, it wouldn’t be that big of a deal. But actually, REST APIs are taking off in a huge way. Programmableweb.com has charted and tracked the prevalence of web APIs in its API directory.

Growth in web APIs

eBay’s API in 2005 was one of the first web APIs. Since then, there has been a tremendous growth in web APIs. Given the importance of clear and accurate API documentation, this presents a perfect market opportunity for technical writers. Technical writers can apply their communication skills to fill a gap in a market that is exploding.

Because REST APIs are a style not a standard, docs are essential

REST APIs are a bit different from the SOAP APIs that were popular some years ago. SOAP (Simple Object Access Protocol) APIs enforced a specific message format for sending requests and returning responses. The XML message format was very specific, and a WSDL file (Web Services Description Language) described how to interact with the API.

REST APIs, however, do not follow a standard message format. Instead, REST is an architectural style, a set of recommended practices for submitting requests and returning responses. In order to understand the request and response format for the REST API, you don’t consult the SOAP message specification or look at the WSDL file. Instead, you have to consult the REST API’s documentation.

Each REST API functions a bit differently. There isn’t a single way of doing things, and this flexibility and variety is what fuels the need for accurate and clear documentation. As long as there is variety with REST APIs, there will be a strong need for technical writers.

The web is becoming an interwoven mashup of APIs

Another reason why REST APIs are taking off is that the web itself is evolving into a conglomeration of APIs. Instead of massive, do-it-all systems, websites are pulling in the services they need through APIs. For example, rather than building your own search to power your website, you might use Swiftype instead and leverage its service through the Swiftype API.

Rather than building your own payment gateway, you might integrate Stripe and its API. Rather than building your own login system, you might use UserApp and its API. Rather than building your own e-commerce system, you might use Snipcart and its API. And so on.

Practically every service provides its information and tools through an API. Jekyll, a popular static site generator, doesn’t have all the components you need to run a site. There’s no newsletter integration, analytics, search, commenting system, forms, chat, e-commerce, surveys, or other systems. Instead, you integrate the services you need into your static site.

CloudCannon has put together a long list of services that you can integrate into your static site.

services for static websites

This cafeteria-style model is replacing the massive, Swiss-army-knife model that tries to do anything and everything. It’s better to rely on specialized companies to create powerful, robust tools (such as search) and leverage their services rather than trying to build all of these services yourself.

The way each site leverages these services is usually through a REST API of some kind. In sum, the web is becoming an interwoven mashup of many different services, with APIs interacting with each other.

Job market is hot for API technical writers

Many employers are looking to hire technical writers who can create not only complete and accurate documentation, but who can also create stylish outputs for their documentation. Here’s a job posting from a recruiter looking for someone who can emulate Dropbox’s documentation:

As you can see, the client wants to find “someone who’ll emulate Dropbox’s documentation.”

Why does the look and feel of the documentation matter so much? With API documentation, there is no GUI for users to browse. Instead, the documentation is the interface. Employers know this, so they want to make sure they have the right resources to make their API docs stand out as much as possible.

Here’s what the Dropbox API looks like:

Dropbox API

It’s not a sophisticated design. But its simplicity and brevity are among its strengths. When you consider that the API documentation is more or less the product interface, building a sharp, modern-looking doc site is paramount for credibility and traction in the market.

API doc is a new world for most tech writers

API documentation is mostly a new world to technical writers. Many of the components may seem foreign. For example, all of these things differ from traditional documentation:

When you try to navigate the world of API documentation, the world probably looks as unfamiliar as Mars.

API doc world is like Mars

Learning materials about API doc are scarce

Realizing there was a need for more information, in 2014 I guest-edited a special issue of Intercom dedicated to API documentation.

STC Intercom issue focused on API documentation

This issue was a good start, but many technical writers have asked for more training. In our Silicon Valley STC chapter, we’ve held a couple of workshops dedicated to APIs. Both workshops sold out quickly (with 60 participants in the first, and 100 participants in the second). API documentation is particularly hot in the San Francisco Bay area, where many companies have REST APIs.

In 2014, the STC Summit in Columbus held its first ever API documentation track.

Technical writers are hungry to learn more about APIs. To help address this ongoing need, I’ve created this material to teach technical writers about API documentation.

1.2 What is a REST API?

An API is an interface between systems

In general, an API (or Application Programming Interface) provides an interface between two systems. It’s like a cog that allows two systems to interact with each other.

Spinning gears. By Brent 2.0. Flickr.

In an API workshop by Jim Bisso, an experienced API technical writer in the Silicon Valley area, Bisso said to consider your computer’s calculator. When you press buttons, functions underneath are interacting with other components to get information. Once the information is returned, the calculator presents the data back in the GUI.

calculator

APIs often work in similar ways. But instead of interacting within the same system, web APIs call remote services to get their information.

Developers use API calls behind the scenes to pull information into their apps. A button on a GUI may be internally wired to make calls to an external service. For example, the embedded Twitter or Facebook buttons that interact with social networks, and the embedded YouTube videos that pull video in from youtube.com, are powered by web APIs underneath.

APIs that use HTTP protocol are “web services”

In general, a web service is a web-based application that provides information in a format consumable by other computers. Web services include various types of APIs, including both REST and SOAP APIs. Web services are basically request and response interactions between clients and servers (a computer makes the request, and the web service provides the response).

All APIs that use the HTTP protocol as the transport for requests and responses can be classified as “web services.”

Language agnostic

With web services, the client making the request and the API server providing the response can use any programming language or platform — it doesn’t matter because the message request and response are made through a common HTTP web protocol.

This is part of the beauty of web services: they are language agnostic and therefore interoperable across different platforms and systems.

SOAP APIs are the predecessor to REST APIs

Before REST became the most popular type of web service, SOAP (Simple Object Access Protocol) was much more common. To understand REST a little better, it helps to have some context with SOAP. This way you can see what makes REST different.

SOAP used standardized protocols and WSDL files

SOAP is a standardized protocol that requires XML as the message format for requests and responses. Because SOAP is standardized, the message format is usually defined through something called a WSDL file (Web Services Description Language).

The WSDL file defines the allowed elements and attributes in the message exchanges. The WSDL file is machine readable and used by the servers interacting with each other to facilitate the communication.

SOAP messages are enclosed in an “envelope” that includes a header and body, using a specific XML schema and namespace. For an example of a SOAP request and response format, see SOAP vs REST Challenges.
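
To make the envelope structure concrete, here’s a bare-bones sketch of a SOAP message envelope. The namespace is the standard SOAP 1.2 envelope namespace; the actual payload (which a real WSDL file would define) is omitted:

```xml
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Header>
    <!-- optional metadata, such as authentication details -->
  </soap:Header>
  <soap:Body>
    <!-- the XML payload of the request or response goes here -->
  </soap:Body>
</soap:Envelope>
```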

Problems with SOAP and XML: Too heavy, slow

The main problem with SOAP is that the XML message format is too verbose and heavy. It is particularly problematic in mobile scenarios, where file size and bandwidth are critical. The verbose message format slows processing times, which makes SOAP interactions slower.

SOAP is still used in enterprise application scenarios with server-to-server communication, but in the past 5 years, SOAP has largely been replaced by REST, especially for APIs on the open web. You can browse some SOAP APIs at http://xmethods.com/ve2/index.po.

REST is a style, not a standard

Like SOAP, REST (REpresentational State Transfer) uses HTTP as the transport protocol for the message requests and responses. However, unlike SOAP, REST is an architectural style, not a standard protocol. This is why REST APIs are sometimes called RESTful APIs — REST is a general style that the API follows.

A RESTful API might not follow all of the official characteristics of REST as outlined by Dr. Roy Fielding, who first described the model. Hence these APIs are “RESTful” or “REST-like.”

Requests and Responses

Here’s the general model of a REST API:

REST API

As you can see, there’s a request and a response between a client and the API server. The client and server can be based in any language, but HTTP is the protocol used to transport the message. This request-and-response pattern is fundamentally how REST APIs work.

Any message format can be used with REST

Because REST is an architectural style, you aren’t limited to XML as the message format. REST APIs can use any message format the API developers want, including XML, JSON, Atom, RSS, CSV, HTML, and more.

JSON most common format

Despite the variety of message format options, most REST APIs use JSON (JavaScript Object Notation) as the default message format. This is because JSON provides a lightweight, simple, and more flexible message format that increases the speed of communication.

The lightweight nature of JSON also suits mobile processing scenarios, and JSON is easy to parse on the web using JavaScript. In contrast, with XML you typically have to use XSLT to parse and process the content.

REST focuses on resources accessed through URLs

Another unique aspect of REST is that REST APIs focus on resources (that is, things rather than actions, which is the SOAP focus) and ways to access the resources. You access the resources through URLs (Uniform Resource Locators). The URLs are accompanied by a method that specifies how you want to interact with the resource.

Common methods include GET (read), POST (create), PUT (update), and DELETE (remove). The URL usually includes query parameters that specify more details about the representation of the resource you want to see. For example, you might specify (in a query parameter) that you want to limit the display to 5 instances of the resource.

Sample URLs for a REST API

Here’s what a sample REST URI or endpoint might look like:

http://apiserver.com/homes?limit=5&format=json

This endpoint would get the “homes” resource and limit the result to 5 homes. It would return the response in JSON format.

You can have multiple endpoints that refer to the same resource. Here’s one variation:

http://apiserver.com/homes/1234

This might be an endpoint that retrieves a home resource with an ID of 1234. What is transferred back from the server to the client is the “representation” of the resource. The resource may have many different representations (showing all homes, homes that match a certain criteria, homes in a specific format, and so on), but here we want to see home 1234.

The web itself follows REST

The terminology of “URIs” and “GET requests” and “message responses” transported over “HTTP protocol” might seem unfamiliar, but really this is just the official REST terminology to describe what’s happening. If you’ve used the web, you’re already familiar with how REST APIs work, because the web itself essentially follows a RESTful style.

If you open a browser and go to http://idratherbewriting.com, you’re really using HTTP protocol (http://) to submit a GET request to the resource available on a web server. The response from the server sends the content at this resource back to you using HTTP. Your browser is just a client that makes the message response look pretty.

Web as REST API

You can see this response in cURL if you open a Terminal prompt and type curl http://idratherbewriting.com.

Because the web itself is an example of RESTful style architecture, the way REST APIs work will likely become second nature to you.

REST APIs are stateless and cacheable

Some additional features of REST APIs are that they are stateless and cacheable. Stateless means each request is processed independently: the API doesn’t remember your last request and take that into account when providing the new response. In other words, there aren’t any previously remembered states that the API takes into account with each request.

The responses can also be cached to increase performance. If the browser’s cache already contains the information asked for in the request, the browser can simply return the information from the cache instead of getting the resource from the server again.

Caching with REST APIs is similar to caching on web pages. The browser uses the last-modified-time value in the HTTP headers to determine if it needs to get the resource again. If the content hasn’t been modified since the last time it was retrieved, the cached copy can be used instead. This increases the speed of the response.

REST APIs have other characteristics, which you can dive more deeply into on REST API Tutorial. One of these characteristics includes links in the responses to allow users to page through to additional items. This feature is called HATEOAS, or Hypermedia As The Engine of Application State.

Understanding REST at a higher, more theoretical level isn’t my goal here, nor is this knowledge necessary to document a REST API. However, there are a number of more technical books, courses, and websites that explore REST API concepts, constraints, and architecture in more depth that you can consult to dive deeper here. For example, check out Foundations of Programming: Web Services by David Gassner on lynda.com.

REST APIs don’t use WSDL files, but some specs exist

An important aspect of REST APIs, especially in terms of documentation, is that they don’t use a WSDL file to describe the elements and parameters allowed in the requests and responses.

Although there is a WADL (Web Application Description Language) file format that can be used to describe REST APIs, WADL files are rarely used because they don’t adequately describe all the resources, parameters, message formats, and other attributes of a REST API. (Remember that REST is an architectural style, not a standardized protocol.)

In order to understand how to interact with a REST API, you have to read the documentation for the API. (Hooray! This makes the technical writer’s role extremely important with REST APIs.)

Some formal specifications — for example, Swagger (also called OpenAPI) and RAML — have been developed to describe REST APIs. When you describe your API using the Swagger or RAML specification, tools that can read those specifications (like Swagger UI or the RAML API Console) will generate an interactive documentation output.

The Swagger or RAML output can take the place of the WSDL file that was more common with SOAP. These spec-driven outputs are usually interactive (featuring API Consoles or API Explorers) and allow you to try out REST calls and see responses directly in the documentation.
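
For a sense of what a spec-driven description looks like, here’s a minimal sketch in the Swagger 2.0 format. The API title, path, and parameter details are invented for illustration:

```yaml
swagger: "2.0"
info:
  title: Simple Weather API   # hypothetical API name
  version: "1.0"
paths:
  /aqi:
    get:
      summary: Returns the air quality index for a location.
      parameters:
        - name: lat
          in: query
          type: string
          description: Latitude of the location.
      responses:
        200:
          description: The AQI value as plain text.
```

Tools like Swagger UI read a file like this and generate an interactive console where users can try the request.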

But don’t expect Swagger UI or RAML API Console documentation outputs to include all the details users would need to work with your API (for example, how to include authorization keys, details about workflows and interdependencies between endpoints, and so on). The Swagger or RAML output usually contains reference documentation only, which typically only accounts for part of the total needed documentation.

Overall, REST APIs are more varied and flexible than SOAP, and you almost always need to read the documentation in order to understand how to interact with a REST API. As you explore REST APIs, you will find that they differ greatly from one to another (especially the format and display of their documentation sites), but they all share the common patterns outlined here. At the core of any REST API is a request and response.

1.3 Scenario for using a weather API

Our course scenario: Weather forecast API

Enough with the abstract concepts. Let’s start using an actual API to get more familiar with how they work.

In the upcoming sections, you’ll use two different APIs in the context of a specific use case: retrieving a weather forecast. By first playing the role of a developer using an API, you’ll gain a greater understanding of how your audience will use APIs, the type of information they’ll need, and what they might do with the information.

Let’s say that you’re a web developer and you want to add a weather forecast feature to your site. Your site is for bicyclists. You want to allow users who come to your site to see what the wind conditions are for biking. You want something like this:

Wind meter conditions for website

You don’t have your own meteorological service, so you’re going to need to make some calls out to a weather service to get this information. Then you will present that information to users.

Get an idea of the end goal

To give you an idea of the end goal, here’s a sample. It’s not necessarily styled the same as the mockup, but it answers the question, “How windy is it?”

The embedded demo on the course site includes a button you can click to see wind details for Santa Clara: the wind chill, wind speed, and wind direction.

You can view the same code in a separate window here: Mashape API example

When you request this data, an API goes out to a weather service, retrieves the information, and displays it to you.

Of course, the above example is extremely simple. You could also build an attractive interface like this:

Sample weather interface

Find the Weather API by fyhao on Mashape

The Mashape Marketplace is a directory where publishers can list their APIs and where consumers can discover and use them. Mashape manages the interaction between publishers and consumers by providing an interactive marketplace for APIs.

The APIs on Mashape tend to be rather simple compared to some other APIs, but this simplicity will work well to illustrate the various aspects of an API without getting too mired in other details.

Explore APIs at Mashape

You’re a consumer of an API, but which one do you need to pull in weather forecasts?

Explore the APIs available on Mashape and find the weather forecast API:

  1. Go to Mashape Marketplace and click Explore APIs.
  2. Try to find an API that will allow you to retrieve the weather forecast.

    As you explore the various APIs, get a sense of the variety and services that APIs provide. These APIs aren’t applications themselves. They provide developers with ways to pipe information into their applications. In other words, the APIs will provide the data plumbing for the applications that developers build.

  3. Search for an API called “Weather,” by fyhao at https://market.mashape.com/fyhao/weather-13. Although there are many weather APIs, this one seems to have a lot of reviews and is free.

    Weather API on Mashape

Find the Aeris Weather API

Now let’s look at another weather API (this one not on Mashape). In contrast to the simple API on Mashape, the Aeris Weather API is much more robust and extensive. You can see that the Aeris Weather API is a professional grade, information-rich API that could empower an entire news service.

  1. Go to www.aerisweather.com.
  2. Click Developer on the top navigation. Then under Aeris Weather API, click Documentation.
  3. Click Reference in the left sidebar, and then click Endpoints.

    Aeris Endpoints

  4. In the list of endpoints, click forecasts.
  5. Browse the type of information that is available through this API.

Here’s the Aeris weather forecast API in action making the same call as I showed earlier with Mashape: http://idratherbewriting.com/files/restapicourse/wind-aeris.html.

As you can see, both APIs contain this same information about wind, but the units differ.

Answer some questions about the APIs

Spend a little time exploring the features and information that these weather APIs provide. Try to answer these basic questions:

These are common questions developers want to know about an API.

Can you see how APIs can differ significantly? As mentioned previously, REST APIs are an architectural style, not a specific standard that everyone follows. You really have to read the documentation to understand how to use them.

1.4 Getting authorization keys

About authorization for API calls

Almost every API has a method in place to authenticate requests. You usually have to provide an API key in requests to get a response. Authorization allows API publishers to do the following:

In order to run the code samples in this course, you will need to use your own API keys, since these keys are usually treated like personal passwords and not given out or published openly on a web page.

Get the Mashape authorization keys

To get the authorization keys to use the Mashape API, you’ll need to sign up for a Mashape account.

  1. On market.mashape.com, click Sign Up in the upper-right corner and create an account.
  2. Click Applications on the top navigation bar, and then select Default Application.
  3. In the upper-right corner, click Get the Keys.

    Mashape -- getting the keys

  4. When the Environment Keys dialog appears, click Copy to copy the keys. (Choose the Testing keys, since this type allows you to make unlimited requests.)

    Mashape keys

  5. Open a text editor and paste the key so that you can easily access it later when you construct a call.

Get the Aeris Weather API secret and ID

The Aeris Weather API requires both a secret and ID to make requests.

  1. Go to http://www.aerisweather.com and click Sign Up in the upper-right corner.
  2. Select API Developer, if it’s not already selected. Then click Sign Up. (Note that the free version limits the number of requests per day and per minute you can make.)
  3. Click Checkout. You’re prompted to create an Aeris account. (Don’t worry — you won’t have to enter any credit card details.)
  4. Complete the fields and create an Aeris account. When finished creating the account, you’ll see a message that says “Your subscription has been successfully processed.”
  5. Once you sign up for an account, click Account in the upper-right corner.

    Aeris account

  6. Click Apps (on the second navigation row, to the right of “Usage”), and then click New Application.
  7. In the dialog box, enter the following:
  8. Click Save App.
  9. Refresh the web page.

Once your app registers, you should see an ID and secret for the app. Copy this information into a text file, since you’ll need it to make requests.

Text editor tips

When you’re working with code, you use a text editor (to work in plain text) instead of a rich text editor (which would provide a WYSIWYG interface). Developers use a variety of text editors. Here are a few choices:

These editors provide features that let you better manage the text. Choose the one you want. (Personally, I use Sublime Text when I’m working with code samples, and WebStorm when I’m working with Jekyll projects.) Avoid using TextEdit since it adds some formatting behind the scenes that can corrupt your content.

1.5 Submit requests through Postman

GUI clients make REST calls a little easier

When you’re testing endpoints with different parameters, you can use one of the many GUI REST clients available. With a GUI REST client, you can:

Common GUI clients

Some popular GUI clients include the following:

Of the various GUI clients available, I think Postman is the best option, since it allows you to save both calls and responses, is free, works on both Mac and PC, and is easy to configure.

Learn by doing, then deep dive into concepts

A lot of times abstract concepts don’t make sense until you can contextualize them with some kind of action. In this course, I’m following more of an act-first-then-understand methodology. After you do an activity, we’ll explore the concepts in more depth. So if it seems like I’m glossing over some concepts now, like what a GET method is or what a resource URL is, hang in there. When we deep dive into these points later, things will be a lot clearer.

Make a request in Postman

  1. If you haven’t already done so, download and install the Postman app at www.getpostman.com. If you’re on a Mac, choose the Mac app. If you’re on Windows, choose the Chrome app. (Note that you must also have Chrome to run the Chrome app.)
  2. You’ll make a REST call for the first endpoint (aqi) in the Mashape Weather API. Select GET for the method.
  3. Insert the endpoint into the main box (next to the method, which is GET by default): https://simple-weather.p.mashape.com/aqi
  4. Click the Params button (to the right of the box where you inserted the endpoint) and insert lat and lng parameters with specific values (other than 1).

    Finding latitude and longitude on Google Maps

    When you add these lat and lng parameters, they’re dynamically added as a query string to the endpoint URI. The query string is the part of the URL that follows the ?. A request URL has only one query string (one ?). If you have additional parameters in the query string, they’re joined with an ampersand (&).

  5. Click the Headers tab (below the GET button) and insert the key value pairs: Accept: text/plain and X-Mashape-Key: APIKEY. (Swap in your own API key in place of APIKEY.)

    Your inputs should look like this:

    Postman request

  6. Click Send.

    The response appears, such as 52. In this case, the response is text only. You can switch the format to HTML, JSON, XML, or other formats, but since this response is text only, you won’t see any difference. Usually the responses are more detailed JSON, which allows you to select a specific part of the response to work with.

Save the request

  1. In Postman, click the Save button (next to Send).
  2. In the Save Request dialog box, create a new collection (for example, weather) by typing the collection name in the “Or create new collection” box.
  3. In the Request Name box at the top of the dialog box, type a friendly name for the request, such as “AQI endpoint”.
  4. Click Save.

Saved endpoints appear in the left side pane under Collections.

Make requests for the other endpoints

Enter details into Postman for the other two endpoints for the Mashape Weather API:

When you save these other endpoints, click the arrow next to Save and choose Save As. Then choose your collection and request name. (Otherwise you’ll overwrite the settings of the existing request.)

Save as

(Alternatively, click the + button on the new tab and create new tabs each time.)

View the format of the weatherdata response in JSON

While the first two endpoint responses include text only, the weatherdata endpoint response is in JSON.

In Postman, make a request to the weatherdata endpoint. Then toggle the options to Pretty and JSON.

JSON response

The Pretty JSON view expands the JSON response into more readable code.
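Outside of Postman, you can do the same prettifying in a couple of lines of JavaScript. This is a generic sketch (the sample string is a small fragment modeled on the weatherdata response, not a live API call):

```javascript
// A minified JSON string, like a raw response from cURL
const minified = '{"wind":{"chill":"16","direction":"0","speed":"0"}}';

// Parse it into an object, then re-serialize with two-space indentation
const pretty = JSON.stringify(JSON.parse(minified), null, 2);

console.log(pretty);
```

The third argument to JSON.stringify controls the indentation, which is essentially what the Pretty toggle does for you.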

For the sake of variety with GUI clients, here’s the same call made in Paw:

Paw

Like Postman, Paw also allows you to easily see the request headers, response headers, URL parameters, and other data. However, Paw is specific to Mac only.

Enter several requests for the Aeris API into Postman

Now let’s switch APIs a bit and see some weather information from the Aeris API. Constructing the endpoints for the Aeris Weather API is a bit more complicated since there are many different queries, filters, and other parameters you can use to configure the endpoint.

Here are a few requests to configure for Aeris. You can just paste the requests directly into the URL request box in Postman and the parameters will auto-populate in the parameter fields.

Note that the Aeris API doesn’t use a Header field to pass the API keys — the key and secret are passed directly in the request URL as part of the query string.

Get the weather forecast for your area:

http://api.aerisapi.com/observations/Santa+Clara,CA?client_id=CLIENTID&client_secret=CLIENTSECRET&limit=1

In the response, find the wind speed and compare it with the wind from the Mashape API. Are they the same?

Get the weather from a city on the equator — Chimborazo, Ecuador:

http://api.aerisapi.com/observations/Chimborazo,Ecuador?client_id=CLIENTID&client_secret=CLIENTSECRET&limit=1

Find out if all the country music in Knoxville, Tennessee is giving people migraines:

http://api.aerisapi.com/indices/migraine/Knoxville,TN?client_id=CLIENTID&client_secret=CLIENTSECRET

You’re thinking of moving to Arizona, but you want to find a place that’s cool. Use the normals endpoint:

http://api.aerisapi.com/normals/flagstaff,az?client_id=CLIENTID&client_secret=CLIENTSECRET&limit=5&filter=hassnow

By looking at these two different weather APIs, you can see some differences in the way the information is called and returned. However, fundamentally both APIs have endpoints that you can configure with parameters. When you make requests with the endpoints, you get responses that contain information, often in JSON format. This is the core of how REST APIs work — requests and responses.

1.6 Installing cURL

It’s best to install cURL now so that you aren’t bogged down with technical issues later when you’re trying to focus on the course material. cURL is usually available by default on Macs but requires some installation on Windows.

Installing cURL

Follow these instructions for installing cURL:

Mac

If you have a Mac, by default, cURL is probably already installed. To check:

  1. Open Terminal (press Cmd + space bar to open Spotlight, and then type “Terminal”).
  2. In Terminal type curl -V. The response should look something like this:

    curl 7.37.1 (x86_64-apple-darwin14.0) libcurl/7.37.1 SecureTransport zlib/1.2.5
    Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtsp smtp smtps telnet tftp
    Features: AsynchDNS GSS-Negotiate IPv6 Largefile NTLM NTLM_WB SSL libz
    

If you don’t see this, you need to download and install cURL.

To make a test API call, submit the following:

curl --get -k --include "https://simple-weather.p.mashape.com/aqi?lat=1.3319164&lng=103.7231246" -H "X-Mashape-Key: EF3g83pKnzmshgoksF83V6JB6QyTp1cGrrdjsnczTkkYgYrp8p" -H "Accept: text/plain"

You should get back a two-digit number in the response. (This is the “air quality index” for the weather.)

Windows

Installing cURL on Windows involves a few more steps. First, determine whether you have 32-bit or 64-bit Windows by right-clicking Computer and selecting Properties.

Then follow the instructions in this Confused by Code page.

Once installed, test your version of cURL by doing the following:

  1. Open a command prompt by clicking the Start button and typing cmd.
  2. Type curl -V.

The response should look something like this (the exact details vary by build):

curl 7.37.1 (x86_64-apple-darwin14.0) libcurl/7.37.1 SecureTransport zlib/1.2.5
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtsp smtp smtps telnet tftp
Features: AsynchDNS GSS-Negotiate IPv6 Largefile NTLM NTLM_WB SSL libz

To make a test API call, submit the following:

curl --get -k --include "https://simple-weather.p.mashape.com/aqi?lat=1.3319164&lng=103.7231246" -H "X-Mashape-Key: EF3g83pKnzmshgoksF83V6JB6QyTp1cGrrdjsnczTkkYgYrp8p" -H "Accept: text/plain"

You should get back a two-digit number in the response. (This is the “air quality index” for the weather.)

Notes about using cURL with Windows

1.7 Make a cURL call

About cURL

While Postman is convenient, it’s hard to represent in your documentation just how to make the calls. Additionally, different users probably use different GUI clients, or none at all (preferring the command line instead).

Instead of describing how to make REST calls using a GUI client like Postman, the most conventional method for documenting request syntax is to explain how to make the calls using cURL.

cURL is a command-line utility that lets you execute HTTP requests with different parameters and methods. In other words, instead of going to web resources in a browser’s address bar, you can use the command line to get these same resources, retrieved as text.

In this section, you’ll use cURL to make the same requests you made previously with Postman.

Prepare the weather request in cURL format

  1. Go back into the Weather API on Mashape.
  2. Copy the cURL request example for the first endpoint (aqi) into your text editor:

    curl --get --include 'https://simple-weather.p.mashape.com/aqi?lat=1.0&lng=1.0' \
    -H 'X-Mashape-Key: EF3g83pKnzmshgoksF83V6JB6QyTp1cGrrdjsnczTkkYgYrp8p' \
    -H 'Accept: text/plain'

  3. If you’re on Windows, change the single quotes to double quotes, remove the backslash (\) line breaks, and add -k after --get. (The -k flag turns off cURL’s SSL certificate verification.)

    The request should now look like this:

    curl --get -k --include "https://simple-weather.p.mashape.com/aqi?lat=1.0&lng=1.0" -H "X-Mashape-Key: APIKEY" -H "Accept: text/plain"
    
  4. Swap in your own API key in place of APIKEY.

Make the request in cURL (Mac)

  1. Open Terminal (press Cmd + space bar to open Spotlight, and then type “Terminal”).

  2. Paste the request you have in your text editor into the command line.

    My request for the Mashape Weather API looks like this:

    curl --get --include 'https://simple-weather.p.mashape.com/aqi?lat=1.3319164&lng=103.7231246' -H 'X-Mashape-Key: APIKEY' -H 'Accept: text/plain'

    For the Aeris Weather observations endpoint, it looks like this:

    curl --get --include "http://api.aerisapi.com/observations/santa%20clara,ca?client_id=CLIENTID&client_secret=CLIENTSECRET" -H "Accept: application/json"

  3. Press your Enter key.

You should see something like this as a response:

cURL call

The response is just a single number: the air quality index for the location specified. (This response is just text, but most of the time responses from REST APIs are in JSON.)

Make the request in cURL (Windows 7)

  1. Copy the cURL call from your text editor.
  2. Go to Start and type cmd to open up the command line. (If you’re on Windows 8, see these instructions for accessing the command line.)
  3. Right-click and then select Paste to insert the call. My call for the Mashape API looks like this:

curl --get -k --include "https://simple-weather.p.mashape.com/aqi?lat=1.3319164&lng=103.7231246" -H "X-Mashape-Key: APIKEY" -H "Accept: text/plain"

For the Aeris endpoint, it looks like this:

curl --get --include "http://api.aerisapi.com/observations/santa%20clara,ca?client_id=CLIENTID&client_secret=CLIENTSECRET" -H "Accept: application/json"

The response from Mashape looks like this:

Command line Windows

Single and Double Quotes with Windows cURL requests

Note that if you’re using Windows to submit a lot of cURL requests, you’ll eventually run into issues with single versus double quotes. Some API endpoints (usually for POST methods) require you to submit content in the body of the message request, and that body content is formatted in JSON. Since JSON itself uses double quotes, and you can’t nest double quotes inside the double quotes enclosing the request body on Windows, submitting the content inline becomes a problem.

Here’s the workaround. If you have to submit body content in JSON, you can store the content in a JSON file. Then you reference the file with an @ symbol, like this:

curl -H "Content-Type: application/json" -H "Authorization: 123" -X POST -d @mypostbody.json http://endpointurl.com/example

Here cURL will look in the existing directory for the mypostbody.json file, but you can also reference the complete path to the JSON file.

Make cURL requests for each of the weather endpoints

Make a cURL request for each of the weather endpoints for both the Mashape weather endpoints and the Aeris Weather endpoints, similar to how you made the requests in Postman.

1.8 Understand cURL more

cURL is a cross-platform way to show requests and responses

Before moving on, let’s pause a bit and learn more about cURL.

One of the advantages of REST APIs is that you can use almost any programming language to call the endpoint. The endpoint is simply a resource located on a web server at a specific path.

Each programming language has a different way of making web calls. Rather than exhausting your energies trying to show how to make web calls in Java, Python, C++, JavaScript, Ruby, and so on, you can just show the call using cURL.

cURL provides a generic, language agnostic way to demonstrate HTTP requests and responses. Users can see the format of the request, including any headers and other parameters. Your users can translate this into the specific format for the language they’re using.
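For instance, here’s roughly how a JavaScript developer might translate the aqi cURL call into code. This is an illustrative sketch using the Fetch API (the function names are my own, and APIKEY is a placeholder you’d swap for a real key):

```javascript
// Build the request URL for the aqi endpoint (same endpoint as the cURL example)
function buildAqiUrl(lat, lng) {
  return `https://simple-weather.p.mashape.com/aqi?lat=${lat}&lng=${lng}`;
}

// The same call as the cURL example, expressed with the Fetch API
function getAqi(lat, lng) {
  return fetch(buildAqiUrl(lat, lng), {
    method: "GET", // same as curl --get
    headers: {
      "X-Mashape-Key": "APIKEY", // curl -H 'X-Mashape-Key: ...'
      "Accept": "text/plain"     // curl -H 'Accept: text/plain'
    }
  }).then((response) => response.text());
}

// Usage: getAqi(1.3319164, 103.7231246).then((aqi) => console.log(aqi));
```

The headers and parameters map one-to-one from the cURL request, which is exactly why cURL works so well as a neutral reference format.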

REST APIs follow the same model of the web

One reason REST APIs are so familiar is because REST follows the same model as the web. When you type an http address into a browser address bar, you’re telling the browser to make an HTTP request to a resource on a server. The server returns a response, and your browser converts the response to a more visual display. But you can also see the raw code.

Try using cURL to GET a web page

To see an example of how cURL retrieves a web resource, open a terminal and type the following:

curl http://example.com

You should see all the code behind the site example.com. The browser’s job is to make that code visually readable. cURL shows you what you’re really retrieving.

Requests and responses include headers too

When you go to a web page, you see only the body of the response. But actually, there’s more going on behind the scenes. When you make the request, you’re sending a header that contains information about the request. The response also contains a header.

  1. To see the response header in a cURL request, include -i in the cURL request:

    curl http://example.com -i
    

    The header will be included above the body in the response.

  2. To limit the response to just the header, use -I:

    curl http://example.com -I
    

    The response header is as follows:

    HTTP/1.1 200 OK
    Accept-Ranges: bytes
    Cache-Control: max-age=604800
    Content-Type: text/html
    Date: Fri, 19 Jun 2015 05:58:50 GMT
    Etag: "359670651"
    Expires: Fri, 26 Jun 2015 05:58:50 GMT
    Last-Modified: Fri, 09 Aug 2013 23:54:35 GMT
    Server: ECS (rhv/818F)
    X-Cache: HIT
    x-ec-custom-error: 1
    Content-Length: 1270
    

    The header contains the metadata about the response. All of this information is transferred to the browser whenever you make requests to URLs (that is, when you go to web pages), but the browser doesn’t show you this information. You can see it using the Chrome Developer Tools console if you look on the Network tab.

  3. Now let’s specify the method. The GET method is the default, but we’ll make it explicit here:

    curl -X GET http://example.com -I
    

    When you go to a website, you submit the request using the GET HTTP method. There are other HTTP methods you can use when interacting with REST APIs. Here are the common methods used when working with REST endpoints:

    HTTP Method   Description
    POST          Create a resource
    GET           Read a resource
    PUT           Update a resource
    DELETE        Delete a resource

Unpacking the weather API cURL request

Let’s look more closely at the request you submitted for the weather:

  curl --get --include 'https://simple-weather.p.mashape.com/aqi?lat=37.354108&lng=-121.955236' \
  -H 'X-Mashape-Key: APIKEY' \
  -H 'Accept: text/plain'

cURL has shorthand names for the various options that you include with your request. The \ just creates a break for a new line for readability. (Don’t use \ in Windows.)

Here’s what the commands mean:

cURL command   Description
--get          The HTTP method to use. (This is actually unnecessary. You can remove it and the request returns the same response, since GET is the default method.)
--include      Whether to show the headers in the response. Also represented by -i.
-H             Submits a custom header. Include an additional -H for each header key-value pair you’re submitting.

Query strings and parameters

The latitude (lat) and longitude (lng) parameters were passed to the endpoint in a “query string.” The ? appended to the URL marks the start of the query string, where the parameters are passed to the endpoint:

?lat=37.354108&lng=-121.955236

Within the query string, each parameter is joined to the next with the & symbol. The order of the parameters doesn’t matter. Order matters only when the parameters are part of the URL path itself (before the query string).
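To see this joining behavior in code, here’s a small sketch using JavaScript’s built-in URLSearchParams class (the endpoint and coordinates are the ones from the example above):

```javascript
// Build the query string programmatically; parameters are joined
// with & automatically during serialization
const params = new URLSearchParams({
  lat: "37.354108",
  lng: "-121.955236"
});

// The ? marks where the query string begins in the full request URL
const url = "https://simple-weather.p.mashape.com/aqi?" + params.toString();

console.log(url);
// https://simple-weather.p.mashape.com/aqi?lat=37.354108&lng=-121.955236
```

URLSearchParams also handles percent-encoding for you, which matters once parameter values contain spaces or other reserved characters.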

cURL has a lot of possible commands, but the following are the most common when working with REST APIs.

-i or --include
    Include the response headers in the response.
    Example: curl -i http://www.example.com

-d or --data
    Include data to post to the URL. The data needs to be URL encoded. Data can also be passed in the request body.
    Example: curl -d "data-to-post" http://www.example.com

-H or --header
    Submit the request header to the resource. This is very common with REST API requests because the authorization is usually included here.
    Example: curl -H "key:12345" http://www.example.com

-X POST
    The HTTP method to use with the request (in this example, POST). If you use -d in the request, cURL automatically uses the POST method. With GET requests, including the HTTP method is optional, because GET is the default.
    Example: curl -X POST -d "resource-to-update" http://www.example.com

@filename
    Load content from a file.
    Example: curl -X POST -d @mypet.json http://www.example.com

See the cURL documentation for a comprehensive list of cURL commands you can use.

Example cURL command

Here’s an example that combines some of these commands:

curl -i -H "Accept: application/json" -X POST -d "{status:MIA}" http://personsreport.com/status/person123

We could also format this with line breaks to make it more readable:

curl -i \
     -H "Accept: application/json" \
     -X POST \
     -d "{status:MIA}" \
     http://personsreport.com/status/person123 \

(Of course line breaks are problematic on Windows, so I don’t recommend formatting cURL requests like this.)

The Accept header tells the server that the client expects JSON in the response. To indicate that the request body itself is JSON, you would include a Content-Type: application/json header.

Test your memory

Fill in the blanks to see how much you remember:

See the cURL parameters on the answer page to check your responses.

More Resources

To learn more about cURL with REST documentation, see REST-esting with cURL.

1.9 Using methods with cURL (Petstore example)

Using Petstore API

Our sample weather API from Mashape doesn’t allow you to use anything but a GET method, so for this example, we’ll use the petstore API from Swagger, but without actually using the Swagger UI (which is something we’ll explore later). For now, we just need an API that we can create, update, and delete content from. (You’re just getting familiar with cURL here.)

Swagger Petstore

In this example, you’ll create a new pet, update the pet, get the pet’s ID, delete the pet, and then try to get the deleted pet.

Create a new pet

To create a pet, you have to pass a JSON message in the request body. Rather than trying to encode the JSON and pass it in the URL, you’ll store the JSON in a file and reference the file.

  1. Insert the following into a file called mypet.json. This information will be passed in the -d parameter of the cURL request:

    {
      "id": 123,
      "category": {
        "id": 123,
        "name": "test"
      },
      "name": "fluffy",
      "photoUrls": [
        "string"
      ],
      "tags": [
        {
          "id": 0,
          "name": "string"
        }
      ],
      "status": "available"
    }
    
  2. Change the first id value to another integer (whole number), and change the pet’s name from fluffy to something else.

  3. Save the file in this directory: Users/YOURUSERNAME. (Replace YOURUSERNAME with your actual user name on your computer.)
  4. In your Terminal, browse to the directory where you saved the mypet.json file. (Usually the default directory is Users/YOURUSERNAME — hence the previous step.)

    If you’ve never browsed directories using the command line, note these essential commands:

    On a Mac, find your present working directory by typing pwd. Then move up a directory by typing cd ../ (cd stands for “change directory”). Move down by typing cd pets, where pets is the name of the directory you want to move into. Type ls to list the contents of the directory.

    On a PC, just look at the prompt path to see your current directory. Then move up by typing cd ../. Move down by typing cd pets, where pets is the name of the directory you want to move into. Type dir to list the contents of the current directory.

  5. Once your Terminal or command prompt is in the same directory as your JSON file, create the new pet:

    curl -X POST --header "Content-Type: application/json" --header "Accept: application/json" -d @mypet.json "http://petstore.swagger.io/v2/pet"
    

    The response should look something like this:

    {"id":51231236,"category":{"id":4,"name":"testexecution"},"name":"fluffernutter","photoUrls":["string"],"tags":[{"id":0,"name":"string"}],"status":"available"}
    

Update your pet

Guess what, your pet hates its name! Change your pet’s name to something more formal using the update pet method.

  1. In the mypet.json file, change the pet’s name.
  2. Use the PUT method instead of POST with the same cURL content to update the pet’s name:

    curl -X PUT --header "Content-Type: application/json" --header "Accept: application/json" -d @mypet.json "http://petstore.swagger.io/v2/pet"
    

Get your pet’s name by ID

Now you want to find your pet’s name by passing the ID into the /pet/{petID} endpoint.

  1. In your mypet.json file, copy the first id value.
  2. Use this cURL command to get information about that pet ID, replacing 51231236 with your pet ID.

    curl -X GET --header "Accept: application/json" "http://petstore.swagger.io/v2/pet/51231236"
    

    The response contains your pet name and other information:

    {"id":51231236,"category":{"id":4,"name":"test"},"name":"mr. fluffernutter","photoUrls":["string"],"tags":[{"id":0,"name":"string"}],"status":"available"}
    

    You can format the JSON by pasting it into a JSON formatting tool:

    {
      "id": 51231236,
      "category": {
        "id": 4,
        "name": "test"
      },
      "name": "mr. fluffernutter",
      "photoUrls": [
        "string"
      ],
      "tags": [
        {
          "id": 0,
          "name": "string"
        }
      ],
      "status": "available"
    }

Delete your pet

Unfortunately, your pet has died. It’s time to delete your pet from the pet registry. <cry + tears / >

  1. Use the DELETE method to remove your pet. Replace 5123123 with your pet ID:

    curl -X DELETE --header "Accept: application/json" "http://petstore.swagger.io/v2/pet/5123123"
    
  2. Now check to make sure your pet is really removed. Use a GET request to look for your pet with that ID:

    curl -X GET --header "Accept: application/json" "http://petstore.swagger.io/v2/pet/5123123"
    

    You should see this error message:

    {"code":1,"type":"error","message":"Pet not found"}
    

This example allowed you to see how you can work with cURL to create, read, update, and delete resources. These four operations are referred to as CRUD (Create, Read, Update, Delete) and are fundamental to almost every API that manages resources.
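The CRUD-to-HTTP mapping can also be sketched in code. Here’s an illustrative JavaScript version of the petstore calls you just made with cURL (the function names are my own, not part of the petstore API, and none of the requests are actually sent here):

```javascript
// The four CRUD operations and the HTTP methods they map to
const crudMethods = {
  create: "POST",
  read: "GET",
  update: "PUT",
  remove: "DELETE"
};

// Petstore endpoint used in this section
const BASE = "http://petstore.swagger.io/v2/pet";

// Create: POST the pet object as a JSON body (same role as -d @mypet.json)
function createPet(pet) {
  return fetch(BASE, {
    method: crudMethods.create,
    headers: { "Content-Type": "application/json", "Accept": "application/json" },
    body: JSON.stringify(pet)
  });
}

// Update: same shape as create, but with PUT
function updatePet(pet) {
  return fetch(BASE, {
    method: crudMethods.update,
    headers: { "Content-Type": "application/json", "Accept": "application/json" },
    body: JSON.stringify(pet)
  });
}

// Read: the pet ID goes in the URL path, not the query string
function getPet(id) {
  return fetch(`${BASE}/${id}`, { method: crudMethods.read });
}

// Delete: same path-based addressing as read
function deletePet(id) {
  return fetch(`${BASE}/${id}`, { method: crudMethods.remove });
}
```

Notice how closely each function mirrors its cURL counterpart: the -X flag becomes the method option, each -H header becomes a headers entry, and the -d body becomes the body option.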

Although Postman is probably easier to use, cURL lends itself to power-level usage. Quality assurance teams often construct advanced test scenarios that iterate through a lot of cURL requests.

Import cURL into Postman

You can import cURL commands into Postman by doing the following:

  1. Open a new tab in Postman and click the Import button in the upper-left corner.
  2. Select Paste Raw Text and insert your cURL command:

    curl -X GET --header "Accept: application/json" "http://petstore.swagger.io/v2/pet/5123123"
    

    Importing into Postman

    Make sure you don’t have any extra spaces at the beginning.

  3. Click Import.
  4. Close the dialog box.
  5. Click Send.

Export Postman to cURL

You can export Postman to cURL by doing the following:

  1. In Postman, click the Generate Code button.

    Generating code snippets

  2. Select cURL from the drop-down menu.
  3. Copy the code snippet.

    curl -X GET -H "Accept: application/json" -H "Cache-Control: no-cache" -H "Postman-Token: e40c8069-21db-916e-9a94-0b9a42b39e1b" 'http://petstore.swagger.io/v2/pet/5123123'
    

    You can see that Postman adds some extra header information (-H "Cache-Control: no-cache" -H "Postman-Token: e40c8069-21db-916e-9a94-0b9a42b39e1b") into the request. This extra header information is unnecessary and can be removed.

2.0 Analyze the JSON response

Prettify the weatherdata JSON response

Let’s look at the JSON response for the Mashape weatherdata endpoint in more depth. The minified response from cURL looks like this:

{"query":{"count":1,"created":"2015-06-03T16:24:26Z","lang":"en-US","results":{"channel":{"title":"Yahoo! Weather - Santa Clara, CA","link":"http://us.rd.yahoo.com/dailynews/rss/weather/Santa_Clara__CA/*http://weather.yahoo.com/forecast/USCA1018_c.html","description":"Yahoo! Weather for Santa Clara, CA","language":"en-us","lastBuildDate":"Wed, 03 Jun 2015 8:52 am PDT","ttl":"60","location":{"city":"Santa Clara","country":"United States","region":"CA"},"units":{"distance":"km","pressure":"mb","speed":"km/h","temperature":"C"},"wind":{"chill":"16","direction":"0","speed":"0"},"atmosphere":{"humidity":"67","pressure":"1014.8","rising":"0","visibility":"16.09"},"astronomy":{"sunrise":"5:46 am","sunset":"8:23 pm"},"image":{"title":"Yahoo! Weather","width":"142","height":"18","link":"http://weather.yahoo.com","url":"http://l.yimg.com/a/i/brand/purplelogo//uh/us/news-wea.gif"},"item":{"title":"Conditions for Santa Clara, CA at 8:52 am PDT","lat":"37.35","long":"-121.95","link":"http://us.rd.yahoo.com/dailynews/rss/weather/Santa_Clara__CA/*http://weather.yahoo.com/forecast/USCA1018_c.html","pubDate":"Wed, 03 Jun 2015 8:52 am PDT","condition":{"code":"30","date":"Wed, 03 Jun 2015 8:52 am PDT","temp":"16","text":"Partly Cloudy"},"description":"\n<img src=\"http://l.yimg.com/a/i/us/we/52/30.gif\"/><br />\n<b>Current Conditions:</b><br />\nPartly Cloudy, 16 C<BR />\n<BR /><b>Forecast:</b><BR />\nWed - AM Clouds/PM Sun. High: 22 Low: 13<br />\nThu - AM Clouds/PM Sun. High: 22 Low: 13<br />\nFri - AM Clouds/PM Sun. High: 24 Low: 14<br />\nSat - AM Clouds/PM Sun. High: 24 Low: 15<br />\nSun - Partly Cloudy. High: 26 Low: 16<br />\n<br />\n<a href=\"http://us.rd.yahoo.com/dailynews/rss/weather/Santa_Clara__CA/*http://weather.yahoo.com/forecast/USCA1018_c.html\">Full Forecast at Yahoo! 
Weather</a><BR/><BR/>\n(provided by <a href=\"http://www.weather.com\" >The Weather Channel</a>)<br/>\n","forecast":[{"code":"30","date":"3 Jun 2015","day":"Wed","high":"22","low":"13","text":"AM Clouds/PM Sun"},{"code":"30","date":"4 Jun 2015","day":"Thu","high":"22","low":"13","text":"AM Clouds/PM Sun"},{"code":"30","date":"5 Jun 2015","day":"Fri","high":"24","low":"14","text":"AM Clouds/PM Sun"},{"code":"30","date":"6 Jun 2015","day":"Sat","high":"24","low":"15","text":"AM Clouds/PM Sun"},{"code":"30","date":"7 Jun 2015","day":"Sun","high":"26","low":"16","text":"Partly Cloudy"}],"guid":{"isPermaLink":"false","content":"USCA1018_2015_06_07_7_00_PDT"}}}}}}

It’s not very readable (by humans), so we can use a JSON formatter tool to “prettify” it:

{  
   "query":{  
      "count":1,
      "created":"2015-06-03T16:24:26Z",
      "lang":"en-US",
      "results":{  
         "channel":{  
            "title":"Yahoo! Weather - Santa Clara, CA",
            "link":"http://us.rd.yahoo.com/dailynews/rss/weather/Santa_Clara__CA/*http://weather.yahoo.com/forecast/USCA1018_c.html",
            "description":"Yahoo! Weather for Santa Clara, CA",
            "language":"en-us",
            "lastBuildDate":"Wed, 03 Jun 2015 8:52 am PDT",
            "ttl":"60",
            "location":{  
               "city":"Santa Clara",
               "country":"United States",
               "region":"CA"
            },
            "units":{  
               "distance":"km",
               "pressure":"mb",
               "speed":"km/h",
               "temperature":"C"
            },
            "wind":{  
               "chill":"16",
               "direction":"0",
               "speed":"0"
            },
            "atmosphere":{  
               "humidity":"67",
               "pressure":"1014.8",
               "rising":"0",
               "visibility":"16.09"
            },
            "astronomy":{  
               "sunrise":"5:46 am",
               "sunset":"8:23 pm"
            },
            "image":{  
               "title":"Yahoo! Weather",
               "width":"142",
               "height":"18",
               "link":"http://weather.yahoo.com",
               "url":"http://l.yimg.com/a/i/brand/purplelogo//uh/us/news-wea.gif"
            },
            "item":{  
               "title":"Conditions for Santa Clara, CA at 8:52 am PDT",
               "lat":"37.35",
               "long":"-121.95",
               "link":"http://us.rd.yahoo.com/dailynews/rss/weather/Santa_Clara__CA/*http://weather.yahoo.com/forecast/USCA1018_c.html",
               "pubDate":"Wed, 03 Jun 2015 8:52 am PDT",
               "condition":{  
                  "code":"30",
                  "date":"Wed, 03 Jun 2015 8:52 am PDT",
                  "temp":"16",
                  "text":"Partly Cloudy"
               },
               "description":"\n<img src=\"http://l.yimg.com/a/i/us/we/52/30.gif\"/><br />\n<b>Current Conditions:</b><br />\nPartly Cloudy, 16 C<BR />\n<BR /><b>Forecast:</b><BR />\nWed - AM Clouds/PM Sun. High: 22 Low: 13<br />\nThu - AM Clouds/PM Sun. High: 22 Low: 13<br />\nFri - AM Clouds/PM Sun. High: 24 Low: 14<br />\nSat - AM Clouds/PM Sun. High: 24 Low: 15<br />\nSun - Partly Cloudy. High: 26 Low: 16<br />\n<br />\n<a href=\"http://us.rd.yahoo.com/dailynews/rss/weather/Santa_Clara__CA/*http://weather.yahoo.com/forecast/USCA1018_c.html\">Full Forecast at Yahoo! Weather</a><BR/><BR/>\n(provided by <a href=\"http://www.weather.com\" >The Weather Channel</a>)<br/>\n",
               "forecast":[  
                  {  
                     "code":"30",
                     "date":"3 Jun 2015",
                     "day":"Wed",
                     "high":"22",
                     "low":"13",
                     "text":"AM Clouds/PM Sun"
                  },
                  {  
                     "code":"30",
                     "date":"4 Jun 2015",
                     "day":"Thu",
                     "high":"22",
                     "low":"13",
                     "text":"AM Clouds/PM Sun"
                  },
                  {  
                     "code":"30",
                     "date":"5 Jun 2015",
                     "day":"Fri",
                     "high":"24",
                     "low":"14",
                     "text":"AM Clouds/PM Sun"
                  },
                  {  
                     "code":"30",
                     "date":"6 Jun 2015",
                     "day":"Sat",
                     "high":"24",
                     "low":"15",
                     "text":"AM Clouds/PM Sun"
                  },
                  {  
                     "code":"30",
                     "date":"7 Jun 2015",
                     "day":"Sun",
                     "high":"26",
                     "low":"16",
                     "text":"Partly Cloudy"
                  }
               ],
               "guid":{  
                  "isPermaLink":"false",
                  "content":"USCA1018_2015_06_07_7_00_PDT"
               }
            }
         }
      }
   }
}

JSON is how most REST APIs structure the response

JSON stands for JavaScript Object Notation. It’s the most common way REST APIs return information. Through JavaScript, you can easily parse the JSON and display it on a web page.

Although some APIs return information in both JSON and XML, if you’re trying to parse through the response and render it on a web page, JSON fits much better into the existing JavaScript + HTML toolset that powers most web pages.
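As a small illustration of that fit, here’s a sketch that parses a JSON string (loosely modeled on the weatherdata response, not a live API call) and drops values into an HTML string:

```javascript
// A simplified response string, loosely modeled on the weatherdata payload
const json = '{"wind":{"chill":"16","speed":"0"},"location":{"city":"Santa Clara"}}';

// JSON.parse turns the string into a plain JavaScript object
const data = JSON.parse(json);

// Pull out values and drop them into an HTML string
const html = `<p>Wind chill in ${data.location.city}: ${data.wind.chill}</p>`;

console.log(html);
// <p>Wind chill in Santa Clara: 16</p>
```

No translation layer is needed between the response format and the language rendering the page, which is the main reason JSON won out for web-facing APIs.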

JSON has two types of basic structures: objects and arrays.

JSON objects are key-value pairs

An object is a collection of key-value pairs, surrounded by curly braces:

{
"key1":"value1",
"key2":"value2"
}

Keys are always enclosed in double quotation marks, and so are string values. If the value is an integer (a whole number) or a Boolean (a true or false value), you omit the quotation marks around the value.

Each key-value pair is separated from the next by a comma (except for the last pair).
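Here’s a quick sketch showing those quoting rules in practice. When the JSON is parsed, quoted values come through as strings, while unquoted values come through as numbers or Booleans:

```javascript
// Keys are always quoted; only string values take quotation marks
const json = '{"name":"fluffy","id":123,"available":true}';

const pet = JSON.parse(json);

console.log(typeof pet.name);      // string
console.log(typeof pet.id);        // number
console.log(typeof pet.available); // boolean
```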

JSON arrays are lists of items

An array is a list of items, surrounded by brackets:

["first", "second", "third"]

The list of items can contain strings, numbers, booleans, arrays, or other objects.

With integers or booleans, you don’t use quotation marks.

[1, 2, 3]
[true, false, true]

Including objects in arrays, and arrays in objects

JSON can mix up objects and arrays inside each other. You can have an array of objects:

[ 
  object, 
  object,
  object
]

Here’s an example with values:

[  
   {  
      "name":"Tom",
      "age":39
   },
   {  
      "name":"Shannon",
      "age":37
   }
]

And objects can contain arrays in the value part of the key-value pair:

{
"children": ["Avery","Callie","lucy","Molly"],
"hobbies": ["swimming","biking","drawing","horseplaying"]
}

Just remember, objects are set off with curly braces { } and contain key-value pairs. Sometimes those values are arrays. Arrays are lists and are set off with square brackets [ ].
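If you’re ever unsure which structure you’re looking at, you can check programmatically in JavaScript. A small sketch (the response object here is invented for illustration):

```javascript
var response = {
  "children": ["Avery", "Callie", "Lucy", "Molly"]
};

// Curly braces produce an object; square brackets produce an array.
// typeof reports "object" for both, so use Array.isArray for arrays.
var isObject = typeof response === "object" && !Array.isArray(response);
var isArray = Array.isArray(response.children);

console.log(isObject); // true
console.log(isArray);  // true
```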

Identify the objects and arrays in the weatherdata API response

Look at the response from the weatherdata endpoint of the weather API.

It’s common for arrays to contain lists of objects, and for objects to contain arrays.

More information

For more information on understanding the structure of JSON, see json.com.

2.1 Using the JSON from the response payload

Making use of the JSON response

Seeing the response from cURL or Postman is cool, but how do you make use of the JSON data?

With most API documentation, you don’t need to show how to make use of JSON data. You assume that developers will use their JavaScript skills to parse through the data and display it appropriately in their apps.

However, to better understand how developers will access the data, we’ll go through a brief tutorial to display the REST response on a web page.

Display part of the REST JSON response on a web page

Mashape provides some sample code to parse and display the REST response on a web page using JavaScript. You could use it, but you could also use some auto-generated code from Postman to do pretty much the same thing.

  1. Start with a basic HTML template with jQuery referenced, like this:

     <html>
     <head>
     <title>Sample Page</title>
     <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
     </head>
     <body>
    	
     </body>
     </html>
    

    Save your file with a name such as weatherdata.html.

  2. Open Postman and click the request to the weatherdata endpoint that you configured earlier.
  3. Click the Generate Code Snippet button.

    Generate code snippet

  4. Select JavaScript > jQuery AJAX.
  5. Copy the code sample.
  6. Insert the Postman code sample between <script> tags in the same template you started building in step 1.

    You can put the script in the head section if you want — just make sure you add it after the jQuery reference.

  7. The Postman code sample needs one more parameter: dataType. Add "dataType": "json" as a parameter in the settings object.

    Your final code should look like this:

     <html>
     <head>
     <title>Sample Page</title>
     <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
     <script>
     var settings = {
       "async": true,
       "crossDomain": true,
       "dataType": "json",
       "url": "https://simple-weather.p.mashape.com/weatherdata?lat=37.354108&lng=-121.955236",
       "method": "GET",
       "headers": {
         "accept": "application/json",
         "x-mashape-key": "APIKEY"
       }
     }
     $.ajax(settings).done(function (response) {
       console.log(response);
     });
     </script>
     </head>
     <body>
     </body>
     </html>
    
  8. Start Chrome and open the JavaScript Console by going to View > Developer > JavaScript Console.
  9. Open the weatherdata.html file in Chrome (File > Open File).

    The page body will be blank, but the weatherdata response should be logged to the JavaScript console. You can inspect the payload by expanding the sections.

    JSON payload from weatherdata API logged to console

    Note that Chrome tells you whether each expandable section is an object or an array. Knowing this is critical to accessing the value through JavaScript dot notation.

    The following sections will explain this AJAX code a bit more.

The AJAX method from jQuery

Probably the most useful method to know for showing code samples is the ajax method from jQuery.

In brief, this ajax method takes one argument: settings.

$.ajax(settings)

The settings argument is an object that contains a variety of key-value pairs. Each of the allowed key-value pairs is defined in jQuery’s ajax documentation.

Two important values are url, which is the URI or endpoint you submit the request to, and headers, which lets you include custom headers in the request.

Look at the code sample you created. The settings variable is passed in as the argument to the ajax method. jQuery makes the request to the HTTP URL asynchronously, which means it won’t hang up your computer while you wait for the response. You can continue using your application while the request executes.

You get the response by calling the method done. In the preceding code sample, done contains an anonymous function (a function without a name) that executes when done is called.

The response object from the ajax call is assigned to the done method’s argument, which in this case is response. (You can name the argument whatever you want.)

You can then access the values from the response object using object notation. In this example, the response is just logged to the console.

This is likely a bit fuzzy right now, but it will become clearer with an example in the next section.

Logging responses to the console

The piece of code that logged the response to the console was simply this:

console.log(response);

Logging responses to the console is one of the most useful ways to test whether an API response is working (it’s also helpful for debugging or troubleshooting your code). The console collapses each object inside its own expandable section. This allows you to inspect the payload.

You can add other information to the console log message. To preface the log message with a string, add something like this:

console.log("Here's the response: " + response);

Strings are always enclosed inside quotation marks, and you use the plus sign + to concatenate strings with JavaScript variables, like response.

Customizing log messages is helpful if you’re logging various things to the console and need to flag them with an identifier.

Inspect the payload

Inspect the payload by expanding each of the sections in the Mashape weather API. Find the section that appears here: object > query > results > channel > item > description.

2.2 Access and print a specific JSON value

Accessing JSON values through dot notation

You’ll notice that in the main content display of the weatherdata code, the REST response information didn’t appear. It only appeared in the JavaScript Console. You need to use dot notation to access the JSON values you want.

Let’s say you wanted to pull out the description part of the JSON response. Here’s the dot notation you would use:

data.query.results.channel.item.description

The dot (.) after data (the name of the JSON payload) is how you access the values you want from the JSON object. JSON wouldn’t be very useful if you had to always print out the entire response. Instead, you select the exact element you want and pull that out through dot notation.
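To make this concrete, here’s an abbreviated, stand-in version of the weatherdata payload (only the levels along the path to description are shown, and the description text is invented), with dot notation pulling out the nested value:

```javascript
// Abbreviated stand-in for the weatherdata response -- only the
// objects on the path to description are included here.
var data = {
  "query": {
    "results": {
      "channel": {
        "item": {
          "description": "Partly cloudy with light winds."
        }
      }
    }
  }
};

// Each dot steps one level deeper into the nested objects.
var description = data.query.results.channel.item.description;
console.log(description);
```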

To pull out the description element from the JSON response and display it on the page, add this to your code sample, right below the existing console.log line (this snippet names the callback argument data rather than response; either name works):

console.log(data.query.results.channel.item.description);

Your code should look like this:

  .done(function (data) {
    console.log(data);
    console.log(data.query.results.channel.item.description);
  });

Refresh your Chrome browser and see the information that appears in the console:

Weather description that gets pulled out through dot notation

Printing a JSON value to the page

Let’s say you wanted to print part of the JSON (the description element) to the page. This involves a little bit of JavaScript or jQuery (to make it easier).

  1. Add a named element to the body of your page, like this:

     <div id="weatherDescription"></div>
    
  2. Inside the tags of your done method, pull out the value you want into a variable, like this:

     var content = response.query.results.channel.item.description;
    
  3. Below this (same section) use the jQuery append method to append the variable to the element on your page:

     $("#weatherDescription").append(content);
    

    This code says, find the element with the ID weatherDescription and append the content variable to it.

    Your entire code should look as follows:

     <html>
     <body>
    
     <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
    
     <script>
     var settings = {
       "async": true,
       "crossDomain": true,
       "dataType": "json",
       "url": "https://simple-weather.p.mashape.com/weatherdata?lat=37.354108&lng=-121.955236",
       "method": "GET",
       "headers": {
         "accept": "application/json",
         "x-mashape-key": "APIKEY"
       }
     }
    
     $.ajax(settings)
    
     .done(function (response) {
       console.log(response);
    
       var content = response.query.results.channel.item.description;
       $("#weatherDescription").append(content);
     });
     </script>
    
     <div id="weatherDescription"></div>
     </body>
     </html>
    

    Here’s the result:

    Printing JSON to the page

    Now change the display to access the wind speed instead.

2.3 Diving into dot notation

Use a dot to access the value from a key

Let’s dive into dot notation a little more.

You use a dot after the object name to access its properties. For example, suppose you have an object called data:

var data = {
"name": "Tom"
}

To access Tom, you would use data.name.

It’s important to note the different levels of nesting so you can trace back the appropriate objects and access the information you want. You access each level down through the object name followed by a dot.

Use square brackets to access the values in an array

To access a value in an array, you use square brackets followed by the position number. For example, suppose you have the following array:

var data = {
  "items": ["ball", "bat", "glove"]
}

To access glove, you would use data.items[2].

glove is the third item in the array, but because array positions are numbered starting from 0, its index is 2.
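Here’s the same lookup as a runnable sketch, showing that array positions are numbered from 0:

```javascript
var data = {
  "items": ["ball", "bat", "glove"]
};

// Array indexes start at 0, so the third item is at index 2.
console.log(data.items[0]); // "ball"
console.log(data.items[2]); // "glove"
```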

Exercise with dot notation

In this activity, you’ll practice accessing different values through dot notation.

  1. Create a new file in your text editor and insert the following into it:

     <!DOCTYPE html>
     <html>
     <head>
     <script src="http://code.jquery.com/jquery-2.1.1.min.js"></script>
       <meta charset="utf-8">
       <title>JSON dot notation practice</title>
    
     <script>
     $( document ).ready(function() {
    
        var john = {
         "hair": "brown",
         "eyes": "green",
         "shoes": {
             "brand": "nike",
             "type": "basketball"
         },
         "favcolors": [
             "azure",
             "goldenrod"
         ],
         "children": [
             {
                 "child1": "Sarah",
                 "age": 2
             },
             {
                 "child2": "Jimmy",
                 "age": 5
             }
         ]
     }
    
     var sarahjson = john.children[0].child1;
     var greenjson = john.children[0].child1;
     var nikejson = john.children[0].child1;
     var goldenrodjson = john.children[0].child1;
     var jimmyjson = john.children[0].child1;
    
     $("#sarah").append(sarahjson);
     $("#green").append(greenjson);
     $("#nike").append(nikejson);
     $("#goldenrod").append(goldenrodjson);
     $("#jimmy").append(jimmyjson);
     });
     </script>
     </head>
     <body>
    
         <div id="sarah">Sarah: </div>
         <div id="green">Green: </div>
         <div id="nike">Nike: </div>
         <div id="goldenrod">Goldenrod: </div>
         <div id="jimmy">Jimmy: </div>
    
     </body>
     </html>
    

    Here we have a JSON object defined as a variable named john. (Usually APIs retrieve the response through a URL request, but for practice here, we’re just defining the object locally.)

    If you view the page in your browser, you’ll see the page says “Sarah” for each item because we’re accessing this value: john.children[0].child1 for each item.

  2. Change john.children[0].child1 to display the right values for each item. For example, the word green should appear at the ID tag called green.

Check your work by looking at the Dot Notation section on the answers page.

Showing wind conditions on the page

At the beginning of the course, I showed an example of embedding the wind speed and other details on a website. Now let’s revisit this code example and see how it’s put together.

Copy the following code into a basic HTML page, customize the APIKEY value, and view it in the browser:

<html>
<head>
<script src="http://code.jquery.com/jquery-2.1.1.min.js"></script>
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.4/css/bootstrap.min.css" type="text/css">

  <title>Sample Query to get the wind</title>
<style>
   #wind_direction, #wind_chill, #wind_speed, #temperature, #speed {color: red; font-weight: bold;}
   body {margin:20px;}
</style>
  </head>
<body>


<script>

function checkWind() {

  var settings = {
    "async": true,
    "crossDomain": true,
    "dataType": "json",
    "url": "https://simple-weather.p.mashape.com/weatherdata?lat=37.354108&lng=-121.955236",
    "method": "GET",
    "headers": {
      "accept": "application/json",
      "x-mashape-key": "APIKEY"
    }
  }

  $.ajax(settings)

  .done(function (response) {
    console.log(response);

    $("#wind_speed").append(response.query.results.channel.wind.speed);
    $("#wind_direction").append(response.query.results.channel.wind.direction);
    $("#wind_chill").append(response.query.results.channel.wind.chill);
    $("#temperature").append(response.query.results.channel.units.temperature);
    $("#speed").append(response.query.results.channel.units.speed);
  });
}
</script>


<button type="button" onclick="checkWind()" class="btn btn-danger weatherbutton">Check wind conditions</button>

<h2>Wind conditions for Santa Clara</h2>

<b>Wind chill: </b><span id="wind_chill"></span> <span id="temperature"></span><br>
<b>Wind speed: </b><span id="wind_speed"></span> <span id="speed"></span><br>
<b>Wind direction: </b><span id="wind_direction"></span>
</body>
</html>

A few things are different here: the request is wrapped in a checkWind() function that runs when you click the button, and the response values are appended to several different elements on the page. But it’s essentially the same code.

When you load the page and click the button, the following should appear:

Final REST API

Documenting a new endpoint

2.5 Documenting resource descriptions

The terminology to describe a “resource” varies

When it comes to the right terminology to describe the resource, practices vary. Exactly what are the “things” that you access using a URL? Here are some of the terms used in different API docs:

Some docs get around the situation by not calling them anything explicitly.

You could probably choose the terms that you like best. My favorite is to use resources (along with endpoint for the URL). An API has various “resources” that you access through “endpoints.” The endpoint gives you access to a resource. The endpoint is the URL path (in this example, /surfreport). The information the endpoint interacts with, though, is called a resource.

Some examples

Take a look at Mailchimp’s API for an example.

Mailchimp resource

With Mailchimp, the resource might be “Automations Emails Instance.” The endpoint to access this resource is /automations/{workflow_id}/emails/{email_id}.

In contrast, look at Twitter’s API. This page is called GET statuses/retweets/:id. To access it, you use the Resource URL https://api.twitter.com/1.1/statuses/retweets/:id.json.

How Twitter refers to resources

No explicit names are used to refer to the resource.

Here’s the approach by Instagram. Their doc calls resources “endpoints” in the plural – e.g., “Relationship endpoints,” with each endpoint listed on the relationship page.

The EventBrite API shows a list of endpoints, but when you go to an endpoint, what you’re really seeing is an object. On the object’s page you can see the variety of endpoints you can use with the object.

Eventbrite

This simple example with the Mashape Weather API, however, just has three different endpoints. There’s not a huge reason to separate out endpoints by resource.

When it gets confusing to refer to resources by the endpoint

The Mashape Weather API is pretty simple, and just refers to the endpoints available. In this case, referring to the aqi endpoint or the air quality index resource doesn’t make a huge difference. But with more complex APIs, using the endpoint path to talk about the resource can get problematic.

When I worked at Badgeville, our endpoints looked somewhat like this:

api_site.com/{apikey}/users
// gets all users

api_site.com/{apikey}/users/{userId}
// gets a specific user

api_site.com/{apikey}/rewards
// gets all rewards

api_site.com/{apikey}/rewards/{rewardId}
// gets a specific reward

api_site.com/{apikey}/users/{userId}/rewards
// gets all rewards for a specific user

api_site.com/{apikey}/users/{userId}/rewards/{rewardId}
// gets a specific reward for a specific user

api_site.com/{apikey}/users/{userId}/rewards/{missionId}
// gets the rewards for a specific mission related to a specific user

api_site.com/{apikey}/missions/{missionid}/rewards
// gets the rewards available for a specific mission

How you construct the endpoint path determines the response. A rewards resource had various endpoints that returned different types of information related to rewards.

To say that you could use the rewards or missions endpoint wasn’t always specific enough, because there were multiple rewards and missions endpoints.

It can get awkward referring to the resource by its endpoint path. For example, “When you call /users/{userId}/rewards/{rewardId}, you get a specific reward for a user. The /users/{userId}/rewards/{rewardId} endpoint takes several parameters…” It’s a mouthful.

The same resource can have multiple endpoints

The Box API has a good example of how the same resource can have multiple endpoints and methods.

Example from Box

The Box example has five different endpoints or methods you can call. Each of these methods lets you access the Collaboration resource or object in a different way. Why call it an object? Because when you retrieve the Collaboration resource, the JSON returned is an object.

Wait, I’m confused

You’re probably thinking, wait, I’m a bit confused. Exactly what am I supposed to call the things I’m documenting in an API? My recommendation is to call them resources. In your table of contents, you might group all the resources under a larger umbrella called “API Reference.”

But my point is that there is no standard practice here. The terminology varies, and this is one of those cases where everyone chooses their favorite term.

When describing the resource, start with a verb

Regardless of the terms you use, the description is usually brief, from 1-3 sentences, and often expressed as a fragment that starts with a verb in the active voice.

Review the surf report wiki page containing the information about the endpoint, and try to describe the endpoint in the length of one or two tweets (140 characters each).

Here are some examples of resource descriptions:

Delicious API

Check to see when a user last posted an item. Returns the last updated time for the user, as well as the number of new items in the user’s inbox since it was last visited.

Use this before calling posts/all to see if the data has changed since the last fetch.

Foursquare API

Returns menu information for a venue.

In some cases, menu information is provided by our partners. When displaying the information from a partner, you must attribute them using the attribution information included in the provider field. Not all menu information available on Foursquare is able to be redistributed through our API.

How I go about it

Here’s how I went about creating the endpoint description. If you want to try crafting your own description of the endpoint first, and then compare yours to mine, go for it. However, you can also just follow along here.

I start by making a list of what the resource contains.

Surfreport

After drafting the outline, I craft the sentences:

surfreport/{beachId}

Returns information about surfing conditions at a specific beach ID, including the surf height, water temperature, wind, and tide. Also provides an overall recommendation about whether to go surfing.

{beachId} refers to the ID for the beach you want to look up. All Beach ID codes are available from our site.

Critique the Mashape Weather API descriptions

Look over the descriptions of the three endpoints in the weather API. They’re pretty short. For example, the aqi endpoint just says “Air Quality Index.”

I think these descriptions are too short. But developers like concision. If you shortened the surfreport description to match, you could write:

/surfreport/{beachId}

Provides surf condition information.

Compare these descriptions with the endpoint descriptions from the Aeris Weather API.

With Aeris Weather, the description for the forecasts endpoint is as follows:

The forecasts endpoint/data set provides the core forecast data for US and international locations. Forecast information is available in daily, day/night intervals, as well as, custom intervals such as 3 hour or 1 hour intervals.

In summary, the description provides a 1-3 sentence summary of the information the resource contains.

Recognize the difference between reference docs versus user guides

One thing to keep in mind is the difference between reference docs and user guides/tutorials:

With the description of surfreport, you might expand on this with much greater detail in the user guide. But in the reference guide, you just provide a short description.

You could link the description to the places in the user guide where you expand on it in more detail. But since developers often write API documentation, they sometimes never write the user guide (as is the case with the Weather API in Mashape).

2.6 Documenting the endpoints and methods

Terminology for endpoints varies

Now let’s document the endpoints. When you list the endpoints, the term you use to describe this section also varies. Here are some terms you might see:

My preferred term is “endpoint.”

The endpoint definition usually contains the end path only

When you describe the endpoint, it’s common to list the end path only (hence the nickname “endpoint”).

In our example, the endpoint/endpath is just /surfreport/{beachId}. You don’t have to list the full URL every time (which would be https://simple-weather.p.mashape.com/surfreport/{beachId}). Doing so distracts the user from focusing on the path that matters.

In your user guide, explain the full code path in an introductory section.

Represent path parameters with curly braces

If you have path parameters in your endpoint, represent them through curly braces. For example, here’s an example from Mailchimp’s API:

/campaigns/{campaign_id}/actions/send

Better yet, put the path parameter in another color to set it off:

/campaigns/{campaign_id}/actions/send

Curly braces are a convention that users will understand. In the above example, because real URLs don’t contain curly braces, the {campaign_id} is an obvious placeholder.

Another convention is to represent parameter values with a colon, like this:

/campaigns/:campaign_id/actions/send

You can see this convention in the EventBrite API and the Aeris Weather API.

In general, if the placeholder name is ambiguous as to whether it’s a placeholder or something you’re supposed to customize with an actual value, clarify it.
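To show how a placeholder relates to an actual request, here’s a hypothetical helper (not part of any API discussed here) that swaps each {curly-brace} placeholder for a real value:

```javascript
// Hypothetical helper: replaces each {placeholder} in an endpoint
// template with the value supplied for that name.
function fillPath(template, params) {
  return template.replace(/\{(\w+)\}/g, function (match, name) {
    return params[name];
  });
}

// The campaign ID value here is invented for illustration.
var endpoint = fillPath("/campaigns/{campaign_id}/actions/send", {
  campaign_id: "b1234"
});

console.log(endpoint); // "/campaigns/b1234/actions/send"
```

Real docs just describe the placeholder; the sketch only illustrates the substitution a developer performs.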

You can list the method beside the endpoint

It’s common to list the method (GET, POST, PUT, DELETE) next to the endpoint. Since there’s not much to say about the method itself, it makes sense to group the method with the endpoint. Here’s an example from Box’s API:

Box API

And here’s an example from Linkedin’s API:

Linkedin Example

Your turn to try: Write the endpoint definition for surfreport

List out the endpoint definition and method for the surfreport/{beachId} endpoint.

Here’s my approach:

Endpoint definition

GET surfreport/{beachId}

If you had different endpoints for the same resource, you might have more to say here. But with this example, the bulk of the description is with the resource.

2.7 Documenting parameters

Parameters are ways to configure the endpoint

Parameters refer to the various ways the endpoint can be configured to influence the response. Many times parameters are set out in a simple table like this:

Parameter Required? Data Type Example
format optional string json

Here’s an example from Yelp’s documentation:

Yelp parameters

You can format the values in a variety of ways. If you’re using a definition list or other non-table format, you should develop styles that make the values easily readable.

Four types of parameters

REST APIs have four types of parameters:

Data types indicate the format for the values

It’s important to list the data type for each parameter because APIs may not process the parameter correctly if it’s the wrong data type or wrong format. These data types are the most common:

There are more data types in programming, and if you have more specific data types, be sure to note them. In Java, for example, it’s important to note the data type allowed because Java allocates memory space based on the size of the data. As such, Java gets much more specific about the size of numbers. You have a byte, short, int, double, long, float, char, boolean, and so on.

However, you usually don’t have to specify this level of detail with a REST API. You can probably just write “number.”
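Part of the reason is that JSON itself has only one number type. In JavaScript, for example, every JSON number parses to the same type regardless of its size or precision:

```javascript
// JSON doesn't distinguish int, long, float, etc. -- every numeric
// value parses to a single "number" type in JavaScript.
var parsed = JSON.parse('{ "days": 3, "height": 4.5 }');

console.log(typeof parsed.days);   // "number"
console.log(typeof parsed.height); // "number"
```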

Parameters should list allowed values

One of the problems with the Mashape Weather API is that it doesn’t tell you which values are allowed for the latitude and longitude. If you type in coordinates for Bangalore, for example, 12.9539974 and 77.6309395, the response is Not Supported - IN - India - IN-KA. Which cities are supported, and where does one look to see a list? This information should be made explicit in the description of parameters.

Parameter order doesn’t matter

Often the parameters are added with a query string (?) at the end of the endpoint, and then each parameter is listed one right after the other with an ampersand (&) separating them. Usually the order in which parameters are passed to the endpoint does not matter.

For example:

/surfreport/{beachId}?days=3&units=metric&time=1400

and

/surfreport/{beachId}?time=1400&units=metric&days=3

would return the same result.
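You can check this with URLSearchParams (built into modern browsers and Node): both orderings parse to identical key-value pairs.

```javascript
// Two query strings with the same parameters in a different order.
var a = new URLSearchParams("days=3&units=metric&time=1400");
var b = new URLSearchParams("time=1400&units=metric&days=3");

// Each parameter resolves to the same value regardless of order.
console.log(a.get("days") === b.get("days"));   // true
console.log(a.get("units") === b.get("units")); // true
console.log(a.get("time") === b.get("time"));   // true
```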

However, if the parameter is part of the actual endpoint path (not added in the query string), such as with {beachId} above, then you usually describe this value in the description of the endpoint itself.

Here’s an example from Twilio:

Twilio Example

The {PhoneNumber} value is described in the description of the endpoint rather than in another section that lists the query parameters you can pass to the endpoint.

Other important details about parameters are the maximum or minimum values allowed for the parameter, and whether the parameter is optional or required.

Color coding parameter values

When you list the parameters in your endpoint, it can help to color code the parameters both in the table and in the endpoint definition. This makes it clear what’s a parameter and what’s not. Through color you create an immediate connection between the endpoint and the parameter definitions.

For example, suppose your endpoint definition is as follows:

http://domain.com:port//service/myendpoint/user/{user}/bicycles/{bicycles}/

Follow through with this same color in your table describing the parameters:

URL Parameter Description
user Here’s my description of the user parameter.
bicycles Here’s my description of the bicycles parameter.

By color coding the parameters, it’s easy to see the parameter in contrast with the other parts of the URL.

Note that if you’re custom-color-coding the parameters, you’ll need to skip the automatic syntax highlighting options in code blocks and just use either your own styles or a general pre element.

Passing parameters in the JSON body

Frequently with POST requests, you will submit a JSON object in the request body. This JSON object may be a lengthy list of key value pairs with multiple levels of nesting.

For example, the endpoint URL may be something simple, such as /surfreport/{beachId}. But in the body of the request, you might include a JSON object, like this:

{
"days": 2,
"units": "imperial",
"time": 1433524597
}

This is known as a request body parameter.
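Before a client sends such a request, the JavaScript object gets serialized into a JSON string for the request body. A minimal sketch using the same example values as above:

```javascript
// The body parameter as a JavaScript object...
var body = {
  "days": 2,
  "units": "imperial",
  "time": 1433524597
};

// ...serialized to the JSON string that goes in the POST body.
// (With jQuery's $.ajax, you'd pass this as the "data" setting.)
var payload = JSON.stringify(body);

console.log(payload);
```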

Documenting lengthy JSON objects in request bodies

Documenting JSON data (both in request body parameters and responses) is actually one of the trickier parts of API documentation. Documenting a JSON object is easy if the object is simple, with just a few key-value pairs. But what if you have a JSON object with multiple objects inside objects, numerous levels of nesting, and lengthy and conditional data? What if the JSON object spans more than 100 lines, or 1,000?

Tables work all right for documenting JSON, but in a table, it can be hard to distinguish between top-level and sub-level items. The object that contains an object that also contains an object, etc., can be confusing to represent.

By all means, if the JSON object is relatively small, a table is probably your best option. But there are other approaches that designers have taken as well.

Take a look at eBay’s findItemsByProduct endpoint.

eBay parameters

There’s a table below the sample request that describes each parameter:

eBay parameters

But the sample request also contains links to each of the parameters. When you click a parameter value in the sample request, you go to a page that provides more details about that parameter value, such as the ItemFilter. This is likely because the parameter values are more complex and require more explanation.

The same parameter values might be used in other requests as well, so this facilitates re-use. Even so, I dislike jumping around to other pages for the information I need.

Swagger UI’s approach

Is the display from the Swagger UI any better?

The Swagger UI reads the Swagger spec file and displays it in the visual format that you see with examples such as the Swagger Petstore.

The Swagger UI lets you toggle between an “Example Value” and a “Model” view for both responses and request body parameters.

The Example Value shows a sample of the syntax along with examples. When you click the Model (yellow box) in the /Pet (POST) endpoint, Swagger inserts the content in the body parameter box. Here’s the Pet POST endpoint’s Example Value:

{
  "id": 0,
  "category": {
    "id": 0,
    "name": "string"
  },
  "name": "doggie",
  "photoUrls": [
    "string"
  ],
  "tags": [
    {
      "id": 0,
      "name": "string"
    }
  ],
  "status": "available"
}

Now click Model (the grayed out text) and look at the view.

Swagger Model

This view describes the various parts of the request, noting the data types and any descriptions in your Swagger spec. Here’s the Model:

Pet {
    id (integer, optional),
    category (Category, optional),
    name (string),
    photoUrls (Array[string]),
    tags (Array[Tag], optional),
    status (string, optional): pet status in the store = ['available', 'pending', 'sold']
}
Category {
    id (integer, optional),
    name (string, optional)
}
Tag {
    id (integer, optional),
    name (string, optional)
}

The Petstore spec doesn’t actually include many parameter descriptions in the Model, but if any descriptions were included, they would appear here in the Model rather than in the Example Value.

In this view, when there’s a nested object, like category, it has a reference to another part of the model. You have to look at “Category” for details about category and look at “Tag” for details about tags.

Reading the Model

Presumably the Model format appears like this because there’s not enough room to visually depict nested objects in one inch of space. But it could potentially mislead users into thinking that you have multiple objects listed one after another instead of nested inside each other.

Ultimately, I’m not sure how useful the Model view is beyond providing a place to describe the objects and properties. I’m also not sure why the Swagger team didn’t include descriptions of each parameter in the request body, because those descriptions could appear in the Model view and thereby provide more rationale for having the Model view in the first place.

Conclusion

You can see that there’s a lot of variety in documenting JSON and XML responses. There’s no single right way to document the parameters, except to choose the method that depicts the parameters in the clearest, easiest-to-read way.

Construct a table to list the surfreport parameters

For our new surfreport endpoint, look through the parameters available and create a table similar to the one above.

Here’s what my table looks like:

| Parameter | Required | Description | Type |
|-----------|----------|-------------|------|
| days | Optional | The number of days to include in the response. Default is 3. | Integer |
| units | Optional | Whether to return the values in imperial or metric measurements. Imperial uses feet, knots, and Fahrenheit; metric uses centimeters, kilometers per hour, and Celsius. Default is metric. | String |
| time | Optional | If you include the time, then only the current hour will be returned in the response. | Integer. Unix format (seconds since 1970) in UTC. |

2.8 Documenting sample requests

The sample request clarifies how to use the endpoint

Although you’ve already listed the endpoint and parameters, you should also include one or more sample requests that show the endpoint integrated with parameters in an easy-to-understand way.

In the CityGrid Places API, the basic places endpoint is as follows:

https://api.citygridmedia.com/content/places/v2/search/where

However, there are 17 possible query string parameters you can use with this endpoint. As a result, the documentation includes several sample requests that show the parameters used with the endpoint:

CityGrid Places API example

These examples show several common combinations of the parameters. Adding multiple requests as samples makes sense when the parameters wouldn’t usually be used together. For example, there are few cases where you might actually include all 17 parameters in the same request, so any sample will be limited in what it can show.

This example shows “Italian restaurants in Chicago using placement ‘sec-5’”:

https://api.citygridmedia.com/content/places/v2/search/where?what=restaurant&where=chicago,IL&tag=11279&placement=sec-5&publisher=test

If responses vary a lot, consider including multiple responses with the requests. How many different requests and responses should you show? There’s no easy answer, but probably no more than a few. You decide what makes sense for your API.

In the CityGrid Places API, notice how the examples don’t include the sample responses on the same page but rather link to live examples. When you click the URL link, you execute the request in your browser and can see the response. (Here’s an example).

This approach is common and works well (for GET requests) when you can pull it off. Unfortunately, this approach makes it difficult to define the responses. (The CityGrid API documentation is detailed and does include information in later sections that describes the responses.)

API explorers provide interactivity with your own data

Many APIs have a feature called an API explorer. For example, you can see Foursquare’s API explorer here:

Foursquare's API Explorer

The API Explorer lets you insert your own values, your own API key, and other parameters into a request so you can see the responses directly in the Explorer. Seeing your own data can make the response feel more real and immediate.

However, if you don’t have the right data in your system, using your own API key may not show you the full response that’s possible.

Here’s another example from the New York Times API, which uses Lucybot (powered by Swagger) to handle the interactive API explorer features:

NYTimes API Explorer created through Lucybot and Swagger

This example compels users to try out the endpoints to get a better understanding of the information they return.

API Explorers can be dangerous in the hands of users

Although interactivity is powerful, API Explorers can be a dangerous addition to your site. What if a novice user trying out a DELETE method accidentally removes data? How do you later remove the test data added by POST or PUT methods?

It’s one thing to allow GET methods, but if you include other methods, users could inadvertently corrupt their data. IBM’s Watson APIs, which use the Swagger UI, removed the Try it out button for this reason.

In Sendgrid’s API, they include a warning message to users before testing out calls with their API Explorer:

SendGrid API Explorer warning message

As far as integrating API Explorer tooling goes, this is a task that should be relatively easy for developers. All the Explorer does is map values from a field to an API call and return the response to the same interface. In other words, the API plumbing is all there — you just need a little JavaScript and front-end skills to make it happen.

However, you don’t have to build your own tooling. Existing tools such as Swagger UI (which parses a Swagger spec file) and Readme.io (which allows you to enter the details manually) can integrate an API Explorer functionality directly into your documentation.

Document the sample request with the surfreport/{beachId} endpoint

Come back to the surfreport/{beachId} endpoint example. Create a sample request for it.

Here’s mine:

Sample request

curl --get --include 'https://simple-weather.p.mashape.com/surfreport/123?units=imperial&days=1&time=1433772000' -H 'X-Mashape-Key: APIKEY' -H 'Accept: application/json'
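For readers more comfortable in JavaScript, the same request can also be assembled programmatically. Here’s a sketch that builds the query string from the parameters (APIKEY is a placeholder; the host and parameters come from the cURL sample above):

```javascript
// Assemble the sample surfreport request URL from its parameters.
const base = "https://simple-weather.p.mashape.com/surfreport/123";
const params = { units: "imperial", days: 1, time: 1433772000 };

const query = Object.entries(params)
  .map(([key, value]) => `${key}=${value}`)
  .join("&");

const url = `${base}?${query}`;
console.log(url);
// https://simple-weather.p.mashape.com/surfreport/123?units=imperial&days=1&time=1433772000

// The key and accept headers are sent separately,
// e.g. { "X-Mashape-Key": "APIKEY", "Accept": "application/json" }.
```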

2.9 Documenting sample responses

Provide a sample response for the endpoint

It’s important to provide a sample response from the endpoint. This lets developers know if the endpoint contains the information they want, and how that information is labeled.

Here’s an example from Flattr’s API. In this case, the response actually includes the response header as well as the response body:

If the header information is important, include it. Otherwise, leave it out.

Define what the values mean in the endpoint response

Some APIs describe each item in the response, while others, perhaps because the responses are self-evident, omit the response documentation. In the Flattr example above, the response isn’t explained. Neither is the response explained in Twitter’s API.

If the labels in the response are abbreviated or non-intuitive, however, you definitely should document the responses. Developers sometimes abbreviate the responses to increase performance by reducing the amount of text sent.

Additionally, if you’re documenting some of the response items but not others, the doc will look inconsistent.

One of the problems with the Mashape Weather API is that it doesn’t describe the meaning of the responses. If the air quality index is 25, is that a good or bad value when compared to 65? What is the scale based on?

Does each city/country define its own index? Does a high number indicate a poor quality of air or a high quality? How does air quality differ from air pollution? These are the types of answers one would hope to learn in a description of the responses.

Strategies for documenting nested objects

Many times the response contains nested objects (objects within objects). Here Dropbox represents the nesting with a slash. For example, team/name provides the documentation for the name object within the team object.
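To illustrate how the slash notation relates to the JSON, here’s a small sketch that flattens a nested response object into Dropbox-style slash paths (the sample response object here is invented):

```javascript
// Flatten a nested response object into slash-delimited paths,
// e.g. { team: { name: ... } } produces "team/name".
function slashPaths(obj, prefix = "") {
  return Object.entries(obj).flatMap(([key, value]) => {
    const path = prefix ? `${prefix}/${key}` : key;
    return value && typeof value === "object" && !Array.isArray(value)
      ? slashPaths(value, path)   // recurse into nested objects
      : [path];                   // leaf value: record its path
  });
}

const response = { team: { name: "Acme", id: 7 }, email: "a@b.com" };
console.log(slashPaths(response));
// [ 'team/name', 'team/id', 'email' ]
```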

Other APIs will nest the response definitions to imitate the JSON structure. Here’s an example from bit.ly’s API:

Bitly response

The indented approach with different levels of bullets can be an eyesore, so I recommend avoiding it.

In Peter Gruenbaum’s API tech writing course on Udemy, he also represents the nested objects using tables:

Peter Gruenbaum course

Gruenbaum’s use of tables is mostly to reduce the emphasis on tools and place it more on the content.

eBay’s approach is a little more unique:

eBay example response

For example, MinimumAdvertisedPrice is nested inside DiscountPriceInfo, which is nested in Item, which is nested in ItemArray. (Note also that this response is in XML instead of JSON.)

It’s also interesting how much detail eBay includes for each item. Whereas the Twitter writers appear to omit descriptions, the eBay authors write small novels describing each item in the response.

Where to include the response

Some APIs collapse the response into a show/hide toggle to save space. Others put the response in a right column so you can see it while also looking at the endpoint description and parameters. Stripe’s API made this tri-column design famous:

Stripe's tri-column design

A lot of APIs have modeled their design after Stripe’s. (For example, see Slate or readme.io.)

To represent the child objects, Stripe uses an expandable section under the parent (see the “Hide Child Attributes” link in the screenshot above).

I’m not sure that the tri-column design is so usable. The original idea of the design was to allow you to see the response and description at the same time, but when the description is lengthy (such as is the case with source), it creates unevenness in the juxtaposition.

Many times in Stripe’s documentation, the descriptions aren’t in the same viewing area as the sample response, so what’s the point of arranging them side by side? It splits the viewer’s focus and causes more up and down scrolling.

Use realistic values in the response

The response should contain realistic values. If developers give you a sample response, make sure each of the possible items that can be included is shown. The values for each should be reasonable (not bogus test data that looks corny).

Format the JSON in a readable way

Use proper JSON formatting for the response. A tool such as JSON Formatter and Validator can make sure the spacing is correct.
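If you’re already working in JavaScript, you can get the same readable formatting programmatically with `JSON.stringify` and an indent size. A quick sketch:

```javascript
// Pretty-print a compact JSON string with two-space indentation.
const compact = '{"tide":5,"wind":15,"watertemp":80}';
const readable = JSON.stringify(JSON.parse(compact), null, 2);
console.log(readable);
/*
{
  "tide": 5,
  "wind": 15,
  "watertemp": 80
}
*/
```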

Add syntax highlighting

If you can add syntax highlighting as well, definitely do it. One good Python-based syntax highlighter is Pygments. This highlighter relies on “lexers” to indicate how the code should be highlighted. For example, some common lexers are java, json, html, xml, cpp, dotnet, and javascript. A non-Python-based equivalent to Pygments is Rouge.

Since your tool and platform dictate the syntax highlighting options available, look for syntax highlighting options within the system that you’re using. If you don’t have any syntax highlighters to integrate directly into your tool, you could add syntax highlighting manually for each code sample by pasting it into the syntaxhighlight.in highlighter.

Embedding dynamic responses

Sometimes responses are generated dynamically based on API calls to a test system. For example, look at the Rhapsody API and click an endpoint — it appears to be generated dynamically.

At one company I worked for, we had a test system we used to generate the responses. It was important that the test system had the right data to create good responses. You don’t want a bunch of null or missing items in the response.

However, once the test system generated the responses, those responses were imported into the documentation through a script.

Creative approaches in documenting lengthy JSON responses

In addition to using standard tables to document JSON responses, you can also implement some more creative approaches.

The scrolling-to-definitions approach

In my documentation theme for Jekyll, I tried an approach to documenting JSON that uses a jQuery plugin called ScrollTo. You can see it here:

Scrollto

When you click on an item in the JSON object, the right-pane scrolls to the item’s description. I like this approach, though I’ve not really seen it done in other API documentation sites.

One problem is that you end up with three scroll bars on one page, which isn’t the best design. Additionally, the descriptions in this demo are just paragraphs. Usually you structure the information with more detail (for example, data type, description, notes, etc.).

Also, this approach doesn’t allow for easy scanning. However, this scrolling view might be an alternative view to a more scannable table. That is, you could store the definitions in another file and then include the definitions in both this scrolling view and a master table list, allowing the user to choose the view he or she wants.

The side-by-side approach

In Stripe’s API documentation, the writers try to juxtapose the responses in a right side pane with the documentation in the main window.

Stripe

The idea is that you can see both the description and a sample response at the same time, and just scroll down.

However, the description doesn’t always line up with the sample response. (In some places, child attributes are collapsed to save space.) I’m not sure why some items (such as livemode) aren’t documented.

The no-need-for-descriptions approach

Some sites, like Twitter’s API docs, don’t seem to describe the items in the JSON response at all. Looking at the long response for the POST statuses/retweet endpoint in Twitter’s API docs, there isn’t even an attempt to describe what all the items mean. Maybe they figure most of the items in the response are self-evident?

Twitter

Theoretically, each item in the JSON response should be a clearly chosen word that represents what it means in an obvious way. However, to reduce the size and increase the speed of the response, developers often resort to shorter terms or use abbreviations. The shorter the term, the more it needs accompanying documentation.

In one endpoint I documented, the response included about 20 different two-letter abbreviations. I spent days tracking down what each abbreviation meant. Many developers didn’t even know what the abbreviations meant.
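One way to make such abbreviated responses workable is to pair the reference table with a lookup in code. Here’s a sketch (the two-letter keys below are invented for illustration, not the actual abbreviations from that endpoint):

```javascript
// Hypothetical two-letter keys and their documented meanings.
const keyNames = {
  wt: "watertemp",
  sh: "surfheight",
  td: "tide"
};

// Expand an abbreviated response into self-describing field names.
function expandKeys(response) {
  return Object.fromEntries(
    Object.entries(response).map(([key, value]) => [keyNames[key] || key, value])
  );
}

console.log(expandKeys({ wt: 80, sh: 5, td: -1 }));
// { watertemp: 80, surfheight: 5, tide: -1 }
```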

The RAML API Console approach

When you use RAML to document endpoints with JSON objects in the request body, the RAML API Console output looks something like this:

RAML

Here each body parameter is a named JSON object that has standard values such as description and type. While this looks a little cleaner initially, it’s also somewhat confusing. The actual request body object won’t contain description and type parameters like this, nor will it contain the schema, type, or properties keys.

The problem with RAML is that it tries to describe a JSON structure using a JSON structure itself, but the JSON structure of the description doesn’t match the JSON structure it describes, so it’s confusing.

Further, this approach doesn’t provide an example in context, which is what usually clarifies the data for the user.

Custom-styled tables

The MYOB Developer Center takes an interesting approach in documenting the JSON in their APIs. They list the JSON structure in a table-like way, with different levels of indentation. You can move your mouse over a field for a tooltip description, or you can click it to have a description expand below.

To the right of the JSON definitions is a code sample with real values. When you select a value, both the element in the table and the element in the code sample highlight at the same time.

MYOB JSON doc approach

If you have long JSON objects like this, a custom table with different classes applied to different levels might be the only truly usable solution. It facilitates scanning, and the popover + collapsible approach allows you to compress the table so you can jump to the part you’re interested in.

However, this approach requires more manual work from a documentation point of view, and there isn’t any interactivity to try out the endpoints. Still, if you have long JSON objects, it might be worth it.

Create a sample response in your surfreport/{beachId} endpoint

For your surfreport/{beachId} endpoint, create a section that shows the sample response. Look over the response to make sure it shows what it should.

Here’s what mine looks like:

Sample response

The following is a sample response from the surfreport/{beachId} endpoint:
{
    "surfreport": [
        {
            "beach": "Santa Cruz",
            "monday": {
                "1pm": {
                    "tide": 5,
                    "wind": 15,
                    "watertemp": 80,
                    "surfheight": 5,
                    "recommendation": "Go surfing!"
                },
                "2pm": {
                    "tide": -1,
                    "wind": 1,
                    "watertemp": 50,
                    "surfheight": 3,
                    "recommendation": "Surfing conditions are okay, not great."
                },
                "3pm": {
                    "tide": -1,
                    "wind": 10,
                    "watertemp": 65,
                    "surfheight": 1,
                    "recommendation": "Not a good day for surfing."
                }
            }
        }
    ]
}

The following table describes each item in the response.*

| Response item | Description |
|---------------|-------------|
| beach | The beach you selected based on the beach ID in the request. The beach name is the official name as described in the National Park Service Geodatabase. |
| {day} | The day of the week selected. A maximum of 3 days get returned in the response. |
| {time} | The time for the conditions. This item is only included if you include a time parameter in the request. |
| {day}/{time}/tide | The level of tide at the beach for a specific day and time. Tide is the distance inland that the water rises to, and can be a positive or negative number. When the tide is out, the number is negative. When the tide is in, the number is positive. The 0 point reflects the line when the tide is neither going in nor out but is in transition between the two states. |
| {day}/{time}/wind | The wind speed at the beach, measured in knots (nautical miles per hour). Wind affects the surf height and general wave conditions. Wind speeds of more than 15 knots make surf conditions undesirable, since the wind creates white caps and choppy waters. |
| {day}/{time}/watertemp | The temperature of the water, returned in Fahrenheit or Celsius depending upon the units you specify. Water temperatures below 70 F usually require you to wear a wetsuit. With temperatures below 60, you will need at least a 3mm wetsuit and preferably booties to stay warm. |
| {day}/{time}/surfheight | The height of the waves, returned in either feet or centimeters depending on the units you specify. A surf height of 3 feet is the minimum size needed for surfing. If the surf height exceeds 10 feet, it is not safe to surf. |
| {day}/{time}/recommendation | An overall recommendation based on a combination of the various factors (wind, watertemp, surfheight). Three responses are possible: (1) "Go surfing!", (2) "Surfing conditions are okay, not great", and (3) "Not a good day for surfing." Each of the three factors is scored with a maximum of 33.33 points, depending on the ideal for each element. The three elements are combined to form a percentage. 0% to 59% yields response 3, 60% to 80% yields response 2, and 81% to 100% yields response 1. |
*Because this is a fictitious endpoint, I'm making the descriptions up.
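The percentage-to-recommendation mapping in the description could be sketched as a small function (again, the endpoint and its scoring are fictitious, so this is illustrative only):

```javascript
// Map a combined percentage score to one of the three recommendations.
function recommendation(percent) {
  if (percent >= 81) return "Go surfing!";
  if (percent >= 60) return "Surfing conditions are okay, not great.";
  return "Not a good day for surfing.";
}

console.log(recommendation(90)); // Go surfing!
console.log(recommendation(65)); // Surfing conditions are okay, not great.
console.log(recommendation(30)); // Not a good day for surfing.
```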

3.0 Documenting code samples

REST APIs are language agnostic and interoperable

One aspect of REST APIs that facilitates widespread adoption is that they aren’t tied to a specific programming language. Developers can code their applications in any language, from Java to Ruby to JavaScript, Python, C#, Node JS, or something else. As long as they can make an HTTP web request in that language, they can use the API. The response from the web request will contain the data in either JSON or XML.

Deciding which languages to show code samples in

Because you can’t entirely know which language your end users will be developing in, it’s kind of fruitless to try to provide code samples in every language. Many APIs just show the format for submitting requests and a sample response, and the authors will assume that developers will know how to submit HTTP requests in their particular programming language.

However, some APIs do show simple code snippets in a variety of languages. Here’s an example from Evernote’s API documentation:

Evernote API code samples

And another from Twilio:

Twilio code samples

However, don’t feel so intimidated by this smorgasbord of code samples. Some API doc tools might actually automatically generate these code samples because the patterns for making REST requests in different programming languages follow a common template. This is why many APIs decide to provide one code sample (usually in cURL) and let the developer extrapolate the format in his or her own programming language.
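Auto-generation works because most HTTP request code reduces to a few slots: method, URL, and headers. Here’s a minimal sketch of a cURL-command generator along those lines (illustrative only, not how any particular tool implements it):

```javascript
// Fill a common template with the request's method, URL, and headers.
function toCurl({ method, url, headers }) {
  const headerFlags = Object.entries(headers)
    .map(([name, value]) => `-H '${name}: ${value}'`)
    .join(" ");
  return `curl -X ${method} ${headerFlags} '${url}'`;
}

const cmd = toCurl({
  method: "GET",
  url: "https://simple-weather.p.mashape.com/weatherdata?lat=37.35&lng=-121.95",
  headers: { "X-Mashape-Key": "APIKEY", "Accept": "application/json" }
});
console.log(cmd);
```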

Auto-generating code samples

You can auto-generate code samples from both Postman and Paw, if needed.

Paw has more than a dozen code generator extensions:

Once you install them, generating a code sample is a one-click operation:

Paw code generators

The Postman app has most of these code generators built in by default.

Generate a JavaScript code sample from Postman

To generate a JavaScript code snippet from Postman:

  1. Configure a weatherdata request in Postman (or select one you’ve saved).
  2. Below the Send button, click the Generate Code Snippets button.
  3. In the dialog box that appears, browse the available code samples using the drop-down menu. Note how your request data is implemented into each of the different code sample templates.
  4. Select the JavaScript > jQuery AJAX code sample:

  5. Copy the content by clicking the Copy button.

This is the JavaScript code that you can attach to an event on your page.

Implement the JavaScript code snippet

You usually don’t need to show the code sample in a working HTML file, but if you want to show users code they can make work in their own browsers, you can do so.

  1. Create a new HTML file with the basic HTML elements:

    <!DOCTYPE html>
    <head>
    <title>My sample page</title>
    </head>
    <body>
    	
    </body>
    </html>
    
  2. Insert the JavaScript code you copied inside some script tags inside the head:

    <!DOCTYPE html>
    <head>
    <script src="http://code.jquery.com/jquery-2.1.1.min.js"></script>
    <script>
    var settings = {
      "async": true,
      "crossDomain": true,
      "url": "https://simple-weather.p.mashape.com/weatherdata?lat=37.354108&lng=-121.955236",
      "method": "GET",
      "headers": {
        "accept": "application/json",
        "x-mashape-key": "APIKEY"
      }
    }
    	
    $.ajax(settings).done(function (response) {
      console.log(response);
    });
    </script>
    </head>
    <body>
    	
    </body>
    </html>
    
  3. The Mashape Weather API requires the dataType parameter, which Postman doesn’t automatically include. Add "dataType": "json", in the list of settings:

    <!DOCTYPE html>
    <head>
    <script src="http://code.jquery.com/jquery-2.1.1.min.js"></script>
    <script>
    var settings = {
      "async": true,
      "crossDomain": true,
      "dataType": "json",
      "url": "https://simple-weather.p.mashape.com/weatherdata?lat=37.354108&lng=-121.955236",
      "method": "GET",
      "headers": {
        "accept": "application/json",
        "x-mashape-key": "APIKEY"
      }
    }
    	
    $.ajax(settings).done(function (response) {
      console.log(response);
    });
    </script>
    </head>
    <body>
    hello
    </body>
    </html>
    
  4. This code uses the ajax method from jQuery. The parameters are defined in a variable called settings and then passed into the method. The ajax method will make the request and assign the response to the done method’s argument (response). The response object will be logged to the console.
  5. Open the file up in your Chrome browser.
  6. Open the JavaScript Developer Console by going to View > Developer > JavaScript Console. Refresh the page.

    You should see the object logged to the console.

    Object logged to the console

    Let’s say you wanted to pull out the sunrise time and append it to a tag on the page. You could do so like this:

    <!DOCTYPE html>
    <head>
    <script src="http://code.jquery.com/jquery-2.1.1.min.js"></script>
    <script>
    var settings = {
      "async": true,
      "crossDomain": true,
      "dataType": "json",
      "url": "https://simple-weather.p.mashape.com/weatherdata?lat=37.354108&lng=-121.955236",
      "method": "GET",
      "headers": {
        "accept": "application/json",
        "x-mashape-key": "APIKEY"
      }
    }

    $.ajax(settings).done(function (response) {
      console.log(response);
      // Adjust this property path to match the response structure
      // you see logged in the console.
      $("#sunrise").append(response.query.results.channel.astronomy.sunrise);
    });
    </script>
    </head>
    <body>
    <h2>Sunrise time</h2>
    <div id="sunrise">Sunrise time: </div>
    </body>
    </html>

    This code uses the append method from jQuery to assign a value from the response object to the sunrise ID tag on the page.

SDKs provide tooling for APIs

A lot of times, developers will create an SDK (software development kit) that accompanies a REST API. The SDK helps developers implement the API using specific tooling.

For example, when I worked at Badgeville, we had both a REST API and a JavaScript SDK. Because JavaScript was the target language developers were working in, Badgeville developed a JavaScript SDK to make it easier to work with REST using JavaScript. You could submit REST calls through the JavaScript SDK, passing a number of parameters relevant to web designers.

An SDK is any kind of tooling that makes it easier to work with your API. SDKs are usually specific to a particular language platform. Sometimes they are GUI tools. If you have an SDK, you’ll want to make more detailed code samples showing how to use the SDK.
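To illustrate the idea, an SDK typically hides the raw HTTP details behind friendlier functions. Here’s a hypothetical JavaScript sketch (the class and method names are invented; a real SDK would also execute the request and parse the response):

```javascript
// A hypothetical SDK wrapper: callers pass plain parameters, and the
// wrapper handles the endpoint path, query string, and auth header.
class WeatherClient {
  constructor(apiKey) {
    this.apiKey = apiKey;
    this.base = "https://simple-weather.p.mashape.com";
  }

  // Build the request details for a surf report lookup.
  surfReportRequest(beachId, { days = 3, units = "imperial" } = {}) {
    return {
      url: `${this.base}/surfreport/${beachId}?days=${days}&units=${units}`,
      headers: { "X-Mashape-Key": this.apiKey, "Accept": "application/json" }
    };
  }
}

const client = new WeatherClient("APIKEY");
console.log(client.surfReportRequest(123, { days: 1 }).url);
// https://simple-weather.p.mashape.com/surfreport/123?days=1&units=imperial
```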

General code samples

Although you could provide general code samples for every language with every call, it’s usually not done. Instead, there’s often a page that shows how to work with the code in various languages. For example, with the Wunderground Weather API, they have a page that shows general code samples:

Wunderground code samples

Although the Mashape Weather API doesn’t provide a code sample in the Weather API page, Mashape as a platform provides a general code sample on their Consume an API in JS page. The writers explain that you can consume the API with code on an HTML web page like this:

Consuming a REST API through JavaScript

You already worked with this code earlier, so it shouldn’t be new. It’s mostly the same code as the JavaScript snippet we just used, but here there’s an error function defined, and the header is set a bit differently.

Create a code sample for the surfreport endpoint

As a technical writer, add a code sample to the surfreport/{beachId} endpoint that you’re documenting. Use the same code as above, and add a short description about why the code is doing what it’s doing.

Here’s my approach:

Code example

The following code sample shows how to use the surfreport endpoint to get the surf conditions for a specific beach. In this case, the code displays the surf height on the page.

```html
<!DOCTYPE html>
<head>
<script src="http://code.jquery.com/jquery-2.1.1.min.js"></script>
<script>
var settings = {
  "async": true,
  "crossDomain": true,
  "dataType": "json",
  "url": "https://simple-weather.p.mashape.com/surfreport/25?days=1",
  "method": "GET",
  "headers": {
    "accept": "application/json",
    "x-mashape-key": "APIKEY"
  }
}

$.ajax(settings).done(function (response) {
  console.log(response);
  $("#surfheight").append(response.surfreport[0].monday["1pm"].surfheight);
});
</script>
</head>
<body>
<h2>Surf Height</h2>
<div id="surfheight"></div>
</body>
</html>
```

In this example, the ajax method from jQuery is used because it allows us to load a remote resource asynchronously.

In the request, you submit the authorization through the header rather than directly in the endpoint path. The endpoint limits the days returned to 1 in order to increase the download speed.

For demonstration purposes, the response is assigned to the response argument of the done method, and then written out to the surfheight tag on the page.

We're just getting the surf height, but there's a lot of other data you could choose to display.

You might not include a detailed code sample like this for just one endpoint, but including some kind of code sample is almost always helpful.

3.1 Putting it all together

Full working example

In this example, let’s pull together the various parts you’ve worked on to showcase the full example. I chose to format mine in Markdown syntax in a text editor.

Here’s my example.

surfreport/{beachId}

Returns information about surfing conditions at a specific beach ID, including the surf height, water temperature, wind, and tide. Also provides an overall recommendation about whether to go surfing.

{beachId} refers to the ID for the beach you want to look up. All Beach ID codes are available from our site.

Endpoint definition

surfreport/{beachId}

HTTP method

GET

Parameters

| Parameter | Description | Data Type |
|-----------|-------------|-----------|
| days | Optional. The number of days to include in the response. Default is 3. | Integer |
| units | Optional. Whether to return the values in imperial or metric measurements. Imperial uses feet, knots, and Fahrenheit; metric uses centimeters, kilometers per hour, and Celsius. | String |
| time | Optional. If you include the time, then only the current hour will be returned in the response. | Integer. Unix format (seconds since 1970) in UTC. |

Sample request

curl --get --include 'https://simple-weather.p.mashape.com/surfreport/123?units=imperial&days=1&time=1433772000' -H 'X-Mashape-Key: APIKEY' -H 'Accept: application/json'

Sample response

{
    "surfreport": [
        {
            "beach": "Santa Cruz",
            "monday": {
                "1pm": {
                    "tide": 5,
                    "wind": 15,
                    "watertemp": 80,
                    "surfheight": 5,
                    "recommendation": "Go surfing!"
                },
                "2pm": {
                    "tide": -1,
                    "wind": 1,
                    "watertemp": 50,
                    "surfheight": 3,
                    "recommendation": "Surfing conditions are okay, not great."
                },
                "3pm": {
                    "tide": -1,
                    "wind": 10,
                    "watertemp": 65,
                    "surfheight": 1,
                    "recommendation": "Not a good day for surfing."
                }
            }
        }
    ]
}

The following table describes each item in the response.

| Response item | Description |
|---------------|-------------|
| beach | The beach you selected based on the beach ID in the request. The beach name is the official name as described in the National Park Service Geodatabase. |
| {day} | The day of the week selected. A maximum of 3 days get returned in the response. |
| {time} | The time for the conditions. This item is only included if you include a time parameter in the request. |
| {day}/{time}/tide | The level of tide at the beach for a specific day and time. Tide is the distance inland that the water rises to, and can be a positive or negative number. When the tide is out, the number is negative. When the tide is in, the number is positive. The 0 point reflects the line when the tide is neither going in nor out but is in transition between the two states. |
| {day}/{time}/wind | The wind speed at the beach, measured in knots or kilometers per hour depending on the units you specify. Wind affects the surf height and general wave conditions. Wind speeds of more than 15 knots make surf conditions undesirable, since the wind creates white caps and choppy waters. |
| {day}/{time}/watertemp | The temperature of the water, returned in Fahrenheit or Celsius depending upon the units you specify. Water temperatures below 70 F usually require you to wear a wetsuit. With temperatures below 60, you will need at least a 3mm wetsuit and preferably booties to stay warm. |
| {day}/{time}/surfheight | The height of the waves, returned in either feet or centimeters depending on the units you specify. A surf height of 3 feet is the minimum size needed for surfing. If the surf height exceeds 10 feet, it is not safe to surf. |
| {day}/{time}/recommendation | An overall recommendation based on a combination of the various factors (wind, watertemp, surfheight). Three responses are possible: (1) "Go surfing!", (2) "Surfing conditions are okay, not great", and (3) "Not a good day for surfing." Each of the three factors is scored with a maximum of 33.33 points, depending on the ideal for each element. The three elements are combined to form a percentage. 0% to 59% yields response 3, 60% to 80% yields response 2, and 81% to 100% yields response 1. |

Error and status codes

The following table lists the status and error codes related to this request.

Status code Meaning
609 Invalid time parameters. All time parameters must be in Java epoch format.
4112 The beach ID was not found in the lookup.

Code example

The following code sample shows how to use the surfreport endpoint to get the surf height for a specific beach.

<!DOCTYPE html>
<html>
<head>
<script src="http://code.jquery.com/jquery-2.1.1.min.js"></script>
<script>
var settings = {
  "async": true,
  "crossDomain": true,
  "dataType": "json",
  "url": "https://simple-weather.p.mashape.com/surfreport/25?days=1&units=metric",
  "method": "GET",
  "headers": {
    "accept": "application/json",
    "x-mashape-key": "APIKEY"
  }
}

$.ajax(settings).done(function (response) {
  console.log(response);
  $("#surfheight").append(response.query.results.channel.surf.height);
});
</script>
</head>
<body>
<h2>Surf Height</h2>
<div id="surfheight"></div>
</body>
</html>

In this example, the ajax method from jQuery is used because it allows us to load a remote resource asynchronously.

In the request, you submit the authorization through the header rather than directly in the endpoint path. The endpoint limits the days returned to 1 in order to increase the download speed.

For demonstration purposes, the response is assigned to the response argument of the done method, and then written out to the surfheight tag on the page.

We're just getting the surf height, but there's a lot of other data you could choose to display.

Structure and templates

If you have a lot of endpoints to document, you’ll probably want to create templates that follow a common structure.

Additionally, if you want to add a lot of styling to each of the elements, you may want to push each of these elements into your template by way of a script. I’ll talk more about publishing in the upcoming section, Publishing API Documentation.

3.2 Creating the user guide

User guides versus reference documentation

Up until this point, we’ve been focusing on the endpoint (or reference) documentation aspect of user guides. The endpoint documentation is only one part (albeit a significant one) in API documentation. You also need to create a user guide and tutorials.

Whereas the endpoint documentation explains how to use each of the endpoints, you also need to explain how to use the API overall. There are other sections common to API documentation that you must also include. (These other sections are absent from the Mashape Weather API because it’s such a simple API.)

In Mulesoft’s API tooling, you can see some other sections common to API documentation:

Common sections in API documentation

Although this is the Yahoo Weather API page, all APIs using the Mulesoft platform have this same template.

Essential sections in a user guide

Some of the other sections to include in your documentation are the following:

Since the content of these sections varies a lot based on your API, it’s not practical to explore each of these sections using the same API like we did with the API endpoint reference documentation. But I’ll briefly touch upon some of these sections.

Sendgrid’s documentation has a good example of these other user-guide sections essential to API documentation. It does a good job showing how API documentation is more than just a collection of endpoints.

Also include the usual user guide stuff

Beyond the sections outlined above, you should include the usual stuff that you put in user guides. By the usual stuff, I mean you list out the common tasks you expect your users to do. What are their real business scenarios for which they’ll use your API?

Sure, there are innumerable ways that users can put together different endpoints for a variety of outcomes. And the permutations of parameters and responses also provide endless combinations. But no doubt there are some core tasks that most developers will use your API to do. For example, with the Twitter API, most people want to do the following:

Provide how-to’s for these tasks just like you would with any user guide. Seeing the tasks users can do with an API may be a little less familiar because you don’t have a GUI to click through. But the basic concept is the same: ask what users will want to do with this product, what they can do, and how they do it.

3.3 Writing the overview section

About the Overview section

The overview explains what you can do with the API (high-level business goals), and who the API is for. Too often with API documentation (perhaps because the content is often written by developers), the documentation gets quickly mired in technical details without ever explaining clearly what the API is used for. Don’t lose sight of the overall purpose and business goals of your API by getting lost in the endpoints.

Sample overview

The SendGrid API does a good job of providing an overview: Sendgrid overview

Common business scenarios

In the overview, list some common business scenarios in which the API might be useful. This will give people the context they need to evaluate whether the API is relevant to their needs.

Keep in mind that there are thousands of APIs. If people are browsing your API, their first and most pressing question is, what information does it return? Is this information relevant and useful to me?

Where to put the overview

Your overview should probably go on the homepage of the API, or be a link from the homepage. This is really where you define your audience as well, since the degree to which you explain what the API does depends on how you perceive the audience.

3.4 Writing the Getting Started section

About the Getting started section

Following the Overview section, you usually have a “Getting started” section that details the first steps users need to start using the API.

The “Getting started” section should explain the first steps users must take to start using the API. Some of these steps might involve the following:

Show the general pattern for requests

When you start listing out the endpoints for your resources, you just list the “end point” part of the URL. You don’t list the full HTTP URL that users will need to make the request. Listing out the full HTTP URL with each endpoint would be tedious and take up a lot of space.

You generally list the full HTTP URL in a Getting Started section that shows how to make a call to the API.

For example, you might explain that the domain root for making a request is this:

http://myapi.com/v2/

And when you combine the domain root with a sample endpoint (or resource root), it looks like this:

http://myapi.com/v2/homes/{id}

Once users know the domain root, they can easily add any endpoint to that domain root to construct a request.
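For instance, combining the domain root and endpoint shown above could be sketched like this (the function name is a hypothetical illustration):

```javascript
// Combining the domain root with an endpoint to form a full request
// URL (myapi.com and the homes endpoint are the hypothetical examples
// from above).
var domainRoot = "http://myapi.com/v2/";

function buildRequestUrl(endpoint, id) {
  // Substitute the {id} placeholder with a real value.
  return domainRoot + endpoint.replace("{id}", id);
}

console.log(buildRequestUrl("homes/{id}", "1234"));
// http://myapi.com/v2/homes/1234
```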

Sample Getting Started sections

Here’s the Getting Started section from the Alchemy API:

Alchemy API

Here’s a Getting Started tutorial from the HipChat API:

HipChat API Getting Started

Here’s a Getting Started section from the Aeris Weather API:

Aeris weather Getting started

Here’s another example of a Getting Started tutorial from Smugmug’s API:

Smugmug

I like how, right from the start, Smugmug tries to hold your hand to get you started. In this case, the tutorial for getting started is integrated directly in with the main documentation.

If you compare the various Getting Started sections, you’ll see that some are detailed and some are high-level and brief. In general, the more you can hold the developer’s hand, the better.

Hello World tutorials

In developer documentation, one common topic type is a Hello World tutorial. The Hello World tutorial holds a user’s hand from start to finish in producing the simplest possible output with the system. The simplest output might just be a message that says “Hello World.”

Although you don’t usually write Hello World messages with the API, the concept is the same. You want to show a user how to use your API to get the simplest and easiest result, so they get a sense of how it works and feel productive. That’s what the Getting Started section is all about.

You could take a common, basic use case for your API and show how to construct a request, as well as what response returns. If a developer can make that call successfully, he or she can probably be successful with the other calls too.

3.5 Documenting authentication and authorization

Authentication and authorization overview

Before users can make requests with your API, they’ll usually need to register for some kind of application key, or learn other ways to authenticate the requests.

APIs vary in the way they authenticate users. Some APIs just require you to include an API key in the request header, while other APIs require elaborate security due to the need to protect sensitive data, prove identity, and ensure the requests aren’t tampered with.

In this section, you’ll learn more about authentication and what you should focus on in documentation.

Defining terms

First, a brief definition of terms:

An API might authenticate you but not authorize you to make a certain request.

Consequences if an API lacks security

There are many different ways to enforce authentication and authorization with the API requests. Enforcing this authentication and authorization is vital. Consider the following scenarios if you didn’t have any kind of security with your API:

Clearly, API developers must think about ways to make APIs secure. There are quite a few different methods. I’ll explain a few of the most common ones here.

API keys

Most APIs require you to sign up for an API key in order to use the API. The API key is a long string that you usually include either in the request URL or in a header. The API key mainly functions as a way to identify the person making the API call (authenticating you to use the API). The API key is associated with a specific app that you register.

The company producing the API might use the API key for any of the following:

Sometimes APIs will give you both a public and private key. The public key is usually included in the request, while the private key is treated more like a password and used only in server-to-server communication.

In some API documentation, when you’re logged into the site, your API key automatically gets populated into the sample code and API Explorer. (Flickr’s API does this, for example.)

Basic Auth

One type of authorization is called Basic Auth. With this method, the sender places a username:password pair into the request header. The username and password are encoded with Base64, an encoding scheme that converts the string into characters drawn from a set of 64 so it can be transmitted safely in a header. Here’s an example of Basic Auth in a header:

Authorization: Basic bG9sOnNlY3VyZQ==

APIs that use Basic Auth will also use HTTPS, which means the message contents are encrypted in transit. (Without HTTPS, it would be easy for people to decode the username and password.)

When the API server receives the message, it decodes the Base64 string in the Authorization header, examines the username and password, and then decides whether to accept or reject the request.
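The header value shown above can be reproduced with a couple of lines of Node (a sketch; note that Base64 is an encoding, not encryption):

```javascript
// Reproduce the Basic Auth header value from the example above.
var credentials = "lol:secure"; // username:password

var encoded = Buffer.from(credentials).toString("base64");
console.log("Authorization: Basic " + encoded);
// Authorization: Basic bG9sOnNlY3VyZQ==

// Decoding is just as trivial, which is why Basic Auth needs HTTPS.
var decoded = Buffer.from(encoded, "base64").toString("utf8");
console.log(decoded); // lol:secure
```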

In Postman, you can configure Basic Authorization like this:

  1. Click the Authorization tab.
  2. Type the username and password on the right of the colon on each row.
  3. Click Update Request.

The Headers tab now contains a key-value pair that looks like this:

Authorization: Basic RnJlZDpteXBhc3N3b3Jk

Postman handles the Base64 encoding for you automatically when you enter a username and password with Basic Auth selected.

HMAC (Hash-based message authentication code)

HMAC stands for Hash-based message authentication code and is a stronger type of authentication.

With HMAC, both the sender and receiver know a secret key that no one else does. The sender creates a message based on some system properties (for example, the request timestamp plus account ID).

The message is then encoded by the secret key and passed through a secure hashing algorithm (SHA). (A hash is a scramble of a string based on an algorithm.) The resulting value, referred to as a signature, is placed in the request header.

When the receiver (the API server) receives the request, it takes the same system properties (the request timestamp plus account ID) and uses the secret key (which only the requester and API server know) and SHA to generate the same string.

If the string matches the signature in the request header, it accepts the request. If the strings don’t match, then the request is rejected.

Here’s a diagram depicting this workflow:

HMAC workflow

The important point is that the secret key (critical to reconstructing the hash) is known only to the sender and receiver. The secret key is not included in the request.

HMAC security is used when you want to ensure the request is both authentic and hasn’t been tampered with.

OAuth 2.0

One popular method for authenticating and authorizing users is to use OAuth 2.0. This approach relies upon an authentication server to communicate with the API server in order to grant access. You often see OAuth 2.0 when you’re using a site and are prompted to log in using a service like Twitter, Google, or Facebook.

OAuth login window

There are a few varieties of OAuth — namely, “one-legged OAuth” and “three-legged OAuth.” One-legged OAuth is used when you don’t have sensitive data to secure. This might be the case if you’re just retrieving general, read-only information (such as news articles).

In contrast, three-legged OAuth is used when you need to protect sensitive data. There are three groups interacting in this scenario:

Here’s the basic workflow of OAuth 2.0:

OAuth workflow

First the consumer application sends over an application key and secret to a login page at the authentication server. If authenticated, the authentication server responds to the user with an access token.

The access token is packaged into a query parameter in a response redirect (302) to the request. The redirect points the user’s request back to the resource server (the API server).

The user then makes a request to the resource server (API server). The access token gets added to the header of the API request with the word Bearer followed by the token string. The API server checks the access token in the user’s request and decides whether to authenticate the user.

Access tokens not only provide authentication for the requester, they also define the permissions of how the user can use the API. Additionally, access tokens usually expire after a period of time and require the user to log in again.
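A sketch of what attaching the access token to a request looks like (the endpoint URL and token value are hypothetical):

```javascript
// Attaching an OAuth 2.0 access token to an API request. The word
// Bearer, a space, and the token string go in the Authorization header.
var accessToken = "abc123"; // hypothetical token from the auth server

var settings = {
  url: "https://api.example.com/v3/resource", // hypothetical endpoint
  method: "GET",
  headers: {
    "Authorization": "Bearer " + accessToken
  }
};

console.log(settings.headers.Authorization); // Bearer abc123
```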

For more information about OAuth 2.0, see these resources:

What to document with authentication

In API documentation, you don’t need to explain how your authentication works in detail to outside users. In fact, not explaining the internal details of your authentication process is probably a best practice as it would make it harder for hackers to abuse the API.

However, you do need to explain some basic information such as:

If you have public and private keys, you should explain where each key should be used, and that private keys should not be shared.

If different license tiers provide different access to the API calls you can make, these licensing tiers should be explicit in your authorization section or elsewhere.

Where to list the API keys section in documentation

Since the API keys section is usually essential before developers can start using the API, it needs to appear at the beginning of your help.

Here’s a screenshot from SendGrid’s documentation on API keys:

SendGrid API Keys

Include information on rate limits

Whether in the authorization keys or another section, you should list any applicable rate limits to the API calls. Rate limits determine how frequently you can call a particular endpoint. Different tiers and licenses may have different capabilities or rate limits.

If your site has hundreds of thousands of visitors a day, and each page reload calls an API endpoint, you want to be sure the API can support that kind of traffic.

Here’s a great example of the rate limits section from the Github API:

Rate limiting section from Github

3.6 Documenting response and error codes

Response codes let you know the status of the request

Remember when we submitted the cURL call back in an earlier lesson? We submitted a cURL call and specified that we wanted to see the response headers (--include or -i):

  curl --get --include 'https://simple-weather.p.mashape.com/aqi?lat=37.354108&lng=-121.955236' \
  -H 'X-Mashape-Key: APIKEY' \
  -H 'Accept: text/plain'

The response, including the header, looked like this:

HTTP/1.1 200 OK
Content-Type: text/plain
Date: Mon, 08 Jun 2015 14:09:34 GMT
Server: Mashape/5.0.6
X-Powered-By: Express
Content-Length: 3
Connection: keep-alive

16

The first line, HTTP/1.1 200 OK, tells us the status of the request. (If you change the method, you’ll get back a different status code.)

With a GET request, it’s pretty easy to tell if the request is successful or not because you get back something in the response.

But suppose you’re making a POST, PUT, or DELETE call, where you’re changing data contained in the resource. How do you know if the request was successfully processed and received by the API?

HTTP response codes in the header of the response will indicate whether the operation was successful. The HTTP status codes are just abbreviations for longer messages.
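For instance, a quick sketch of pulling the numeric code out of a status line like the one shown earlier (the function name is hypothetical):

```javascript
// Parse the numeric status code out of a status line such as
// "HTTP/1.1 200 OK".
function parseStatusCode(statusLine) {
  return parseInt(statusLine.split(" ")[1], 10);
}

var code = parseStatusCode("HTTP/1.1 200 OK");
console.log(code >= 200 && code < 300); // true: the request succeeded
```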

Common status codes follow standard protocols

Most REST APIs follow a standard protocol for response headers. For example, 200 isn’t just an arbitrary code decided upon by the Mashape Weather API developers. 200 is a universally accepted code for a successful HTTP request.

You can see a list of common REST API status codes here and a general list of HTTP status codes here.

Where to list the HTTP response and error codes

Most APIs should have a general page listing response and error codes across the entire API. Twitter’s API has a good example of the possible status and error codes you will receive when making requests:

Twitter API status codes

In contrast, with the Flickr API, each “method” (endpoint) lists error codes:

Flickr API

Either location has merits, but my preference is a single centralized page for the entire API, because repeating the codes on every endpoint page adds a lot of duplicate content.

Where to get error codes

Error codes may not be readily apparent when you’re documenting your API. You will need to ask developers for a list of all the status codes. In particular, if developers have created special status codes for the API, highlight these in the documentation.

For example, if you exceed the rate limit for a specific call, the API might return a special status code. You would especially need to document this custom code. The list of error codes makes a good reference section in the “Troubleshooting” topic of your API documentation.

When endpoints have specific status codes

In the Flattr API, sometimes endpoints return particular status codes. For example, when you “Check if a thing exists,” the response includes HTTP/1.1 302 Found when the object is found. This is a standard HTTP response. If it’s not found, you see a 404 status code.

Not found status code

If the status code is specific to a particular endpoint, you can include it with that endpoint’s documentation.

Alternatively, you can have a general status and error codes page that lists all possible codes for all the endpoints. For example, with the Dropbox API, the writers list out the error codes related to the API:

In particular, you should look for codes that return when there is an error, since this information helps developers troubleshoot problems.

How to list status codes

You can present your list of status codes in a basic table, somewhat like this:

Status code Meaning
200 Successful request and response.
400 Malformed parameters or other bad request.

Status codes aren’t readily visible

Status codes are pretty subtle, but when a developer is working with an API, these codes may be the only “interface” the developer has. If you can control the messages the developer sees, it can be a huge win. All too often, status codes are uninformative, poorly written, and communicate little or no helpful information to the user to overcome the error.

Status/error codes can assist in troubleshooting

Status and error codes can be particularly helpful when it comes to troubleshooting. Therefore, you can think of these error codes as complementary to a section on troubleshooting.

Almost every set of documentation could benefit from a section on troubleshooting. Document what happens when users get off the happy path and start stumbling around in the dark forest.

A section on troubleshooting could list possible error messages users get when they do any of the following:

Where possible, document the exact text of the error in the documentation so that it easily surfaces in searches.

3.7 Documenting code samples and tutorials

About code samples

As you write documentation for developers, you’ll start to include more and more code samples. You might not include these more detailed code samples with the endpoints you document, but as you create tasks and more sophisticated workflows about how to use the API to accomplish a variety of tasks, you’ll end up leveraging different endpoints and showing how to address a variety of scenarios.

Here’s a sample code sample page from Mashape:

Mashape code sample

The following sections list some best practices around code samples.

Code samples are like candy for developers

Code samples play an important role in helping developers use an API. No matter how much you try to explain and narrate how, it’s only when you show something in action that developers truly get it.

You are not the audience

Recognize that, as a technical writer rather than a developer, you aren’t your audience. Developers aren’t newbies when it comes to code. But different developers have different specializations. Someone who is a database programmer will have a different skill set from a Java developer, who will have a different skill set from a JavaScript developer, and so on.

Developers often make the mistake of assuming that their developer audience has a skill set similar to their own, without recognizing different developer specializations. Developers will often say, “If the user doesn’t understand this code, he or she shouldn’t be using our API.”

It might be important to remind developers that users often have technical talent in different areas. For example, a user might be an expert in Java but only mildly familiar with JavaScript.

Focus on the why, not the what

In any code sample, you should focus your explanation on the why, not the what. Explain why you’re doing what you’re doing, not the detailed play-by-play of what’s going on.

Here’s an example of the difference:

Explain your company’s code, not general coding

Developers unfamiliar with common code not related to your company (for example, the .ajax() method from jQuery) should consult outside sources for tutorials about that code. You shouldn’t write your own version of another service’s documentation. Instead, focus on the parts of the code unique to your company. Let the developer rely on other sources for the rest (feel free to link to other sites).

Keep code samples simple

Code samples should be stripped down and as simple as possible. Providing code for an entire HTML page is probably unnecessary. But including it doesn’t hurt anyone, and for newbies it can help them see the big picture.

Avoid including a lot of styling or other details in the code that will potentially distract the audience from the main point. The more minimalist the code sample, the better.

When developers take the code and integrate it into a production environment, they will make a lot of changes to account for scaling, threading, and efficiency, and other production-level factors.

Add both code comments and before-and-after explanations

Your documentation regarding the code should mix code comments with some explanation either before or after the code sample. Brief code comments are set off with two forward slashes (//) in the code; longer comments are set off between slashes with asterisks, like this: /* .... */.

Comments within the code are usually short one-line notes that appear after every 5-10 lines of code. You can follow up this code with more robust explanations later.

This approach of adding brief comments within the code, followed by more robust explanations after the code, follows principles of progressive information disclosure, serving both advanced and novice users.
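For example, the two comment styles look like this in JavaScript (the URL and values are reused from the earlier surfreport sample):

```javascript
// A brief one-line comment notes what the next few lines do.
var endpoint = "https://simple-weather.p.mashape.com/surfreport/25";

/*
 A longer comment between slash-asterisk delimiters can span several
 lines and carry the "why": here we request only one day of data to
 keep the response small and the download fast.
*/
var query = "?days=1&units=metric";

console.log(endpoint + query);
```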

Make code samples copy-and-paste friendly

Many times developers will copy and paste code directly from the documentation into their application. Then they will usually tweak it a little bit for their specific parameters or methods.

Make sure that the code works. When I first used this Mashape code sample, dataType was actually spelled datatype. As a result, the code didn’t work (it returned the response as text, not JSON). It took me about 30 minutes of troubleshooting before I consulted the documentation for the ajax method and realized that it should be dataType with a capital T.

Ideally, test out all the code samples yourself. This allows you to spot errors, understand whether all the parameters are complete and valid, and more. Usually you just need a sample like this to get started, and then you can use the same pattern to plug in different endpoints and parameters. You don’t need to come up with new code like this every time.

Provide a sample in your target language

With REST APIs, developers can use pretty much any programming language to make the request. Should you show code samples that span across various languages?

Providing code samples is almost always a good thing, so if you have the bandwidth, follow the examples from Evernote and Twilio. However, providing just one code example in your audience’s target language is probably enough, if needed at all. You could also skip the code samples altogether, since the approach for submitting an endpoint follows a general pattern across languages.

Remember that each code sample you provide needs to be tested and maintained. When you make updates to your API, you’ll need to update each of the code samples across all the different languages.

Code samples are maintenance heavy with new releases

Getting into code samples leads us more toward user guide tasks than reference tasks. However, keep in mind that code samples are a bear to maintain. When your API pushes out a new release, will you check all the code samples to make sure the code doesn’t break with the new API? (QA calls this regression testing.)

What happens if new features require you to change your code examples? The more code examples you have, the more maintenance they require.

3.8 Creating the quick reference guide

About quick reference guides

For those power users who just want to glance at the content to understand it, provide a quick reference guide.

The quick reference guide serves a different function from the getting started guide. The getting started guide helps beginners get oriented; the quick reference guide helps advanced users quickly find details about endpoints and other API details.

Sample quick reference guide

Here’s a quick reference guide from Eventful’s API:

Eventful quick reference

An online quick reference guide can serve as a great entry point into the documentation. Here’s a quick reference from Shopify about using Liquid:

Shopify quick reference guide

Visual quick reference guides

You can also make a visual illustration showing the API endpoints and how they relate to one another. I once created a one-page endpoint diagram at Badgeville and found it so useful that I ended up taping it to my wall. Although I can’t include it here for privacy reasons, the diagram depicted the various endpoints and methods available to each of the resources (remember that one resource can have many endpoints).

3.9 Exploring more REST APIs

Let’s explore more APIs

Now it’s time to explore some other REST APIs and code for some specific scenarios. This experience will give you more exposure to different REST APIs, how they’re organized, the complexities and interdependency of endpoints, and more.

Attack the challenge first, then read the answer

There are several examples with different APIs. A challenge is listed for each exercise. First try to solve the challenge on your own. Then follow along in the sections below to see how I approached it.

In these examples, I usually printed the code to a web page to visualize the response. However, that part isn’t required in the challenge. (It mostly makes the exercise more fun for me.)

Shortcuts for API keys

Each API requires you to use an API key, token, or some other form of authentication. You can register for your own API keys, or you can use my keys here.

Swap out APIKEY in code samples

I never insert API keys in code samples for a few reasons:

When you see APIKEY in a code sample, remember to swap in an API key there. For example, if the API key was 123, you would delete APIKEY and use 123.

4.0 EventBrite example: Get Event information and display it on a page

The challenge

Use the EventBrite API to get the event title and description of this event.

About EventBrite

EventBrite is an event management tool, and you can interact with it through an API to pull out the event information you want. In this example, you’ll use the EventBrite API to print a description of an event to your page.

1. Get an anonymous OAuth token

To make any kind of request, you’ll need a token, which you can learn about in the Authentication section. Although it’s best to pass an OAuth token in the header, for simplicity purposes you can just get a token to make direct calls.

If you want to sign up for your own token, register your app here. Then copy the “Anonymous access OAuth token.”

2. Determine the resource and endpoint you need

The EventBrite API documentation is here: developer.eventbrite.com. Look through the endpoints available (listed under Endpoints in the sidebar). Which endpoint should we use?

To get event information, we’ll use the events object.

EventBrite Event

The events object allows us to “retrieve a paginated response of public event objects from across Eventbrite’s directory, regardless of which user owns the event.”

The events object has a lot of different endpoints available. However, the GET events/:id/ URL, described here, seems to provide what we need.

3. Construct the request

The quick start page shows this sample request format:

https://www.eventbriteapi.com/v3/users/me/?token=MYTOKEN

This is for a users object endpoint, though. For events, we would change it to this:

https://www.eventbriteapi.com/v3/events/:id/?token={your api key}

Find an ID of an event you want to use, such as this event:

Sample event

The event ID appears in the URL. Now populate the request with the ID of this event: https://www.eventbriteapi.com/v3/events/17920884849/?token={your api key}
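Putting the pieces together, the request URL could be constructed like this (a sketch; the function name is hypothetical, and you would swap in your own OAuth token for the MYTOKEN placeholder):

```javascript
// Construct the EventBrite request URL from the event ID and token.
function eventUrl(eventId, token) {
  return "https://www.eventbriteapi.com/v3/events/" + eventId + "/?token=" + token;
}

console.log(eventUrl("17920884849", "MYTOKEN"));
// https://www.eventbriteapi.com/v3/events/17920884849/?token=MYTOKEN
```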

4. Make a request and analyze the response

Now that you have an endpoint and API token, make the request.

The response from the endpoint is as follows:

{
    "name": {
        "text": "An Aggressive Approach to Concise Writing, with Joe Welinske",
        "html": "An Aggressive Approach to Concise Writing, with Joe Welinske"
    },
    "description": {
        "text": "Webinar Description \nWriting concisely is one of the fundamental skills central to any mobile user assistance. The minimal screen real estate can\u2019t support large amounts of text and graphics without extensive gesturing by the users. Using small font sizes just makes the information unreadable unless the user pinches and stretches the text.   Even outside of the mobile space, your ability to streamline your content improves the likelihood it will be effectively consumed by your target audience.   This session offers a number of examples and techniques for reducing the footprint of your prose while maintaining a quality message. The examples used are in the context of mobile UA but can be applied to any technical writing situation. \nAbout Joe Welinske Joe Welinske specializes in helping your software development effort through crafted communication. The best user experience features quality words and images in the user interface. The UX of a robust product is also enhanced through comprehensive user assistance. This includes Help, wizards, FAQs, videos and much more. For over twenty-five years, Joe has been providing training, contracting, and consulting services for the software industry. Joe recently published the book, Developing User Assistance for Mobile Apps. He also teaches courses for Bellevue College, the University of California, and the University of Washington. Joe is an Associate Fellow of STC. ",
        "html": "<P><SPAN STYLE=\"font-size: medium;\"><STRONG>Webinar Description<\/STRONG><\/SPAN><\/P>\r\n<P>Writing concisely is one of the fundamental skills central to any mobile user assistance. The minimal screen real estate can\u2019t support large amounts of text and graphics without extensive gesturing by the users. Using small font sizes just makes the information unreadable unless the user pinches and stretches the text.<BR> <BR>Even outside of the mobile space, your ability to streamline your content improves the likelihood it will be effectively consumed by your target audience.<BR> <BR>This session offers a number of examples and techniques for reducing the footprint of your prose while maintaining a quality message. The examples used are in the context of mobile UA but can be applied to any technical writing situation.<\/P>\r\n<P><SPAN STYLE=\"font-size: medium;\"><STRONG>About Joe Welinske<\/STRONG><\/SPAN><BR>Joe Welinske specializes in helping your software development effort through crafted communication. The best user experience features quality words and images in the user interface. The UX of a robust product is also enhanced through comprehensive user assistance. This includes Help, wizards, FAQs, videos and much more. For over twenty-five years, Joe has been providing training, contracting, and consulting services for the software industry. Joe recently published the book, Developing User Assistance for Mobile Apps. He also teaches courses for Bellevue College, the University of California, and the University of Washington. Joe is an Associate Fellow of STC.<\/P>"
    },
    "id": "17920884849",
    "url": "http://www.eventbrite.com/e/an-aggressive-approach-to-concise-writing-with-joe-welinske-tickets-17920884849",
    "start": {
        "timezone": "America/Los_Angeles",
        "local": "2015-09-24T12:00:00",
        "utc": "2015-09-24T19:00:00Z"
    },
    "end": {
        "timezone": "America/Los_Angeles",
        "local": "2015-09-24T13:00:00",
        "utc": "2015-09-24T20:00:00Z"
    },
    "created": "2015-07-27T15:14:49Z",
    "changed": "2015-07-27T16:19:40Z",
    "capacity": 24,
    "status": "live",
    "currency": "USD",
    "shareable": true,
    "online_event": false,
    "tx_time_limit": 480,
    "logo_id": null,
    "organizer_id": "7774592843",
    "venue_id": "11047889",
    "category_id": "102",
    "subcategory_id": "2004",
    "format_id": "2",
    "resource_uri": "https://www.eventbriteapi.com/v3/events/17920884849/",
    "logo": null
}

5. Pull out the information you need

The response contains a lot more information than we need. We just want to display the event’s title and description on our site. To do this, we use some simple jQuery code to pull out the information and append it to a tag on our web page:

<html>
<body>

<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>

<script>
  var settings = {
    "async": true,
    "crossDomain": true,
    "url": "https://www.eventbriteapi.com/v3/events/17920884849/?token=APIKEY",
    "method": "GET",
    "headers": {}
  }

  $.ajax(settings).done(function (data) {
    console.log(data);
    var content = "<h2>" + data.name.text + "</h2>" + data.description.html;
    $("#eventbrite").append(content);
  });
</script>

<div id="eventbrite"></div>

</body>
</html>

We covered this approach earlier in the course, so I won’t go into much detail here.

Here’s the result:

Eventbrite result

Code explanation

The result is as plain as can be in terms of style, but with API documentation, you want to keep code examples simple. In fact, you most likely don’t need a demo at all. Simply showing the payload returned in the browser is sufficient for a UI developer. However, for testing, it’s fun to make content actually appear on the page.

The ajax method from jQuery gets the payload from the endpoint URL and assigns it to the data argument. We log data to the console to inspect the payload more easily. To pull out the various properties of the object, we use dot notation: data.name.text gets the text property from the name object embedded inside the data object.

We then store the content we want in a variable (var content) and use jQuery’s append method to insert it into a specific tag (eventbrite) on the page.
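As a minimal sketch, you can try the same dot-notation access against a trimmed stand-in for the response (only a few fields are kept here, and the description is abbreviated):

```javascript
// Trimmed stand-in for the Eventbrite response shown earlier.
var data = {
  name: { text: "An Aggressive Approach to Concise Writing, with Joe Welinske" },
  description: { html: "<p>Webinar description…</p>" },
  status: "live"
};

// Dot notation walks down into the nested objects.
var content = "<h2>" + data.name.text + "</h2>" + data.description.html;

console.log(data.name.text);
console.log(content);
```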

4.1 Flickr example: Retrieve a Flickr gallery and display it on a web page

The challenge

Use the Flickr API to get photo images from this Flickr gallery.

Flickr Overview

In this Flickr API example, our goal requires calling several endpoints. You’ll see that just having an API reference that lists the endpoints and responses isn’t enough. Often one endpoint requires another endpoint’s response as an input, and so on.

In this example, we want to get all the photos from a specific Flickr gallery and display them on a web page. Here’s the gallery we want:

Flickr gallery

1. Get an API key to make requests

Before you can make a request with the Flickr API, you’ll need an API key, which you can read more about here. When you register an app, you’re given a key and secret.

2. Determine the resource and endpoint you need

From the list of Flickr’s API methods, the flickr.galleries.getPhotos endpoint, which is listed under the galleries resource, is the one that will get photos from a gallery.

Flickr getPhotos endpoint

One of the arguments we need for the getPhotos endpoint is the gallery ID. Before we can get the gallery ID, however, we have to use another endpoint to retrieve it. Rather unintuitively, the gallery ID is not the ID that appears in the URL of the gallery.

We use the flickr.urls.lookupGallery endpoint listed in the URLs resource section to get the gallery ID from a gallery URL:

Flickr lookupGallery endpoint

The gallery ID is 66911286-72157647277042064. We now have the arguments we need for the flickr.galleries.getPhotos endpoint.

3. Construct the request

We can make the request to get the list of photos for this specific gallery ID.

Flickr provides an API Explorer to simplify calls to the endpoints. If we go to the API Explorer for the galleries.getPhotos endpoint, we can plug in the gallery ID and see the response, as well as get the URL syntax for the endpoint.

Using the Flickr API Explorer to get the request syntax

Insert the gallery ID, select Do not sign call (we’re just testing here, so we don’t need extra security), and then click Call Method.

Here’s the result:

Flickr gallery response

The URL below the response shows the right syntax for using this method:

https://api.flickr.com/services/rest/?method=flickr.galleries.getPhotos&api_key=APIKEY&gallery_id=66911286-72157647277042064&format=json&nojsoncallback=1

If you submit the request directly in your browser using the given URL, you can see the same response, but in the browser rather than the API Explorer:

Flickr response in browser

4. Analyze the response

This response includes all the information needed to display photos on our site, but it’s not entirely intuitive how to construct the image source URLs from it.

Note that the information a user needs to actually achieve a goal isn’t explicit in the API reference documentation. The reference doc explains only what gets returned in the response, not how to use the response.

The Photo Source URLs page in the documentation explains it:

You can construct the source URL to a photo once you know its ID, server ID, farm ID and secret, as returned by many API methods. The URL takes the following format:
https://farm{farm-id}.staticflickr.com/{server-id}/{id}_{secret}.jpg
    or
https://farm{farm-id}.staticflickr.com/{server-id}/{id}_{secret}_[mstzb].jpg
    or
https://farm{farm-id}.staticflickr.com/{server-id}/{id}_{o-secret}_o.(jpg|gif|png)

Here’s what an item in the JSON response looks like:

"photos": {
  "page": 1,
  "pages": 1,
  "perpage": 500,
  "total": 15,
  "photo": [
    {
     "id": "8432423659",
     "owner": "37107167@N07",
     "secret": "dd1b834ec5",
     "server": "8187",
     "farm": 9,
     "title": "Color",
     "ispublic": 1,
     "isfriend": 0,
     "isfamily": 0,
     "is_primary": 1,
     "has_comment": 0
} ...

You access these fields through dot notation. It’s a good idea to log the whole object to the console so you can explore it more easily.
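As a minimal sketch, here’s how those four fields map onto the first URL format shown above, using the values from the sample item (the photoSrc helper name is just for illustration):

```javascript
// Build the photo source URL from one photo object in the response:
// https://farm{farm-id}.staticflickr.com/{server-id}/{id}_{secret}.jpg
function photoSrc(p) {
  return "https://farm" + p.farm + ".staticflickr.com/" +
         p.server + "/" + p.id + "_" + p.secret + ".jpg";
}

// Values taken from the sample JSON item above.
var photo = {
  id: "8432423659",
  secret: "dd1b834ec5",
  server: "8187",
  farm: 9
};

console.log(photoSrc(photo));
// https://farm9.staticflickr.com/8187/8432423659_dd1b834ec5.jpg
```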

5. Pull out the information you need

The following code uses jQuery to loop through each item in the response and insert the necessary components into an image tag to display each photo. In documentation, you usually don’t need to be this explicit about how to use a common library like jQuery; you assume the developer is already competent in the relevant programming language.

<html>
<style>
img {max-height:125px; margin:3px; border:1px solid #dedede;}
</style>
<body>

<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>


<script>

var settings = {
  "async": true,
  "crossDomain": true,
  "url": "https://api.flickr.com/services/rest/?method=flickr.galleries.getPhotos&api_key=APIKEY&gallery_id=66911286-72157647277042064&format=json&nojsoncallback=1",
  "method": "GET",
  "headers": {}
};

$.ajax(settings).done(function (data) {
  console.log(data);

  $("#galleryTitle").append(data.photos.photo[0].title + " Gallery");

  $.each(data.photos.photo, function (i, gp) {
    var farmId = gp.farm;
    var serverId = gp.server;
    var id = gp.id;
    var secret = gp.secret;

    console.log(farmId + ", " + serverId + ", " + id + ", " + secret);

    // https://farm{farm-id}.staticflickr.com/{server-id}/{id}_{secret}.jpg
    $("#flickr").append('<img src="https://farm' + farmId + '.staticflickr.com/' + serverId + '/' + id + '_' + secret + '.jpg"/>');
  });
});

</script>

<h2><div id="galleryTitle"></div></h2>
<div style="clear:both;"></div>
<div id="flickr"></div>


</body>
</html>

And the result looks like this:

Flickr gallery demo

Code explanation

The final line shows how you insert those variables into the HTML:

$("#flickr").append('<img src="https://farm' + farmId + '.staticflickr.com/' + serverId + '/' + id + '_' + secret + '.jpg"/>');

4.2 Klout example: Retrieve Klout influencers and influencees

The challenge

Use the Klout API to get your Klout score and a list of your influencers and influencees.

About Klout

Klout is a service that gauges your online influence (your “klout”) by measuring tweets, retweets, likes, and other signals from a variety of social networks using a sophisticated algorithm. In this tutorial, you’ll use the Klout API to retrieve the Klout score for a particular Twitter handle, and then a list of your influencers and influencees.

Klout has an interactive console, driven by Mashery I/O Docs, that lets you insert parameters and call each endpoint. The interactive console also contains brief descriptions of what each endpoint does.

Klout Interactive Console

1. Get an API key to make requests

To use the API, you have to register an “app,” which gives you an API key. Go to the My API Keys page to register your app and get your keys.

2. Make requests for the resources you need

The API is relatively simple and easy to browse.

To get your Klout score, you need to use the score endpoint. This endpoint requires you to pass your Klout ID.

Since you most likely don’t know your Klout ID, use the identity(twitter_screen_name) endpoint first.

Klout identity endpoint

Instead of using the API console, you can also submit the request via your browser by going to the request URL:

http://api.klout.com/v2/identity.json/twitter?screenName=tomjohnson&key={your api key}

My Klout ID is 1134760.

Now you can use the score endpoint to calculate your score.

Klout Score

My score is 54. Klout’s interactive console makes it easy to get responses for API calls, but you could equally submit the request URI in your browser.

http://api.klout.com/v2/user.json/1134760/score?key={your api key}

After submitting the request, here is what you would see:

{
    "score": 54.233149646009174,
    "scoreDelta": {
        "dayChange": -0.5767549117977069,
        "weekChange": -0.5311640476663939,
        "monthChange": -0.2578449396243201
    },
    "bucket": "50-59"
}

Now suppose you want to know whom you have influenced (your influencees) and who influences you (your influencers). After all, this is what Klout is all about: influence is measured by the actions you drive.

To get your influencers and influencees, you need to use the influence endpoint, passing in your Klout ID.

3. Analyze the response

And here’s the influence resource’s response:

{
    "myInfluencers": [{
        "entity": {
            "id": "441634251566461018",
            "payload": {
                "kloutId": "441634251566461018",
                "nick": "jekyllrb",
                "score": {
                    "score": 50.41206120210041,
                    "bucket": "50-59"
                },
                "scoreDeltas": {
                    "dayChange": -0.05927708546307997,
                    "weekChange": -0.739829931907181,
                    "monthChange": -0.7917151139830239
                }
            }
        }
    }, {
        "entity": {
            "id": "33214052017370475",
            "payload": {
                "kloutId": "33214052017370475",
                "nick": "Mrtnlrssn",
                "score": {
                    "score": 22.45014953758632,
                    "bucket": "20-29"
                },
                "scoreDeltas": {
                    "dayChange": -0.3481056157609004,
                    "weekChange": -2.132213372307284,
                    "monthChange": -2.315034722843535
                }
            }
        }
    }, {
        "entity": {
            "id": "177892199475207065",
            "payload": {
                "kloutId": "177892199475207065",
                "nick": "TCSpeakers",
                "score": {
                    "score": 28.23034124231384,
                    "bucket": "20-29"
                },
                "scoreDeltas": {
                    "dayChange": 0.00154327588529668,
                    "weekChange": -0.6416866188503434,
                    "monthChange": -4.226666088333872
                }
            }
        }
    }, {
        "entity": {
            "id": "91760850663150797",
            "payload": {
                "kloutId": "91760850663150797",
                "nick": "JohnFoderaro",
                "score": {
                    "score": 39.39045702175103,
                    "bucket": "30-39"
                },
                "scoreDeltas": {
                    "dayChange": -0.6092388403641991,
                    "weekChange": -0.699356032047298,
                    "monthChange": 5.34513233077341
                }
            }
        }
    }, {
        "entity": {
            "id": "1057244",
            "payload": {
                "kloutId": "1057244",
                "nick": "peterlalonde",
                "score": {
                    "score": 42.39625419500191,
                    "bucket": "40-49"
                },
                "scoreDeltas": {
                    "dayChange": -0.32068173129262334,
                    "weekChange": 0.14276611846587173,
                    "monthChange": -0.9354253686809457
                }
            }
        }
    }],
    "myInfluencees": [{
        "entity": {
            "id": "537311",
            "payload": {
                "kloutId": "537311",
                "nick": "techwritertoday",
                "score": {
                    "score": 49.99313854987996,
                    "bucket": "40-49"
                },
                "scoreDeltas": {
                    "dayChange": -0.10510042996928348,
                    "weekChange": -0.568647896457648,
                    "monthChange": 0.3425617785475197
                }
            }
        }
    }, {
        "entity": {
            "id": "91760850663150797",
            "payload": {
                "kloutId": "91760850663150797",
                "nick": "JohnFoderaro",
                "score": {
                    "score": 39.39045702175103,
                    "bucket": "30-39"
                },
                "scoreDeltas": {
                    "dayChange": -0.6092388403641991,
                    "weekChange": -0.699356032047298,
                    "monthChange": 5.34513233077341
                }
            }
        }
    }, {
        "entity": {
            "id": "33214052017370475",
            "payload": {
                "kloutId": "33214052017370475",
                "nick": "Mrtnlrssn",
                "score": {
                    "score": 22.45014953758632,
                    "bucket": "20-29"
                },
                "scoreDeltas": {
                    "dayChange": -0.3481056157609004,
                    "weekChange": -2.132213372307284,
                    "monthChange": -2.315034722843535
                }
            }
        }
    }, {
        "entity": {
            "id": "45598950992256021",
            "payload": {
                "kloutId": "45598950992256021",
                "nick": "DavidEgyes",
                "score": {
                    "score": 40.40572793362214,
                    "bucket": "40-49"
                },
                "scoreDeltas": {
                    "dayChange": 0.001934309078080787,
                    "weekChange": 2.233816485488269,
                    "monthChange": 1.4901401977594801
                }
            }
        }
    }, {
        "entity": {
            "id": "46724857496656136",
            "payload": {
                "kloutId": "46724857496656136",
                "nick": "fabi_ator",
                "score": {
                    "score": 30.32498605174672,
                    "bucket": "30-39"
                },
                "scoreDeltas": {
                    "dayChange": -0.005890177199574964,
                    "weekChange": -0.6859163242901047,
                    "monthChange": -5.293301673692355
                }
            }
        }
    }],
    "myInfluencersCount": 5,
    "myInfluenceesCount": 5
}

The response contains two arrays: one with 5 influencers and one with 5 influencees. (Remember, square brackets denote an array, and curly braces denote an object. Each array contains a list of objects.)
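As a minimal sketch of that structure, here’s how you’d reach a nickname inside a trimmed stand-in for the response (only a couple of fields are kept):

```javascript
// Trimmed stand-in for the influence response above.
// myInfluencers is an array (square brackets) of objects (curly braces).
var data = {
  myInfluencers: [
    { entity: { payload: { kloutId: "441634251566461018", nick: "jekyllrb" } } },
    { entity: { payload: { kloutId: "33214052017370475", nick: "Mrtnlrssn" } } }
  ]
};

// Square brackets index into the array; dots walk down the nested objects.
var firstNick = data.myInfluencers[0].entity.payload.nick;

console.log(firstNick);
// jekyllrb
```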

4. Pull out the information you need

Suppose you just want a short list of Twitter names with their links.

Using jQuery, you can iterate through the JSON payload and pull out the information that you want:

<html>
<body>

<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>

<script>
  var settings = {
    "async": true,
    "crossDomain": true,
    "url": "http://api.klout.com/v2/user.json/1134760/influence?key=APIKEY&callback=?",
    "method": "GET",
    "dataType": "jsonp",
    "headers": {}
  }

  $.ajax(settings).done(function (data) {
    console.log(data);
    $.each( data.myInfluencees, function( i, inf ) {
       $("#kloutinfluencees").append('<li><a href="http://twitter.com/'+inf.entity.payload.nick + '">' + inf.entity.payload.nick + '</a></li>');
     });
    $.each( data.myInfluencers, function( i, inf ) {
       $("#kloutinfluencers").append('<li><a href="http://twitter.com/'+inf.entity.payload.nick + '">' + inf.entity.payload.nick + '</a></li>');
     });
  });
</script>

<h2>My influencees (people I influence)</h2>
<ul id="kloutinfluencees"></ul>

<h2>My influencers (people who influence me)</h2>
<ul id="kloutinfluencers"></ul>

</body>
</html>

The result looks like this:

Klout result

Code explanation

The code uses the ajax method from jQuery to get a JSON payload for a specific URL. It assigns this payload to the data argument. The console.log(data) code just logs the payload to the console to make it easy to inspect.

The jQuery each method iterates through each item in the data.myInfluencees array. It names each item inf (you can choose whatever name you want) and gets the entity.payload.nick property (the Twitter nickname) for each one. It inserts this value into a link to the Twitter profile and appends the result to a specific tag on the page (#kloutinfluencees).

The same approach is used for the data.myInfluencers array, except that the data is appended to the #kloutinfluencers tag.

Note that the ajax settings include a new property: "dataType": "jsonp". If you omit this property, you’ll get an error message that says:

XMLHttpRequest cannot load http://api.klout.com/v2/user.json/876597/influence?key=APIKEY&callback=?. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'null' is therefore not allowed access.

When you submit requests to endpoints, you’re getting information from other domains and pulling it into your own domain. For security reasons, browsers block this action unless the resource server enables something called Cross-Origin Resource Sharing (CORS).

JSONP gets around CORS restrictions by wrapping the JSON in a script tag, which browsers don’t block. With JSONP, you can only use GET methods. You can read more about JSONP here.
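As a rough sketch of the mechanism (the callback name and values here are made up), this is what JSONP does under the hood. In a browser, jQuery appends a script tag whose src includes the callback name; the snippet below simulates just the execution step:

```javascript
var result = null;

// The client defines a callback and names it in the request URL
// (e.g. ...&callback=handleData).
function handleData(payload) {
  result = payload;
}

// The server responds with a script (a function call), not raw JSON:
var serverResponse = 'handleData({"score": 54.2, "bucket": "50-59"})';

// In a browser, loading the returned script executes the call.
// Here we simulate that execution directly:
new Function("handleData", serverResponse)(handleData);

console.log(result.score);
// 54.2
```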

Publishing API documentation

4.4 Next phase of course

You finished!

Congratulations, you finished the documenting REST APIs section of the course. You’ve learned the core of documenting REST APIs. We haven’t covered publishing tools or strategies yet. Instead, this part of the course has focused on creating content, which should always be the first consideration.

Summary of what you learned

During this part of the course, you learned the core tasks involved in documenting REST APIs. First, as a developer, you did the following:

Then you switched perspectives and approached APIs from a technical writer’s point of view. As a technical writer, you documented each of the main components of a REST API:

Although the technology landscape is broad, and there are many different technology platforms, languages, and code bases, most REST APIs have these same sections in common.

The next part of the course

Now that you’ve got the content down, the next step is to focus on publishing strategies for API documentation. This is the focus of the next part of the course.

4.5 Publishing API docs

Publishing context

In earlier parts of this workshop, we used a simple Weather API from Mashape to demonstrate how to use a REST API. Now we’ll explore various tools to publish information from the same Mashape Weather API.

Why focus on publishing API docs?

The first question about a focus on publishing API documentation might be, why? What makes publishing API documentation so different from other kinds of documentation that it merits its own section? How and why does the approach here need to differ from the approach for publishing regular documentation?

This is a valid question that I want to answer by telling a story.

My story: Turning from DITA to Jekyll

When I first transitioned into developer and API documentation, I had my mind set on using DITA, and I converted a large portion of my content over to it.

However, as I started looking more at API documentation sites, primarily those listed on Programmableweb.com, which maintains the largest directory of web APIs, I didn’t find many DITA-based API doc sites. In fact, it turns out that almost none of the API doc sites listed on ProgrammableWeb even use tech comm authoring tools.

Despite many advances with single sourcing, content re-use, conditional filtering, and other features in help authoring tools and content management systems, almost no API documentation sites on Programmableweb.com use them. Why is that? Why has the development community outright rejected tech comm tools (and their 50 years of evolution)?

Granted, there is the occasional HAT (help authoring tool) in use, as with Photobucket’s API, but they’re rare. And I’ve yet to find an API doc site that structures all content in DITA.

I asked a recruiter (who specializes in API documentation jobs) whether it was more advantageous to become adept with DITA or to learn a tool such as a static site generator, which is more common in this space.

My recruiter friend knows the market — especially the Silicon Valley market — extremely well. He urged me to look at the static site generator route. He said that many small companies, especially startups, are looking for writers who can publish documentation that looks beautiful, like the many modern web outputs on Programmableweb.

Five reasons why developer doc doesn’t use HATs

I think there are at least five reasons why developers reject tech comm authoring tools:

1. The HAT tooling doesn’t match developer workflows and environments

If devs are going to contribute to or write docs, the tools need to fit their own development tools and workflows. Their workflow is to treat doc as code: committing it to source control, building outputs from the server, and so on. They want to package the doc with their other code, check it into their repos, and include it in their builds.

Why are engineers writing in the first place, you might ask? Well, sometimes you really need engineers to contribute because the content is so technical, it’s beyond the domain of non-specialists. If you want engineers to get involved, you need to use developer tooling.

2. HATs won’t generate docs from source

Ideally, engineers want to add annotations in their code and then simply generate the doc from those annotations. They’ve been doing that with Java and C++ doc for the past 20 years. There are quite a few tools in the developer doc space that will auto-generate documentation from source code annotations, but it’s not something that HATs or GUI doc tools do.
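As a rough illustration, doc-from-source generators read structured comments in the code. The function below is hypothetical, but the JSDoc-style annotations on it are the kind of thing tools like JSDoc, Javadoc, and Doxygen turn into reference pages:

```javascript
/**
 * Returns the score for a user. (This function is hypothetical;
 * it exists only to show the annotation pattern that doc generators
 * parse into reference documentation.)
 *
 * @param {string} userId - The unique ID of the user.
 * @returns {number} The user's score, between 0 and 100.
 */
function getScore(userId) {
  // Stub implementation for illustration only.
  return 54;
}

console.log(getScore("1134760"));
// 54
```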

3. API doc follows a specific structure and pattern not modeled in any HAT

The reference documentation is pushed into well-defined templates, which list sections such as endpoint parameters, sample requests, sample responses, and so forth. Sometimes this template can be driven from the source code itself.

If you have a lot of endpoints, you need a system for pushing the content into these templates. There are many templating frameworks that handle these scenarios nicely. Other times you need custom scripts. Either way, not many HATs handle this kind of template-driven publishing scenario.
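As a rough sketch of template-driven publishing (the endpoint name and fields here are made up), the idea is to separate the endpoint data from the template that renders it:

```javascript
// Hypothetical endpoint data; in practice this might come from
// source code annotations or a spec file.
var endpoint = {
  name: "surfreport",
  method: "GET",
  parameters: [
    { name: "days", type: "integer", description: "Number of days to forecast" }
  ]
};

// Push the data through a simple string template, the kind of job
// usually handled by a templating framework (Liquid, Handlebars, etc.).
function renderReference(ep) {
  var rows = ep.parameters.map(function (p) {
    return "| " + p.name + " | " + p.type + " | " + p.description + " |";
  }).join("\n");
  return "## " + ep.method + " /" + ep.name + "\n\n" +
         "| Name | Type | Description |\n|---|---|---|\n" + rows;
}

console.log(renderReference(endpoint));
```

With many endpoints, you run every endpoint object through the same template, so the reference docs stay uniform.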

4. Many APIs have interactive API consoles, allowing you to try out the calls.

You won’t find an interactive API console in a HAT. By interactive API console, I mean you enter your own API key and values, and then run the call directly in the documentation. The response you see is from your own data in the API.

5. With APIs, the doc is the interface, so it has to be sexy enough to sell the product.

Most output from HATs looks dated, like a relic of the pre-2000 Internet era.

With API documentation, oftentimes the documentation is the product: there isn’t a separate GUI that clients interact with. The documentation is that interface, so it has to be sexy and awesome.

Most tripane help doesn’t make that cut. If the help looks old and frame-based, it doesn’t instill much confidence in the developers using it.

A new direction

Based on all of these factors, I decided to put DITA authoring on pause and try a new tool with my documentation: Jekyll. I’ve come to really love using Jekyll, working primarily in Markdown, leveraging Liquid for conditional logic, and committing updates to a repository. I realize that not everyone has the luxury of switching authoring tools, but since my company is somewhat small, and I’m one of three writers, I wasn’t burdened by a ton of legacy content or heavy processes, so I could innovate.

Jekyll is just one documentation publishing option in this space. I personally enjoy working with a more code-based approach, but there are many different options and routes to explore.

Now let’s explore various ways to publish API documentation. Most of these routes will take you away from traditional tech comm tools and publishing strategies.

4.6 List of about 100 APIs

A survey of API documentation sites

The following are about 100 openly accessible REST APIs that you can browse as a way to look at patterns and examples. Most of these REST API links are available from programmableweb.com. I initially started gathering a list of the APIs in Programmableweb’s “Most Popular” category, but then I just started adding links as I ran across interesting APIs.

  1. Google Places API
  2. Twitter API
  3. Flickr API
  4. Facebook API
  5. Youtube API
  6. eBay API
  7. Amazon API
  8. Twilio API
  9. Last.fm API
  10. Bing API
  11. Delicious API
  12. Google Cloud API
  13. Foursquare API
  14. Google Data API
  15. Dropbox API
  16. Splunk API
  17. Flattr API
  18. Docusign API
  19. Geonames
  20. Adsense API
  21. Box API
  22. Amazon API
  23. Linkedin API
  24. Instagram API
  25. Yahoo BOSS API
  26. Yahoo Social API
  27. Google Analytics API
  28. Yelp API
  29. Panaromio API
  30. Facebook API
  31. Eventful API
  32. Concur API
  33. Paypal API
  34. Bitly API
  35. Hostip API
  36. Reddit API
  37. Netvibes API
  38. Rhapsody API
  39. Donors Choose
  40. Sendgrid API
  41. Photobucket API
  42. Mailchimp
  43. Basecamp API
  44. Smugmug API
  45. NYTimes API
  46. USPS API
  47. NWS API
  48. Evernote API
  49. Stripe API
  50. Parse API
  51. Opensecrets API
  52. Compete API
  53. CNET API
  54. Amazon API
  55. Hoiio API
  56. Citygrid API
  57. Mapbox API
  58. Groupon API
  59. AddThis Menu API
  60. Yahoo Weather API
  61. SimplyHired API
  62. Crunchbase API
  63. Zendesk API
  64. nDango API
  65. Ninja Blocks API
  66. Pushover API
  67. Pusher API
  68. Pingdom API
  69. Daily Mile API
  70. Jive
  71. IBM Watson (uses Swagger)
  72. HipChat API
  73. Stores API
  74. Alchemy API
  75. Indivo API 1.0 and Indivo API 2.0 on readthedocs platform
  76. Socrata API
  77. Github API
  78. Mailgun API
  79. RiotGames API example of Swagger
  80. Basecamp API example of Github
  81. UserApp API
  82. Kimono Labs API
  83. SwiftType API
  84. Snipcart API
  85. VHX API
  86. Polldaddy API
  87. Gumroad API
  88. Formstack API
  89. Livefyre API
  90. Salesforce Chatter REST API
  91. Rotten Tomatoes API
  92. Github
  93. Context.io

Programmableweb.com: A directory of API doc sites on the open web

For a directory of API documentation sites on the open web, see Programmableweb.com, where you can browse more than 13,000 web APIs.

Programmable web directory

Note that Programmableweb lists only web APIs, meaning APIs accessible on the open web. It doesn’t list the countless internal, firewalled-off APIs that many companies provide at a cost to paying customers; there are thousands of these private APIs that most of us will never know about.

Look at 5 different APIs

Look at about 5 different APIs (choose any of those listed on the page). Look for one thing that the APIs have in common.

4.7 Breaking down API doc

API docs have tremendous variety

Perhaps no other genre of technical documentation has as much variety in its outputs as API documentation. Almost every API documentation site looks unique. REST APIs are as diverse as websites themselves, each with its own branding, navigation, terminology, and style.

No common tooling

Just as websites have a diversity of engines, platforms, and approaches, so too does API documentation. There is no common tooling like there is among GUI documentation. You can’t usually determine what platform is driving the outputs, and often the branding fits in seamlessly with the other company content.

Similar patterns and structures

Despite the wide variety of APIs, there is some commonality among them. The common ground is primarily in the endpoint documentation. But user guides have common themes as well.

Three kinds of API doc content

In a blog post by the writers at Parse, they break down API doc content into three main types:

Reference: This is the listing of all the functionality in excruciating detail. This includes all datatype and function specs. Your advanced developers will leave this open in a tab all day long.

Guides: This is somewhere between the reference and tutorials. It should be like your reference with prose that explains how to use your API.

Tutorials: These teach your users specific things that they can do with your API, and are usually tightly focused on just a few pieces of functionality. Bonus points if you include working sample code.

I think this division of content represents the API documentation genre well and serves as a good guide as you develop strategies for publishing API documentation.

In Mulesoft’s API platform, you can see many of these sections in their standard template for API documentation:

Common sections in API documentation

I won’t get into too much detail about each of these sections. In previous sections of this course, I explored the content development aspect of API documentation in depth. Here I’ll just list the salient points.

Guides

In the guides section of most API doc sites, you’ll find the following recurring themes:

Guide articles aren’t auto-generated, and they vary a lot from product to product. When technical writers are involved in API documentation, they’re almost always in charge of this content.

Tutorials

The second genre of content is tutorial articles. Whether it’s called Getting Started, Hello World Tutorial, First Steps, or something else, the point of the tutorial articles is to guide a new developer into creating something simple and immediate with the API.

By showing the developer how to create something from beginning to end, you provide an overall picture of the workflow and necessary steps to getting output with the API. Once a developer can pass the authorization keys correctly, construct a request, and get a response, he or she can start using practically any endpoint in the API.

Here’s a list of tutorials from Parse:

Parse tutorials

Some tutorials can even serve as reference implementations, showing full-scale code that shows how to implement something in a detailed way. This kind of code is highly appealing to developers because it usually helps clarify all the coding details.

Reference

Finally, reference documentation is probably the most characteristic part of API documentation. Reference documentation is highly structured and templatized. Reference documentation follows common patterns when it comes to describing endpoints.

In most endpoint documentation, you’ll see the following sections:

If engineers write anything, it’s usually the endpoint reference material.

Note that the endpoint documentation is never meant to be a starting point. The information is meant to be browsed, and a new developer will need some background information about authorization keys and more to use the endpoints.

Here’s a sample page showing endpoints from Instagram’s API:

Instagram endpoints

4.8 Tool decisions

Writers tools or developers tools

One of the first considerations to make when you think about API doc tooling is who will be doing the writing. If developers will be writing and contributing to the docs, you should integrate the writing tools and process into their toolchain and workflow.

On the other hand, if technical writers will create all the documentation, generating doc content from the source code may prove to be a complicated hassle with little benefit.

Integrating into engineering tools and workflows

Riona Macnamara is a technical writer at Google. Riona says that several years ago, internal documentation at Google was scattered across wikis, Google Sites, Google Docs, and other places.

In surveys at Google about the workplace, many employees said the inability to find accurate, up-to-date documentation was one of the biggest pain points.

Despite Google’s excellence in organizing the world’s information, organizing it internally proved to be difficult.

Riona says they helped solve the problem by integrating documentation into the engineer’s workflow. Rather than trying to force-fit writer tools onto engineers, they fit the documentation into developer tools.

Developers now write documentation in Markdown files in the same repository as their code. Some other engineers wrote a script to display these Markdown files in a browser directly from the code repository.

The method quickly gained traction, with hundreds of developer projects adopting the new method. Now instead of authoring documentation in a separate system (using writers’ tools), developers simply add the doc in the same repository as the code. This ensures that anyone who is using the code can also find the documentation.

Engineers can either read the documentation directly in the Markdown source, or they can read it displayed in a browser.

If you plan to have developers write, definitely check out Riona Macnamara’s Write the Docs 2015 presentation: Documentation, Disrupted: How two technical writers changed Google engineering culture.

Pros of having developers write

Having developers write or contribute to documentation provides several advantages.

Avoids documentation drift

By keeping documentation tightly coupled with code, you can avoid documentation drift. Documentation that exists separately from the code has a tendency to get out of sync with the actual code: as developers add new parameters, functions, and other details, the technical writers may not be aware of all these changes. In contrast, many in-source document generators drive the output directly from the parameters and classes in the code.

Continental drift

Allows the person who creates the code (and so best understands it) to also document it

Let’s face it – sometimes developer documentation is so complex, only developers can really write it. Unless technical writers have a background in engineering, all the details of programming, server configuration, or other technical platforms may be beyond their ability to document without a lot of research, interviewing, and careful note taking.

Sometimes developers prefer to just write the doc themselves, communicating from one developer to another. If a developer is the audience, and another developer is the writer, chances are they can cut through some of the guesswork about assumptions, prerequisite knowledge, and accuracy.

Cons of having developers write

Here are a few cons of having developers write documentation.

Problem 1: The curse of knowledge

A developer who creates the API may assume too much of the audience’s technical ability. As a result, the descriptions may not be helpful. Steven Pinker explains that the curse of knowledge is one reason why writing is often bad.

Steven Pinker on the source of bad writing

The more you know about a topic, the more assumptions and background information you have automatically firing away in your brain. You become blind to all of these assumptions, terms, and other details that new learners struggle with. You’re so familiar with a topic that you can’t see it as a new learner would. You don’t know the questions to ask, the things that don’t make sense.

Problem 2: Not task-focused

Documentation generated from source files is feature-based. It’s the equivalent of writing documentation tab by tab in a GUI interface. In contrast, task-based doc includes multiple calls and workflows in support of goals. Task-based documentation might make use of several different objects and methods across the reference doc.

If developers write the documentation in the source, most likely the result will be somewhat useless feature-based documentation. Here’s a text one of my colleagues, a project manager, sent me about the challenges he’s facing with documentation:

Text about dependencies and workflows

Capturing and describing the interdependencies, goals, workflows, and other tasks that cut across endpoints and setups is more of a task suited to a technical writer, not a developer who is simply defining a parameter in the source file of a class he or she created.

Problem 3: Output doesn’t integrate with user guide doc

Documentation generated from the source doesn’t integrate directly into a website except as a link from your other web pages. Like a HAT-produced webhelp file, the auto-doc is its own little website. Here’s an example from Netty’s documentation that shows how the auto-generated doc is separate from the rest of the site.

No integration

Having separate outputs creates a somewhat fragmented or disjointed documentation experience. Branding the outputs to create one seamlessly branded site may require a lot of cobbling together and overwriting of stylesheets.

Problem 4: Gives illusion of having complete doc

Finally, when documentation is generated from the source and written by developers, it can give the illusion of documentation. This is something Jacob Kaplan-Moss writes about. He says,

… auto-generated documentation is worse than useless: it lets maintainers fool themselves into thinking they have documentation, thus putting off actually writing good reference by hand. If you don’t have documentation just admit to it. Maybe a volunteer will offer to write some! But don’t lie and give me that auto-documentation crap.

Auto-generated just means the documentation is generated from code annotations in the source files. If you have an output like this, it may give the idea that you’ve got all the documentation you need. In reality, the reference documentation is just one part of API documentation. The user guides and tutorials – elements that can’t be auto-generated – are just as important as the reference documentation.

4.9 Github wikis

Github wikis as complementary repositories to code projects

When you create a repository on Github, the repository comes with a wiki that you can add pages to. This wiki can be really convenient if your source code is stored on Github.

Here’s an example of the Basecamp API, which is housed on Github.

Basecamp API

Markdown syntax

You write wiki pages in Markdown syntax. There’s a special flavor of Markdown syntax for Github wikis. The Github Flavored Markdown allows you to create tables, add classes to code blocks (for proper syntax highlighting), and more.

The wiki repository

The wiki you create is its own repository that you can clone locally. (If you look at the “Clone this wiki locally” link, you’ll see that it’s a separate repo from your main code repository.) You can work on files locally and then commit them to the wiki repository when you’re ready to publish.

You can also arrange the wiki pages into a sidebar.

Treating doc as code

One of the neat things about using a Github repository is that you treat the doc as code, editing it in a text editor, committing it to a repository, and packaging it up into the same area as the rest of the source code. Because the wiki is its own repository, technical writers can work on the documentation right alongside the project code without creating merge conflicts.

Working locally allows you to leverage other tools

Because you can work with the wiki files locally, you can leverage other tools (such as static site generators, or even DITA) to generate the Markdown files. This means you can handle all the re-use, conditional filtering, and other logic outside of the Github wiki. You can then output your content as Markdown files and then commit them to your Github repository.

Limitations with Github wikis

There are some limitations with Github wikis:

Create a Github wiki and publish content on a sample page

In this section, you will create a new Github repo and publish a sample file there.

  1. Go to Github.com and either sign in or create an account.
  2. After you’re signed in, click the + button in the upper-right corner and select New repository.

    Creating a new Github repository

  3. Give the repository a name and description, select Public, select Initialize this repository with a README, and then click Create repository.

    Creating a new Github repository

  4. Click the Wiki link at the top of the repository.
  5. Click Create first page.
  6. Insert your own sample documentation page, preferably using Markdown syntax. Or grab the sample Markdown page of a fake endpoint called surfreport here and insert it into the page.

  7. Click Save page.

Notice how Github automatically converts the Markdown syntax into HTML with some decent styling.

You could use this Github wiki in an entirely browser-based way for multiple people to collaborate and edit content. However, you can also take all the content offline and edit locally, and then reupload all your edits.

Save the Github repository locally

  1. While viewing your Github wiki in your browser, look for the clone repo link next to the HTTPS button. Copy the link by clicking the Copy to clipboard button.

    Cloning the wiki gives you a copy of the content on your local machine. Git is distributed version control software, so everyone has his or her own copy.

    More than just copying the files, though, when you clone a repo, you initialize Git in the cloned folder. Git starts tracking your edits to the files, providing version control. You can run “pull” commands to get updates of the online repository pulled down to your local copy. You can also commit your changes and then push them back up to the repository if you’ve been added as a collaborator on the project.

    The “Clone this wiki locally” link allows you to easily insert the URL into a git clone {url} command in your terminal.

    In contrast to “Clone this wiki locally,” the “Clone in Desktop” option launches the Github Desktop client and allows you to manage the repository and your modified files, commits, pushes, and pulls through the Github Desktop client.

  2. If you’re a Windows user, open the Git Shell, which should be a shortcut on your Desktop or should be available in your list of programs. (This shell gets installed when you installed Github Desktop.)
  3. In your terminal, either use the default directory or browse to a directory where you want to download your repository.
  4. Type the following, but replace the git URL with your own git URL that you copied earlier. The command should look like this:

     git clone https://github.com/tomjohnson1492/weatherapi.wiki.git
    
  5. Navigate to the directory (either using standard ways of browsing for files on your computer or via the terminal) to see the files you downloaded. If you can view invisible files on your machine, you will also see a .git folder.
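After the clone completes, the downloaded folder is a full Git repository. As a quick sketch (the folder name comes from the example URL above; yours will match your own wiki repo):

```shell
cd weatherapi.wiki   # folder created by git clone; the name comes from the repo URL
ls -a                # the hidden .git folder is what makes this a Git repository
git status           # confirm Git is tracking the folder
git pull             # later on, pull down updates committed to the online wiki
```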

Set up Git and Github authentication

  1. Set up Git on your computer.

    If you’re installing the Windows version of Github Desktop, you’ll get a special Github Shell shortcut after installation that you can use to work on the command line. You should use that special Github Shell rather than the usual command prompt.

    Note that when you use the Github Shell, you can also use more typical Unix commands, such as pwd for print working directory instead of dir (though both commands will work).

    On a Mac, however, you don’t need a special Git Shell. Open the Terminal in the usual way — go to Applications > Utilities > Terminal.

    You can check to see if you have Git already installed by opening a terminal and typing git --version.

  2. Configure Git with Github authorization. This will allow you to push changes without entering your username and password each time. See the following topics to set this up:

After you make these configurations, close and re-open your terminal.
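As a rough sketch, the one-time identity setup looks like this (the name and email are placeholders; the credential helper you choose varies by platform):

```shell
# Tell Git who you are; this information is recorded in each commit.
git config --global user.name "Your Name"
git config --global user.email "you@example.com"

# Optionally cache HTTPS credentials so you aren't prompted on every push.
git config --global credential.helper cache

# Confirm the settings took effect:
git config --global --list
```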

Make a change locally, commit it, and push the commit to the Github repository

  1. In a text editor, open the Markdown file you downloaded in the Github repository.
  2. Make a small change and save it.
  3. In your terminal, make sure you’re in the directory where you downloaded the github project. To look at the directories under your current path, type ls. Then use cd {directory name} to drill into the folder, or cd ../ to move up a level.

  4. Add the file to your staging area:

     git add --all
    

    Git doesn’t automatically track every file in the folder where the invisible Git folder has been initialized. Git tracks modifications only for the files that have been “added” to it. By selecting --all, you’re adding all the files in the folder to Git. You could also type a specific file name here instead of --all.

  5. See the changes set in your staging area:

     git status
    

    The staging area lists all the files that have been added to Git that you have modified in some way.

  6. Commit the changes:

     git commit -m "updated some content"
    

    When you commit the changes, you’re creating a snapshot of the files at a specific point in time for versioning.

    The command above is a shortcut for committing and typing a commit message in the same command. It’s much easier to commit updates this way.

    If you just type git commit, you’ll be prompted with another window to describe the change. On Windows, this new window will be a Notepad window. Describe the change on the top line, and then save and close the file.

    On a Mac, a new window doesn’t open. Instead, the vi editor opens in the terminal. (“vi” stands for visual, but it’s not a very visual editor.) To write your commit message in this mode, you need a few simple vi commands: press i to start inserting text, type your message, press Escape to exit insert mode, and then type :wq to write the changes and quit.

    You can also use other vi commands.

  7. Push the changes to your repository:

     git push
    
  8. Now verify that your changes took effect. Browse to your Github wiki repository and look to see the changes.

5.0 More about Markdown

Markdown overview

Markdown is a shorthand syntax for HTML. Instead of using ul and li tags, for example, you just use asterisks (*). Instead of using h2 tags, you use hashes (##). There’s a Markdown tag for most of the common HTML elements.

Here’s a sample to get a sense of the syntax:

## Heading 2

This is a bulleted list: 

* first item
* second item
* third item

This is a numbered list: 

1. Click this **button**.
2. Go to [this site](http://www.example.com).
3. See this image:

![My alt tag](myimagefile.png)

Markdown is meant to be kept simple, so there isn’t a comprehensive Markdown tag for each HTML tag. For example, if you need figure elements and figcaption elements, you’ll need to use HTML. What’s nice about Markdown is that if the Markdown syntax doesn’t provide the tag you need, you can just use HTML.
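For example, Markdown has no syntax for figure and figcaption elements, so you can drop the raw HTML directly into the Markdown file (the image file name here is just a placeholder):

```markdown
Here is regular *Markdown* text.

<figure>
  <img src="myimagefile.png" alt="My alt tag" />
  <figcaption>A caption, which Markdown has no shorthand for.</figcaption>
</figure>
```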

If a system accepts Markdown, it converts the Markdown into HTML so the browser can read it.

John Gruber, a blogger, first created Markdown (see his Markdown documentation here). Others adopted it, and many made modifications to include the syntax they needed. As a result, there are various “flavors” of Markdown, such as Github-flavored Markdown, Multimarkdown, and more.

In contrast, DITA is an XML architecture defined by a committee. There aren’t lots of different flavors and spinoffs of DITA based on how people customize the tags. There’s an official DITA spec agreed upon by the DITA OASIS committee. Markdown doesn’t have that kind of governing committee, so it evolves on its own as people choose to implement it.

Why developers love Markdown

Many of the development tools you use for publishing documentation rely on Markdown. For example, Github uses Markdown. If you upload files containing Markdown and use an md file extension, Github will render the Markdown into HTML.

Markdown appeals especially to developers for a number of reasons:

You can work in text-file formats using your favorite code editor

Although you can also work with DITA in a text editor, it’s a lot harder to read the code with all the XML tag syntax. For example, look at the tags required by DITA for a simple instruction about printing a page:

<task id="task_mhs_zjk_pp">
    <title>Printing a page</title>
    <taskbody>
<steps>
        <stepsection>To print a page:</stepsection>
    <step>
        <cmd>Go to <menucascade>
            <uicontrol>File</uicontrol><uicontrol>Print</uicontrol>
        </menucascade></cmd>
    </step>
    <step>
        <cmd>Click the <uicontrol>Print</uicontrol> button.</cmd>
    </step>
</steps>
    </taskbody>
</task>

Now compare the same syntax with Markdown:

## Print a page
1. Go to **File > Print**.
2. Click the **Print** button.

Although you can read the XML and get used to it, most people who write in XML use specialized XML editors (like OxygenXML) that make the raw text more readable. Or simply by working in XML all day, you get used to working with all the tags.

But if you send a developer an XML file, they probably won’t be familiar with all the tags, nor the nesting schema of the tags. For whatever reason, developers tend to be allergic to XML.

In contrast, Markdown is easy to read and work with in a text editor.

You can treat the Markdown files with the same workflow and routing as code

Another great thing about Markdown is that you can package up the Markdown files and run them through the same workflow as code. You can run diffs to see what changed, you can insert comments, and exert the same control as you do with regular code files. Working with Markdown files comes naturally to developers.

Markdown is easy to learn

Finally, developers usually don’t want to expend energy learning an XML documentation format. Most developers don’t want to spend a lot of time in documentation, so when they do review content, the simpler the format, the better. Markdown allows developers to quickly format content in HTML without investing much time in learning a tool or XML schema.

Drawbacks of Markdown

Markdown has a few drawbacks:

Markdown has different flavors

Whatever system you adopt, if it uses Markdown, make sure you understand what type of Markdown it supports. There are two components to Markdown. First is the processor that converts the Markdown into HTML. Some processors include Redcarpet, Kramdown, Pandoc, Discount, and more.

Beyond the processor, you need to know which type of Markdown the processor supports. Some examples include basic Markdown, Github-flavored Markdown, Multimarkdown, and others.

Markdown and complexity

If you need more complexity than Markdown offers, a lot of tools will leverage other templating languages, such as Liquid or Coffeescript. Many times these other processing languages will fill in the gaps for Markdown and provide you with the ability to create includes, conditional attributes, conditional text, and more.

Analyzing a Markdown sample

Take a look at the following Markdown content. Try to identify the various Markdown syntax used.

# surfreport/{beachId}

Returns information about surfing conditions at a specific beach ID, including the surf height, water temperature, wind, and tide. Also provides an overall recommendation about whether to go surfing. 

`{beachId}` refers to the ID for the beach you want to look up. All Beach ID codes are available from our site.

## Endpoint definition

`surfreport/{beachId}`

## HTTP method

<span class="label label-primary">GET</span> 

## Parameters

| Parameter | Description | Data Type | 
|-----------|-------------|-----------|
| days | *Optional*. The number of days to include in the response. Default is 3. | integer | 
| units | *Optional*. Whether to return the values in imperial or metric measurements. Imperial will use feet, knots, and fahrenheit. Metric will use centimeters, kilometers per hour, and celsius. | string |
| time | *Optional*. If you include the time, then only the current hour will be returned in the response.| integer. Unix format (ms since 1970) in UTC. |

## Sample request

```
curl --get --include 'https://simple-weather.p.mashape.com/surfreport/123?units=imperial&days=1&time=1433772000' 
  -H 'X-Mashape-Key: WOyzMuE8c9mshcofZaBke3kw7lMtp1HjVGAjsndqIPbU9n2eET' 
  -H 'Accept: application/json'
```

## Sample response

```json
{
    "surfreport": [
        {
            "beach": "Santa Cruz",
            "monday": {
                "1pm": {
                    "tide": 5,
                    "wind": 15,
                    "watertemp": 80,
                    "surfheight": 5,
                    "recommendation": "Go surfing!"
                },
                "2pm": {
                    "tide": -1,
                    "wind": 1,
                    "watertemp": 50,
                    "surfheight": 3,
                    "recommendation": "Surfing conditions are okay, not great."
                },
                "3pm": {
                    "tide": -1,
                    "wind": 10,
                    "watertemp": 65,
                    "surfheight": 1,
                    "recommendation": "Not a good day for surfing."
                }
            }
        }
    ]
}
```

The following table describes each item in the response.

|Response item | Description |
|----------|------------|
| **beach** | The beach you selected based on the beach ID in the request. The beach name is the official name as described in the National Park Service Geodatabase. | 
| **{day}** | The day of the week selected. A maximum of 3 days get returned in the response. | 
| **{time}** | The time for the conditions. This item is only included if you include a time parameter in the request. | 
| **{day}/{time}/tide** | The level of tide at the beach for a specific day and time. Tide is the distance inland that the water rises to, and can be a positive or negative number. When the tide is out, the number is negative. When the tide is in, the number is positive. The 0 point reflects the line when the tide is neither going in nor out but is in transition between the two states. | 
| **{day}/{time}/wind** | The wind speed at the beach, measured in knots or kilometers per hour depending on the units you specify. Wind affects the surf height and general wave conditions. Wind speeds of more than 15 knots make surf conditions undesirable, since the wind creates white caps and choppy waters. | 
| **{day}/{time}/watertemp** | The temperature of the water, returned in Fahrenheit or Celsius depending upon the units you specify. Water temperatures below 70 F usually require you to wear a wetsuit. With temperatures below 60, you will need at least a 3mm wetsuit and preferably booties to stay warm.|
| **{day}/{time}/surfheight** | The height of the waves, returned in either feet or centimeters depending on the units you specify. A surf height of 3 feet is the minimum size needed for surfing. If the surf height exceeds 10 feet, it is not safe to surf. | 
| **{day}/{time}/recommendation** | An overall recommendation based on a combination of the various factors (wind, watertemp, surfheight). Three responses are possible: (1) "Go surfing!", (2) "Surfing conditions are okay, not great", and (3) "Not a good day for surfing." Each of the three factors is scored with a maximum of 33.33 points, depending on the ideal for each element. The three elements are combined to form a percentage. 0% to 59% yields response 3, 60% to 80% yields response 2, and 81% to 100% yields response 1. | 

## Error and status codes

The following table lists the status and error codes related to this request.

| Status code | Meaning | 
|--------|----------|
| 200 | Successful response |
| 400 | Bad request -- one or more of the parameters was rejected. |
| 4112 | The beach ID was not found in the lookup. |

## Code example

The following code sample shows how to use the surfreport endpoint to get the surf conditions for a specific beach. In this case, the code shows just the overall recommendation about whether to go surfing.

```html
<!DOCTYPE html>
<html>
<head>
<script src="http://code.jquery.com/jquery-2.1.1.min.js"></script>
  <meta charset="utf-8">
  <title>API Weather Query</title>
  <script>

  function getSurfReport() { 

// use AJAX to avoid CORS restrictions in API calls.
 var output = $.ajax({
    url: 'https://simple-weather.p.mashape.com/surfreport/123?units=imperial&days=1&time=1433772000', 
    type: 'GET', 
    data: {}, 
    dataType: 'json',
    success: function(data) {
        //Here we pull out the recommendation from the JSON object.
        //To see the whole object, you can output it to your browser console using console.log(data);
        document.getElementById("output").innerHTML = data.surfreport[0].monday["2pm"].recommendation; 
        },
    error: function(err) { alert(err); },
    beforeSend: function(xhr) {
    xhr.setRequestHeader("X-Mashape-Authorization", "WOyzMuE8c9mshcofZaBke3kw7lMtp1HjVGAjsndqIPbU9n2eET"); // Enter here your Mashape key
    }
});
	
}
 
</script>
</head>
<body>
	
  <button onclick="getSurfReport()">See the surfing recommendation</button>
  <div id="output"></div>
  
</body>
</html>
```

In this example, the `ajax` method from jQuery is used because it allows cross-origin resource sharing (CORS) for the weather resources. In the request, you submit the authorization through the header rather than directly in the endpoint path. The endpoint limits the days returned to 1 in order to increase the download speed.

For simple demo purposes, the response is assigned to the `data` argument of the success method, and then written out to the `output` tag on the page. We're just getting the surfing recommendation, but there's a lot of other data you could choose to display.

Write some Markdown on a page

On your Github wiki page, edit the page and create the following:

5.1 Version control systems

About version control systems

Pretty much every IT shop uses some form of version control with their software code. Version control is how developers collaborate and manage their work.

If you’re working in API documentation, you’ll most likely need to plug into your developer’s version control system to get code. Or you may be creating branches and adding or editing documentation there.

Many developers are extremely familiar with version control, but typically these systems aren’t used much by technical writers because technical writers have traditionally worked with binary file formats, such as Microsoft Word and Adobe Framemaker. Binary file formats are readable only by computers, and version control systems do a poor job in managing binary files because you can’t easily see changes from one version to the next.

If you’re working in a text file format, you can integrate your doc authoring and workflow into a version control system. If you do, a whole new world will open up to you.

Different types of version control systems

There are different types of version control systems. A centralized version control system requires everyone to check out or synchronize files with a central repository when editing them. This setup isn’t so common anymore, since working with files on a central server tends to be slow.

More commonly, software shops use distributed version control systems. The most common systems are probably Git and Mercurial. Largely because Github provides free repositories on the web, Git is the most common version control system for web and open source projects, so we’ll be focusing on it more. However, these two systems share the same concepts and workflows.

Github
Github's distributed version control system allows for a phenomenon called "social coding."

Note that Github provides online repositories and tools for Git. However, Git and Github aren’t the same.

The idea of version control

When you install version control software such as Git and initialize a repository in a folder, an invisible folder (.git) gets added to that folder. This invisible folder handles the versioning of the content in the folder.

When you add files to Git and commit them, Git takes a snapshot of that file at that point in time. When you commit another change, Git creates another snapshot. If you decide to revert to an earlier version of the file, you just revert to the particular snapshot. This is the basic idea of versioning content.
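The snapshot idea can be sketched in a few commands. This is a throwaway example in a temp folder, assuming Git is installed; it commits two versions of a file and then restores the first snapshot:

```shell
# Create a throwaway repository.
cd "$(mktemp -d)"
git init -q .
git config user.name "You" && git config user.email "you@example.com"

# Snapshot 1:
echo "version 1" > page.md
git add page.md
git commit -q -m "first draft"

# Snapshot 2:
echo "version 2" > page.md
git add page.md
git commit -q -m "second draft"

# Restore page.md from the previous snapshot (one commit back):
git checkout HEAD~1 -- page.md
cat page.md   # prints "version 1"
```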

Basic workflow with version control

There are many excellent tutorials on version control on the web, so I’ll defer to those tutorials for more details. In short, Git provides several stages for your files. Here’s the general workflow:

  1. You must first add any files that you want Git to track. Just because files are in the initialized Git repository doesn’t mean that Git is actually tracking and versioning their changes. Only when you officially “add” a file to your Git project does Git start tracking changes to it.
  2. When you stage modified files that Git is tracking (by adding them again), they move to the “staging” area, ready to be committed.
  3. When you “commit” your files, Git creates a snapshot of the files at that point in time. You can always revert to this snapshot.
  4. After you commit your changes, you can “push” them to the remote repository (commonly called the “origin”). Once you push your changes, your own working copy and the remote master branch are back in sync.
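
From the command line, the workflow above might look like the following sketch. The file name, commit message, and remote are examples, not from a real project:

```shell
# A minimal sketch of the add / stage / commit / push cycle, run in a throwaway repo.
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.email "you@example.com" && git config user.name "You"
echo "Hello" > readme.txt
git add readme.txt                # tell Git to start tracking the file (stages it)
git commit -m "first snapshot"    # take a snapshot of the staged files
git log --oneline                 # lists the commit history
# git push origin master          # with a remote configured, this would sync your commits
```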

Branching

Git’s default branch is called “master.” When collaborating with others on the same project, people usually branch off the master, make edits in the branch, and then merge the branch back into the master.

If you’re editing doc annotations in code files, you’ll probably follow this same workflow — making edits in a special doc branch. When you’re done, you’ll create a pull request to have developers merge the doc branch back into the master.
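
In command-line terms, that doc-branch workflow might look like the following sketch. The branch and file names are examples; on Github, the merge step would happen through a pull request:

```shell
# Sketch: make edits in a doc branch, then merge it back into master (throwaway repo).
dir=$(mktemp -d) && cd "$dir"
git init -q
git symbolic-ref HEAD refs/heads/master   # make sure the initial branch is named "master"
git config user.email "you@example.com" && git config user.name "You"
echo "original" > doc.md && git add doc.md && git commit -qm "initial"
git checkout -q -b tom-edits      # create the doc branch and switch to it
echo "edited" > doc.md
git commit -qam "doc edits"       # the edits live only on this branch
git checkout -q master            # master still shows the original content
git merge -q tom-edits            # what merging a pull request does behind the scenes
cat doc.md                        # prints "edited"
```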

GUI version control clients

Although most developers use the command line when working with version control systems, there are many GUI clients available that may simplify the whole process. GUI clients might be especially helpful when you’re trying to see what has changed in a file, since the GUI can better highlight and indicate the changes taking place.

You can also see changes in a text file format, but the <<<<<<< and >>>>>>> conflict markers aren’t always that intuitive.

Follow a typical workflow with a Github project using Github Desktop

In this tutorial, you’ll use Github Desktop to manage the workflow. First download and install Github Desktop. You’ll also need a Github account.

  1. Go to Github.com and create a new repository from the Repositories tab.
  2. View your repository, and then click the Clone in Desktop button.

    Clone in Desktop

  3. Select the folder where you want to clone the repository (such as under your username), and then click Clone.

    Github Desktop should launch (you’ll need to allow the application to launch, most likely) and add the newly created repository.

    Repo added to Github Desktop

  4. Go into the repository (using your Finder or browsing folders normally) and add a simple text file with some content.
  5. Go back to Github Desktop and click the Uncommitted Changes link at the top.

    Uncommitted changes

    You’ll see the new file you added in the list of uncommitted changes.

  6. Type a commit message.
  7. Click Commit to Master.
  8. Click the History tab at the top. You can see the most recent commit there. After you sync, if you view your repository online, you’ll see that the change you made has been pushed to the master.

Create a branch

Now let’s create a branch, make some changes, and then merge the branch into the master.

  1. Click the Add a branch button and create a new branch. Call it something like “tom-edits,” but use your own name.

    Adding a branch

    When you create the branch, you’ll see the branch drop-down menu indicate that you’re working in that branch. A branch is a copy of the master that exists on a separate line. You can see that the visual line in Github Desktop branches off to the side when you create a branch.

    Working in a branch

  2. Browse to the file you created earlier and make a change to it, such as adding a new line with some text.
  3. Return to Github Desktop and notice that on the Uncommitted Changes tab, you have new modified files.

    New files modified

    The right pane shows the deleted lines in red and new lines in green. This helps you see what changed.

    However, if you switch to the master branch, you won’t see the modified files. That’s because you’re working in a branch, so your changes are associated with that branch. Switching branches in Github Desktop changes your project’s working directory to reflect the selected branch.

    Switch back to your tom-edits branch.

Merge the branch through a pull request

  1. Now let’s merge the tom-edits branch into the master. Click the Pull Request button in the upper-right corner.

    You’re shown that you’re merging the tom-edits branch into the master.

  2. Describe the pull request, and then click Send Pull Request.

  3. Go to the link shown to evaluate the pull request online. In the browser interface, you can click the Files changed tab to see which files have changed in tom-edits that you are merging into the master.

  4. Click Merge Pull Request.

    Merging a pull request

    The branch gets merged into the master. You can delete the tom-edits branch now if you want.

  5. In your Github Desktop client, select the master branch, and then click the Sync button.

    The Sync button pulls the latest changes from the master and updates your working copy to it. You will see the pull request merged. It shows you the lines that have been added in the files.

    Merged pull request

Managing conflicts

Suppose you make a change on your local copy of a file in the repository, and someone else changes the same file in conflicting ways and commits it to the repository first. What happens?

When you sync with the repository, you’ll see a message prompting you to either discard your changes or to commit them before syncing.

“Syncing would overwrite your uncommitted changes. Please commit or discard your changes and try again.”

If you decide to commit your changes, you’ll see a message that says,

“Please resolve all conflicted files, commit, and then try syncing again.”

From the command line, if you run git status, it will tell you which files have conflicts. If you open the file with the conflicts, you’ll see markers showing you the conflicts. It will look something like this:

<<<<<<< HEAD
I love carrots.
=======
I love bananas.
>>>>>>> origin/master

In this case, HEAD refers to your local change. Here you changed the line to “I love carrots.” origin/master shows the change someone else made and already committed to the master: “I love bananas.”

Fix all the conflicts by adjusting the content between the content markers and then deleting the content markers.

Now you need to re-add the file to git again. To add a specific file:

git add home.md

To re-add all files:

git add -A

Now make a commit and push it to the origin’s master branch:

git commit -m "fixed conflicts"
git push origin master
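
Putting the whole conflict sequence together, here’s a sketch you can run in a throwaway repository. The file content and branch names are examples:

```shell
# Sketch: create a merge conflict on purpose, then resolve it.
dir=$(mktemp -d) && cd "$dir"
git init -q
git symbolic-ref HEAD refs/heads/master
git config user.email "you@example.com" && git config user.name "You"
echo "I love fruit." > home.md && git add home.md && git commit -qm "initial"
git checkout -q -b bananas
echo "I love bananas." > home.md && git commit -qam "bananas edit"
git checkout -q master
echo "I love carrots." > home.md && git commit -qam "carrots edit"
git merge bananas || true           # the merge fails, writing conflict markers into home.md
git status                          # shows home.md as unmerged ("both modified")
echo "I love carrots and bananas." > home.md   # fix the content and remove the markers
git add home.md                     # re-add the file to mark the conflict resolved
git commit -qm "fixed conflicts"
```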


5.2 Pull request workflows through Github in the browser

Managing reviews through Github

In the previous step, you used Github Desktop to manage the workflow of committing files and creating requests. In this tutorial, you’ll do a similar thing but using the browser-based interface that Github provides rather than using a terminal or Github Desktop.

Make edits in a separate branch

By default, your new repository has one branch called “master.” Usually when you’re making changes or edits, you create a new branch and make all the changes there. When you’re finished, the repo owner merges the edits from the branch into the master through a “pull request.”

To make edits in a separate branch:

  1. Pretend you’re an SME (subject matter expert) reviewer. Go to the Github repo and create a new branch by selecting the branch drop-down menu and typing a new branch name, such as “sme review.”

    Creating a new branch

    When you create a new branch, the content from the master is copied over into the new branch. Creating a branch is like doing a “Save As” with an existing document.

  2. Click the README.txt file, and then click the Edit this file button (pencil icon) to edit the file.

    Making an edit

  3. Make some changes to the content, and then scroll down and click Commit Changes. Explain the reason for the changes and commit the changes to your sme review branch, and then click Commit Changes.

    Reviewers could continue making edits this way until they have finished reviewing all of the documentation. All of the changes are made on a branch, not the master.

Create a pull request

Now that the review process is complete, it’s time to merge the branch into the master. You merge the branch into the master through a pull request. Any “collaborator” on the team with write access can initiate and complete the pull request. You can add collaborators through Settings.

To create a pull request:

  1. View the repository and click the Pull requests button on the right.
  2. Click the New pull request button.

    New Pull Request

  3. Select the branch (“sme review”) that you want to compare against the master.

    Compare to

    When you compare the branch against the master, you can see a list of all the changes. You can view the changes through two viewing modes: Unified or Split. Unified shows the edits together in the same content area, whereas split shows the two files side by side.

  4. Click Create pull request.
  5. Describe the pull request, and then click Create pull request.

Process the pull request

Now pretend you are the project owner, and you see that you received a new pull request. You want to process the pull request and merge the sme review branch into the master.

  1. Click the Pull requests button to see the pending pull requests.
  2. Click the pull request and view the changes by clicking the Files changed tab.

    Note also that if the pull request is made against an older version of the master, such that the master’s original content no longer exists or has moved elsewhere, the merge will be more difficult to make.

  3. Click the Conversation tab, and then click the Merge pull request button.
  4. Click Confirm merge.

    The sme review branch gets merged into the master. Now the master and the sme review branch are the same.

  5. Click the Delete branch button to delete the sme review branch.

    If you don’t want to delete the branch here, you can always remove old branches by clicking the branches link while viewing your Github repository, and then click the Delete (trash can) button next to the branch.

    Deleting old branches

    If you look at your list of branches, you’ll see that the deleted branch no longer appears.

Add collaborators to your project

You need to add collaborators to your Github project so they can commit edits to a branch. If someone isn’t a collaborator and they want to make edits, they will receive an error.

If people don’t have write access, they can fork the project instead of making edits on a branch in the same project. Forking clones the entire repository, though, rather than creating a branch within the same repository. You can merge changes from a forked repository through a pull request, but this scenario is probably less common for technical writers working with developers on the same projects.

To add collaborators to your Github project:

  1. While viewing your Github repository, click the Settings button (gear icon) on the lower-right.
  2. Click the Collaborators tab on the left.
  3. Type the Github usernames of those you want to have access in the Collaborator area.
  4. Click Add Collaborator.

    Adding collaborators

5.3 REST API specification formats

REST API specifications

In an earlier lesson, I mentioned that REST APIs follow an architectural style, not a specific standard. However, several REST specifications have been formulated to provide better documentation, tooling, and structure for REST APIs. The three most popular REST API specifications are Swagger (OpenAPI), RAML, and API Blueprint.

Should you use an automated solution?

In a survey on API documentation, I asked people if they were automating their REST API documentation through one of these standards. Only about 30% of the people said yes.

Keep in mind that these specifications just describe the reference endpoints in an API, for the most part. While the reference topics are important, in my documentation projects, the bulk of the documentation is actually not the reference topics. There is a tremendous amount of documentation about how to configure the services that use the endpoint, how to deploy the services, what the various resources and rules are, and so forth.

If you choose to automate your documentation using one of these specifications, it likely will be a separate site that showcases your endpoints and provides API interactivity. You’ll still need to write a boatload of documentation about how to actually use your API.

5.4 Implementing Swagger (OpenAPI specification) with your REST API documentation

(This article was originally published in ISTC Communicator, Autumn 2016.)

Introduction

On a recent project, after I created documentation for a new API (Application Programming Interface), the project manager wanted to demo the new functionality to some field engineers.

To prepare for the demo, the project manager summarised, in a PowerPoint presentation, the new endpoints that had been added. The request and responses from each endpoint, along with their parameters, were included as attractively as possible in a number of PowerPoint slides.

During the demo, the project manager talked through each of the slides, explaining the new endpoints, the parameters the users can configure, and the responses from the server. How did the field engineers react to the new demo?

The field engineers wanted to try out the requests and see the responses for themselves. They wanted to “push the buttons,” so to speak, and see how the API responded. I’m not sure if they were skeptical of the API’s advertised behavior, or if they had questions the slides failed to answer. But they insisted on making actual calls themselves and seeing the responses, despite what the project manager had noted on each slide.

The field engineers’ insistence on trying out every endpoint made me rethink my API documentation. All the engineers I’ve ever known have had similar inclinations to explore and experiment on their own.

I have a mechanical engineering friend who once nearly entirely dismantled his car’s engine to change a head gasket: he simply loved to take things apart and put them back together. It’s the engineering mind. When you force engineers to passively watch a PowerPoint presentation, they quickly lose interest.

After the meeting, I wanted to make my documentation more interactive, with options for users to try out the calls themselves. I had heard of Swagger (which is now called the OpenAPI specification but still commonly referred to as Swagger). I knew that Swagger was a way to make my API documentation interactive. Looking at the Swagger demo, I knew I had to figure it out.

About Swagger

Swagger is a specification for describing REST APIs. This means Swagger provides a set of objects, with a specific schema about their naming, order, and contents, that you use to describe each part of your API.

You can think of the Swagger specification like DITA but for APIs. With DITA, you have a number of elements that you use to describe your help content (for example, task, step, cmd). The elements have a specific order they have to appear in. The cmd element must appear inside a step, which must appear inside a task, and so on. The elements have to be used correctly according to the XML schema in order to be valid.

Many tools can parse valid DITA XML and transform the content into different outputs. The Swagger specification works similarly, only the specification is entirely different, since you’re describing an API instead of a help topic.

The official description of the Swagger specification is available in a Github repository. Some of these elements are {path}, parameters, responses, and security. Each of these elements is actually an “object” (instead of an XML element) that holds a number of fields and arrays.

In the Swagger specification, your endpoints are paths. If you had an endpoint called “pet”, your Swagger specification for this endpoint might look as follows:

paths:
  /pets:
    get:
      description: Returns all pets from the system that the user has access to
      operationId: findPets
      produces:
        - application/json
        - application/xml
        - text/xml
        - text/html
      parameters:
        - name: tags
          in: query
          description: tags to filter by
          required: false
          type: array
          items:
            type: string
          collectionFormat: csv
        - name: limit
          in: query
          description: maximum number of results to return
          required: false
          type: integer
          format: int32
      responses:
        '200':
          description: pet response
          schema:
            type: array
            items:
              $ref: '#/definitions/pet'

This YAML code actually comes from the Swagger Petstore demo.

Here’s what these objects mean:

paths: the parent object containing each endpoint, with the endpoint’s path (here, /pets) as the key
get: the HTTP method used with the endpoint
description: an explanation of what the endpoint does
operationId: a unique name that identifies the operation
produces: the media types the endpoint can return in the response
parameters: the values users can submit with the request; each parameter has a name, a location (in), a description, a data type, and whether it’s required
responses: the possible responses, keyed by status code; the $ref value points to a schema defined elsewhere in the spec (under definitions)

It can take quite a while to figure out the Swagger specification. Give yourself a couple of weeks and a lot of example specification files to look at, especially in the context of the actual API you’re documenting. Remember that the Swagger specification is general enough to describe nearly every REST API, so some parts may be more applicable than others.

When you’re implementing the specification, instead of working in a text editor, you can write your code in the Swagger editor. The Swagger Editor dynamically validates whether the specification file you’re creating is valid.

Swagger Editor

While you’re coding in the Swagger Editor, if you make an error, you can quickly fix it before continuing, rather than waiting until a later time to run a build and sort out errors.

For your specification file’s format, you have the choice of working in either JSON or YAML. The previous code sample is in YAML. YAML refers to “YAML Ain’t Markup Language,” meaning YAML doesn’t have any markup tags (<>), as is common with other markup languages such as XML.

YAML depends on spacing and colons to establish the object syntax. This makes the code more human-readable, but it’s also trickier to get the spacing right.
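
For comparison, here’s a sketch of how the start of the same paths object would look in JSON. The content is identical; only the syntax differs:

```json
{
  "paths": {
    "/pets": {
      "get": {
        "description": "Returns all pets from the system that the user has access to",
        "operationId": "findPets",
        "produces": ["application/json", "application/xml", "text/xml", "text/html"]
      }
    }
  }
}
```

The JSON version relies on braces, brackets, and quotation marks instead of indentation, which makes it noisier to read but less sensitive to spacing mistakes.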

Manual or automated?

So far I’ve been talking about creating the Swagger specification file as if it’s the technical writer’s task and requires manual coding in a text editor based on close study of the specification. That’s how I approached it, but developers can also automate the specification file through annotations in the programming source code.

Swagger offers a variety of libraries that you can add to your programming code. These libraries will parse through your code’s annotations and generate a specification file. Of course, someone has to know exactly what annotations to add and how to add them. Then someone has to write content for each of the annotation’s values (describing the endpoint, the parameters, and so on).

Still, many developers get excited about this approach because it offers a way to “automatically” generate documentation from code annotations, which is what developers have been doing for years with other programming languages such as Java (using Javadoc) or C++ (using Doxygen). They usually feel that generating documentation from the code results in less documentation drift.

Although you can generate your specification file from code annotations, not everyone agrees that this is the best approach. In Undisturbed REST: A Guide to Designing the Perfect API, Michael Stowe recommends that teams implement the specification by hand and then treat the specification file as a contract that developers use when doing the actual coding.

In other words, developers consult the specification file to see what the parameter names should be called, what the responses should be, and so on. After this contract has been established, Stowe says you can then put the annotations in your code to auto-generate the specification file.

Too often, development teams quickly jump to coding the API endpoints, parameters, and responses without doing much user testing or research into whether the API aligns with what users want. Since versioning APIs is extremely difficult (you have to support each new version going forward with full backwards compatibility to previous versions), you want to avoid the “fail fast” approach that is so commonly embraced with agile.

From the Swagger specification file, some tools can generate a mock API that you can put before users to have them try out the requests.

The mock API generates a response that looks like it’s coming from a real server, but it’s really just a pre-defined response in your code and appears to be dynamic to the user.

With my project, our developers weren’t that familiar with Swagger, so I simply created the specification file by hand. Additionally, I didn’t have free access to the programming source code, and our developers spoke English as a second or third language. They weren’t eager to be in the documentation business.

Parsing the Swagger specification

Once you have a valid Swagger specification file that describes your API, you can then feed this specification to different tools to parse it and generate the interactive documentation similar to the Petstore example I referenced earlier.

Probably the most common tool used to parse the Swagger specification is Swagger UI. After you download Swagger UI, you basically just open up the index.html file inside the “dist” folder (which contains the Swagger UI project build) and reference your own Swagger specification file in place of the default one.

The Swagger UI code generates a display that looks like this:

Swagger Petstore

Some designers criticise the Swagger UI’s expandable/collapsible output as being dated. I somewhat agree: the collapsed design makes it difficult to scan the information and easily see the details. However, at the same time, developers find the one-page model attractive and like the ability to zoom out or in for details.

As with most Swagger-based outputs, Swagger UI provides a “Try it out” button. First you populate the endpoint parameters with values. In the following image, users click the Example Value (yellow field) to populate the body parameter with the required JSON. In query parameters, there’s a simple form where you enter the values.

Swagger Parameters

After customizing the parameters, you click Try it out! Swagger UI shows you the cURL format of the request followed by the request URL and response. The response is usually returned in JSON format.

Swagger's response

There are other tools besides Swagger UI that can parse your Swagger specification file. Some of these tools include Restlet Studio, Apiary, Apigee, Lucybot, Gelato/Mashape, Readme.io, swagger2postman, swagger-ui responsive theme, Postman Run Buttons and more.

Some web designers have created integrations of Swagger with static site generators such as Jekyll (see Carte). More tools roll out regularly for parsing and displaying content from a Swagger specification file.

In fact, once you have a valid Swagger specification, using a tool called API Transformer, you can even transform it into other API specifications, such as RAML or API Blueprint. In this way you can expand your tool horizons even wider. (RAML and API Blueprint are alternative specifications to Swagger: they’re not as popular, but the logic of the specifications is similar.)

Responses to Swagger documentation

With my project, I used the Swagger UI to parse my Swagger specification. I customised Swagger UI’s colors a bit, added a logo and a few other features. I spliced in a reference to Bootstrap so that I could have pop-up modals where users could generate their authorisation codes. I even added some collapse and expand features in the description element to provide necessary information to users about a sample project.

Beyond these simple modifications, however, it takes a bit of web developer prowess to significantly alter the Swagger UI display.

When I showed the results to the project managers, they loved it. They quickly embraced the Swagger output in place of the PowerPoint slides and promoted it among the field engineers and users. The vice president of Engineering even decided that Swagger would be the default approach for documenting all APIs.

Overall, delivering the Swagger output was a huge feather in my cap at the company, and it established immediate credibility for my technical documentation skills, since no one else in the company had a clue about how to deliver the Swagger output.

A slight trough of disillusionment

Despite Swagger’s interactive power to appeal to the “let me try” desires of users, I began to realise there were some downsides to Swagger.

Swagger’s output is still just a reference document. It provides the basics about each endpoint, including a description, the parameters, a sample request, and a response. It doesn’t provide space for a Hello World tutorial, information about how to get API keys, how to configure any API services, information about rate limits, or the thousand other details that go into a user guide.

So, even though you have this cool, interactive tool for users to explore and learn about your API, at the same time you still have to provide a user guide. Similarly, delivering a Javadoc or Doxygen output for a library-based API won’t teach users how to actually use your API. You still have to describe scenarios for using a class or method, how to set your code up, what to do with the response, how to troubleshoot problems, and so on. In short, you still have to write actual help guides and tutorials.

With Swagger in the mix, you now have some additional challenges. You have two places where you’re describing your endpoints and parameters, and you have to either keep the two in sync, or you have to link between the two.

Peter Gruenbaum, who has published several tutorials on writing API documentation on Udemy, says that automated tools such as Swagger work best when the APIs are simple.

I agree. When you have endpoints that have complex interdependencies and require special setup workflows or other unintuitive treatment, the straightforward nature of Swagger’s Try-it-out interface will likely leave users scratching their heads.

For example, if you must first configure an API service before an endpoint returns anything, and then use one endpoint to get a certain object that you pass into the parameters of another endpoint, and so on, the Try it out features in the Swagger UI output won’t make a lot of sense to users.

Additionally, some users may not realise that clicking “Try it out!” makes actual calls against their own accounts based on the API keys they’re using. Mixing an invitation to use an exploratory sandbox like Swagger with real data can create some headaches later on when users ask how they can remove all of the test data, or why their actual data is now messed up. If your API executes orders for supplies or makes other transactions, it can be even more challenging.

(For these scenarios, I recommend setting up sandbox or test accounts for users.)

Finally, I found that only endpoints with simple request body parameters tend to work in Swagger. Another API I had to document included requests with request body parameters that were hundreds of lines long. With this sort of request body parameter, Swagger UI’s display fell hopelessly short of being usable. The team reverted to much more primitive approaches (such as tables and spreadsheets) for listing all of the parameters and their descriptions.

Some consolations

Despite the shortcomings of Swagger, I still highly recommend it for describing your API.

Swagger is quickly becoming a way for more and more tools (from Postman Run buttons to nearly every API platform) to quickly ingest the information about your API and make it discoverable and interactive with robust, instructive tooling. Through your Swagger specification, you can port your API onto many platforms and systems, as well as automatically set up unit testing and prototyping.

Swagger does provide a nice visual shape for an API. You can easily see all the endpoints and their parameters (like a quick-reference guide).

Based on this framework, you can help users grasp the basics of your API.

Additionally, I found that learning the Swagger specification and describing my API helped shape my own API vocabulary. By poring through the specification, I realised that there were four types of parameters: “path” parameters, “header” parameters, “query” parameters, and “request body” parameters. I learned that parameter data types with REST were a “Boolean”, “number”, “integer”, or “string.” I learned that responses provided “objects” containing “strings” or “arrays.”
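
As a made-up illustration of those four parameter types, here’s a sketch of how each appears in a Swagger 2.0 parameters list. The names and schema reference are hypothetical examples, not from a real API:

```yaml
parameters:
  - name: petId          # path parameter: part of the URL itself
    in: path
    required: true
    type: integer
  - name: limit          # query parameter: appended after the ? in the URL
    in: query
    type: integer
  - name: X-Api-Key      # header parameter: sent as an HTTP header
    in: header
    type: string
  - name: body           # body parameter: the JSON payload of the request
    in: body
    schema:
      $ref: '#/definitions/pet'
```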

In short, implementing the specification gave me an education about API terminology, which in turn helped me describe the various components of my API in credible ways.

Swagger may not be the right approach for every API, but if your API has fairly simple parameters, without many interdependencies between endpoints, and if it’s practical to explore the API without making the user’s data problematic, Swagger can be a powerful complement to your documentation. You can give users the ability to try out requests and responses for themselves.

With this interactive element, your documentation becomes more than just information. Through Swagger, you create a space for users to both read your documentation and experiment with your API at the same time. That combination tends to provide a powerful learning experience for users.

Glossary

API
Application Programming Interface. Enables different systems to interact with each other programmatically. Two types of APIs are web services and library-based APIs.
cURL
A command line utility often used to interact with REST API endpoints. Used in documentation for request code samples.
Endpoint
The end part of the request URL (after the base path). Also sometimes used to refer to the entire API reference topic.
JSON
JavaScript Object Notation. A lightweight syntax containing objects and arrays, usually used (instead of XML) to return information from a REST API.
OpenAPI
The official name for Swagger. Now under the OpenAPI Initiative with the Linux Foundation (instead of SmartBear, the original development group), the OpenAPI specification aims to be vendor neutral.
REST API
Stands for Representational State Transfer. Uses web protocols (HTTP) to make requests and provide responses in a language agnostic way, meaning that users can choose whatever programming language they want to make the calls.
Swagger
An official specification for REST APIs. Provides objects used to describe your endpoints, parameters, responses, and security. Now called OpenAPI specification.
Swagger Editor
Swagger specification validator. An online editor that dynamically checks whether your Swagger specification file is valid.
Swagger UI
A display framework. The most common way to parse a Swagger specification file and produce the interactive documentation as shown in the Petstore demo.
YAML
Recursive acronym for “YAML Ain’t Markup Language.” A human-readable, space-sensitive syntax used in the Swagger specification file.


5.41 Swagger tutorial

About Swagger

Swagger is one of the most popular specifications for REST APIs.

The Swagger spec provides a way to describe your API using a specific JSON or YAML schema that outlines the names, order, and other details of the API.

You can code this Swagger file by hand in a text editor, or you can auto-generate it from annotations in your source code. Different tools can consume the Swagger file to generate the interactive API documentation.

The Swagger Petstore example

To get a better understanding of Swagger, let’s explore the Petstore example.

There are three resources: pet, store, and user.

Create a pet

  1. In the Pet resource, expand the Post method.
  2. Click the yellow JSON in the Model Schema section:

    Posting a new pet

    This populates the body value with the JSON. This is the JSON you must submit in order to create a pet.

  3. Change the value for the first id field. (Make it really unique so that others don’t use the same id.)
  4. Change the name value to something unique. Here’s an example:

     {
       "id": 37987,
       "category": {
         "id": 0,
         "name": "string"
       },
       "name": "Mr. Fluffernutter",
       "photoUrls": [
         "string"
       ],
       "tags": [
         {
           "id": 0,
           "name": "string"
         }
       ],
       "status": "available"
     }
    
  5. Click the Try it out! button.

    Look at the response.

    JSON response

Find your pet by the ID

  1. Expand the GET pet/{petId} method.
  2. Insert your pet’s ID in the petId value box.
  3. Click Try it out!

    The pet you created is returned in the response.

    By default, the response will be in XML. Change the Response Content Type selector to application/json and click Try it out! again.

    The pet response is returned in JSON format.

Sorting out the Swagger components

Swagger has a number of different pieces:

Swagger spec: The Swagger spec is the official schema about name and element nesting, order, and so on. If you plan on hand-coding the Swagger files, you’ll need to be extremely familiar with the Swagger spec.

Swagger editor: The Swagger Editor is an online editor that validates your YML-formatted content against the rules of the Swagger spec. YML is a syntax that depends on spaces and nesting. You’ll need to be familiar with YML syntax and the rules of the Swagger spec to be successful here. The Swagger editor will flag errors and give you formatting tips. (Note that the Swagger spec file can be in either JSON or YAML format.)

Swagger online editor

Swagger-UI: The Swagger UI is an HTML/CSS/JS framework that parses a JSON or YML file that follows the Swagger spec and generates a navigable UI of the documentation. This is the tool that transforms your spec into the Swagger Petstore-like UI output.

Swagger-codegen: This utility generates client SDK code for many different platforms (such as Java, JavaScript, Scala, Python, PHP, Ruby, and more). This client code helps developers integrate your API on a specific platform and provides for more robust implementations that might include scaling, threading, and other necessary code. An SDK is supportive tooling that helps developers use the REST API.

Some sample Swagger implementations

Before we get into this tutorial, check out a few Swagger implementations:

Most of them look pretty much the same, with minimal branding. You’ll notice the documentation is short and sweet in a Swagger implementation. This is because the Swagger display is meant to be an interactive experience where you can try out calls and see responses — using your own API key to see your own data. It’s the learn-by-doing-and-seeing-it approach.

Note a few limitations with the Swagger approach:

Create a Swagger UI display

In this activity, you’ll create a Swagger UI display for the weatherdata endpoint in this Mashape Weather API. (If you’re jumping around in the documentation, this is a simple API that we used in earlier parts of the course.) You can see a demo of what we’ll build here:

Swagger UI demo

a. Create a Swagger spec file

To create a Swagger spec file:

  1. Go to the Swagger online editor.
  2. Select File > Open Example and choose PetStore Simple. Click Open.

    You could just customize this sample YML file with the weatherdata endpoint documentation. However, if you’re new to Swagger, it will take you some time to learn the spec. For the sake of convenience, just go to the following file, and then copy and paste its code into the Swagger editor: swagger.yaml.

    The Swagger editor shows you how the file will look in the output. You’ll also be able to see if there are any validity errors. Without this online editor, you would only know that the YML syntax is valid when you run the code (and see errors indicating that the YAML file couldn’t be parsed).

  3. Make sure the YAML file is valid in the Swagger editor. If there are any errors, fix them.
  4. Go to File > Download YAML and save the file as “swagger.yaml” on your computer. (You could also just copy the code and insert it into a blank file and call it swagger.yaml.)

You can also choose JSON as the format, but YAML is more readable and works just as well.

b. Set Up the Swagger UI

  1. Go to the Swagger UI Github project. Click the Download ZIP button. Download the files to a convenient location on your computer and extract the files.

    The only folder you’ll be working with here is the dist folder. Everything else is used only if you’re regenerating the files, which is beyond the scope of this tutorial.

  2. Drag the dist folder out of the swagger-ui-master folder so that it stands alone. Then delete the swagger-ui-master folder.
  3. Inside your “dist” folder, open index.html in a text editor.
  4. Look for the following code:

     $(function () {
       var url = window.location.search.match(/url=([^&]+)/);
       if (url && url.length > 1) {
         url = decodeURIComponent(url[1]);
       } else {
         url = "http://petstore.swagger.io/v2/swagger.json";
       }
    
  5. Change the url value from http://petstore.swagger.io/v2/swagger.json to the following: "swagger.yaml";.
  6. Drag the swagger.yaml file that you created earlier into the same directory as the index.html file you just edited.

  7. The Mashape API also requires a header authorization, so you’ll need to make another change. Scroll down the index.html file until you find the addApiKeyAuthorization function:

       function addApiKeyAuthorization(){
         var key = encodeURIComponent($('#input_apiKey')[0].value);
         if(key && key.trim() != "") {
             var apiKeyAuth = new SwaggerClient.ApiKeyAuthorization("api_key", key, "query");
             window.swaggerUi.api.clientAuthorizations.add("api_key", apiKeyAuth);
             log("added key " + key);
         }
    
  8. Change that block so that it looks like this:

           function addApiKeyAuthorization(){
             var key = encodeURIComponent($('#input_apiKey')[0].value);
             if(key && key.trim() != "") {
                 var apiKeyAuth = new SwaggerClient.ApiKeyAuthorization("api_key", key, "query");
                 swaggerUi.api.clientAuthorizations.add("key", new SwaggerClient.ApiKeyAuthorization("X-Mashape-Key", "APIKEY", "header"));
                 log("added key " + key);
             }
    
  9. Insert your API key in APIKEY. (Otherwise users will have to enter their own API keys.)

  10. Uncomment the following lines by removing the /* and */:

     // if you have an apiKey you would like to pre-populate on the page for demonstration purposes...
       /*
         var apiKey = "myApiKeyXXXX123456789";
         $('#input_apiKey').val(apiKey);
       */
    
  11. Add in your API key in place of the myApiKeyXXXX123456789 value.

     var apiKey = "myApiKeyXXXX123456789";
     $('#input_apiKey').val(apiKey);   
    
  12. Save the file.

If the previous instructions were confusing, just copy the following code and replace your entire index.html file with it. The only thing you’ll need to customize is the var apiKey = "APIKEY";. Replace APIKEY in a couple of places with your own API key for Mashape.

<!DOCTYPE html>
<html>
<head>
  <meta charset="UTF-8">
  <title>Swagger UI</title>
  <link rel="icon" type="image/png" href="images_api/favicon-32x32.png" sizes="32x32" />
  <link rel="icon" type="image/png" href="images_api/favicon-16x16.png" sizes="16x16" />
  <link href='css/typography.css' media='screen' rel='stylesheet' type='text/css'/>
  <link href='css/reset.css' media='screen' rel='stylesheet' type='text/css'/>
  <link href='css/screen.css' media='screen' rel='stylesheet' type='text/css'/>
  <link href='css/reset.css' media='print' rel='stylesheet' type='text/css'/>
  <link href='css/print.css' media='print' rel='stylesheet' type='text/css'/>
  <script src='lib/jquery-1.8.0.min.js' type='text/javascript'></script>
  <script src='lib/jquery.slideto.min.js' type='text/javascript'></script>
  <script src='lib/jquery.wiggle.min.js' type='text/javascript'></script>
  <script src='lib/jquery.ba-bbq.min.js' type='text/javascript'></script>
  <script src='lib/handlebars-2.0.0.js' type='text/javascript'></script>
  <script src='lib/js-yaml.min.js' type='text/javascript'></script>
  <script src='lib/lodash.min.js' type='text/javascript'></script>
  <script src='lib/backbone-min.js' type='text/javascript'></script>
  <script src='swagger-ui.js' type='text/javascript'></script>
  <script src='lib/highlight.9.1.0.pack.js' type='text/javascript'></script>
  <script src='lib/highlight.9.1.0.pack_extended.js' type='text/javascript'></script>
  <script src='lib/jsoneditor.min.js' type='text/javascript'></script>
  <script src='lib/marked.js' type='text/javascript'></script>
  <script src='lib/swagger-oauth.js' type='text/javascript'></script>

  <!-- Some basic translations -->
  <!-- <script src='lang/translator.js' type='text/javascript'></script> -->
  <!-- <script src='lang/ru.js' type='text/javascript'></script> -->
  <!-- <script src='lang/en.js' type='text/javascript'></script> -->

  <script type="text/javascript">
    $(function () {
      var url = window.location.search.match(/url=([^&]+)/);
      if (url && url.length > 1) {
        url = decodeURIComponent(url[1]);
      } else {
        url = "swagger.yaml";
      }

      hljs.configure({
        highlightSizeThreshold: 5000
      });

      // Pre load translate...
      if(window.SwaggerTranslator) {
        window.SwaggerTranslator.translate();
      }
      window.swaggerUi = new SwaggerUi({
        url: url,
        dom_id: "swagger-ui-container",
        supportedSubmitMethods: ['get', 'post', 'put', 'delete', 'patch'],
        onComplete: function(swaggerApi, swaggerUi){
          if(typeof initOAuth == "function") {
            initOAuth({
              clientId: "your-client-id",
              clientSecret: "your-client-secret-if-required",
              realm: "your-realms",
              appName: "your-app-name",
              scopeSeparator: ",",
              additionalQueryStringParams: {}
            });
          }

          if(window.SwaggerTranslator) {
            window.SwaggerTranslator.translate();
          }

          addApiKeyAuthorization();
        },
        onFailure: function(data) {
          log("Unable to Load SwaggerUI");
        },
        docExpansion: "none",
        jsonEditor: false,
        apisSorter: "alpha",
        defaultModelRendering: 'schema',
        showRequestHeaders: false
      });

        function addApiKeyAuthorization(){
          var key = encodeURIComponent($('#input_apiKey')[0].value);
          if(key && key.trim() != "") {
              var apiKeyAuth = new SwaggerClient.ApiKeyAuthorization("api_key", key, "query");
              swaggerUi.api.clientAuthorizations.add("key", new SwaggerClient.ApiKeyAuthorization("X-Mashape-Key", "APIKEY", "header"));
              log("added key " + key);
          }
      }

      $('#input_apiKey').change(addApiKeyAuthorization);

      // if you have an apiKey you would like to pre-populate on the page for demonstration purposes...
      
        var apiKey = "APIKEY";
        $('#input_apiKey').val(apiKey);
      

      window.swaggerUi.load();

      function log() {
        if ('console' in window) {
          console.log.apply(console, arguments);
        }
      }
  });
  </script>
</head>

<body class="swagger-section">
<div id='header'>
  <div class="swagger-ui-wrap">
    <a id="logo" href="http://swagger.io">swagger</a>
    <form id='api_selector'>
      <div class='input'><input placeholder="http://example.com/api" id="input_baseUrl" name="baseUrl" type="text"/></div>
      <div class='input'><input placeholder="api_key" id="input_apiKey" name="apiKey" type="text"/></div>
      <div class='input'><a id="explore" href="#" data-sw-translate>Explore</a></div>
    </form>
  </div>
</div>

<div id="message-bar" class="swagger-ui-wrap" data-sw-translate>&nbsp;</div>
<div id="swagger-ui-container" class="swagger-ui-wrap"></div>
</body>
</html>

c. Upload the Files to a Web Host

You can’t view the Swagger UI display locally — you must view it on a web server. If you already have a web server, great. Just upload the dist folder there and view it.

You can also run a web server locally on your computer through XAMPP:

  1. Download and install XAMPP.
  2. After installation, in your Applications folder, open the XAMPP folder and start the manager-osx console.
  3. Click the Manage Servers tab in the console manager.
  4. Select Apache Web Server and click Start.
  5. Open the htdocs folder where XAMPP was installed. On a Mac, the location is usually in /Applications/XAMPP/xamppfiles/htdocs.
  6. Drag the dist folder into this space.
  7. In your browser, go to localhost/dist.

The Swagger UI display should appear.

Interact with the Swagger UI

  1. Go to Google Maps and search for an address.
  2. Get the latitude and longitude from the URL, and plug it into your Swagger UI. (For example, 1.3319164 for lat, 103.7231246 for lng.)
  3. Click Try it out.

    If successful, you should see something in the response body like this:

     9 c, Mostly Cloudy at South West, Singapore
    

    Try working with each of your endpoints and see the data that gets returned.

Auto-generating the Swagger file from code annotations

Instead of coding the Swagger file by hand, you can also auto-generate it from annotations in your programming code. There are many Swagger libraries for integrating with different code bases. These Swagger libraries then parse the annotations that developers add and generate the same Swagger file that you produced manually using the earlier steps.

By integrating Swagger into the code, you allow developers to easily write documentation, make sure new features are always documented, and keep the documentation more current. Here’s a tutorial on annotating code with Swagger for Scalatra. The annotation methods for Swagger doc blocks vary based on the programming language.

For other tools and libraries, see Swagger services and tools.

5.5 More about YAML

About YAML

When you created the Swagger file, you used a syntax called YAML (often abbreviated YML). YAML is a recursive acronym for “YAML Ain’t Markup Language.” This means that the YAML syntax doesn’t have markup tags such as < or >.

YAML
The YAML site itself is written using YAML, which you can immediately see is not intended for coding web pages.

YML is easier to work with because it generally removes the brackets, curly braces, and commas that get in the way of reading content.

YML is an attempt to create a more human-readable data exchange format. It’s similar to JSON (JSON is actually a subset of YAML) but uses spaces to indicate the structure.

Many computers ingest data in a YML or JSON format. It’s a syntax commonly used in configuration files and an increasing number of platforms (like Jekyll), so it’s a good idea to become familiar with it.

YAML is a superset of JSON

YAML and JSON are practically just different ways of structuring the same data. Dot notation accesses the values the same way. For example, the Swagger UI can read the swagger.json or swagger.yaml files equivalently. Because YAML is a superset of JSON, pretty much any parser that reads YAML will also read JSON. However, JSON parsers generally can’t read YAML, because there are a few features YAML has that JSON lacks (more on that later).

YAML syntax

With a YML file, spacing is significant. Each two-space indent represents a new level:

level1:
  level2:
    level3:

Each level can contain either a single key-value pair (also referred to as a dictionary) or a sequence (a list of items denoted by hyphens):

---
  level3: 
    - 
      itema: "one"
      itemameta: "two"
    - 
      itemb: "three"
      itembmeta: "four"

YAML files can begin with ---, which marks the start of a document. The values for each key can optionally be enclosed in quotation marks. If your value has something like a colon or quotation mark in it, then you’ll want to enclose it in quotation marks. And if the value contains a double quotation mark, enclose the value in single quotation marks, or vice versa.
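For instance, here’s a hypothetical fragment showing when quoting matters:

```yaml
---
plain_value: no quotes needed
value_with_colon: "meeting at: 10:30"
value_with_double_quotes: 'She said "hello" to me'
value_with_single_quotes: "It's a nice day"
```

The first value parses fine unquoted; the others would confuse the parser (or change meaning) without the surrounding quotation marks.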

Comparing JSON to YAML

Earlier in the course, we looked at various JSON structures involving objects and arrays. Here let’s look at the equivalent YAML syntax for each of these same JSON objects.

Here are some key-value pairs in JSON:

{
"key1":"value1",
"key2":"value2"
}

Here’s the same thing in YAML syntax:

key1: value1
key2: value2

These key-value pairs are also called dictionaries.

Here’s an array (list of items) in JSON:

["first", "second", "third"]

In YAML, the array is formatted as a list with hyphens:

- first
- second
- third

Here’s an object containing an array in JSON:

{
"children": ["Avery","Callie","lucy","Molly"],
"hobbies": ["swimming","biking","drawing","horseplaying"]
}

Here’s the same object with an array in YAML:

children:
  - Avery
  - Callie
  - lucy
  - Molly
hobbies:
  - swimming
  - biking
  - drawing
  - horseplaying

Here’s an array containing objects in JSON:

[  
   {  
      "name":"Tom",
      "age":39
   },
   {  
      "name":"Shannon",
      "age":37
   }
]

Here’s the same array containing objects converted to YAML:

-
    name: Tom
    age: 39
-
    name: Shannon
    age: 37

Hopefully, seeing the syntax side by side helps it make more sense. Is the YAML syntax more readable? It might be difficult to tell from these simple examples.

JavaScript uses the same dot notation techniques to access the values in YAML as it does in JSON. (They’re pretty much interchangeable formats.) The benefit to using YAML, however, is that it’s more readable than JSON.

However, YAML is sometimes trickier to work with because it depends on getting the spacing just right. That spacing can be hard to see (especially with a complex structure), and that’s where JSON (while perhaps more cumbersome) may be easier to troubleshoot.
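To make the interchangeability concrete, here’s a short sketch in JavaScript (run with Node). It parses a JSON string; a YAML parser such as the js-yaml library would produce the identical object from the equivalent YAML shown earlier, so the dot-notation access is the same either way:

```javascript
// The children/hobbies object from above, serialized as JSON.
// A YAML parser (e.g., js-yaml) would yield the same object
// from the YAML version.
const json = '{"children": ["Avery", "Callie"], "hobbies": ["swimming", "biking"]}';
const data = JSON.parse(json);

// Dot notation works identically whether the source was JSON or YAML.
console.log(data.children[0]);    // Avery
console.log(data.hobbies.length); // 2
```

Once the file is parsed, downstream code can’t tell which format the data arrived in.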

Some features of YAML not present in JSON

YAML has some features that JSON lacks.

You can add comments in YAML files using the # sign.

YAML also allows you to use something called “anchors.” For example, suppose you have two definitions that are similar. You could write the definition once and use a pointer to refer to both:

api: &apidef Application programming interface
application_programming_interface: *apidef

If you access the value (e.g., yamlfile.api or yamlfile.application_programming_interface), the same definition will be used for both. The *apidef acts as an anchor or pointer to the definition established at &apidef.
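To show what the anchor expands to, here’s a sketch in JavaScript with the parsed result written out by hand (what a YAML parser such as js-yaml would produce from the anchor example above):

```javascript
// The *apidef alias expands to the value established at &apidef,
// so both keys end up with the same definition after parsing.
const yamlfile = {
  api: "Application programming interface",
  application_programming_interface: "Application programming interface"
};

// Both keys resolve to the same definition.
console.log(yamlfile.api === yamlfile.application_programming_interface); // true
```

The alias exists only in the YAML source; by the time the data reaches your code, it’s just a duplicated value.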

For details on other differences, see Learn YAML in Minutes. To learn more about YML, see this YML tutorial.

5.6 RAML tutorial

About RAML

RAML stands for RESTful API Modeling Language and is similar to Swagger and other API specifications. RAML is backed by Mulesoft, a commercial API company, and uses a YAML-based syntax in the specification.

Similar to Swagger, once you create a RAML file that describes your API, it can be consumed by different platforms to parse and display the information in attractive outputs. The RAML format, which uses YML syntax, tries to be human-readable, efficient, and simple.

Sample RAML output in API Console
This is a sample RAML output in something called API Console

Auto-generating client SDK code

It’s important to note that with these specs (not just RAML), you’re not just describing an API to generate a nifty doc output with an interactive console. There are tools that can also generate client SDKs and other code from the spec into a library that you can integrate into your project. This can help developers to more easily make requests to your API and receive responses.

Additionally, the interactive console can provide a way to test out your API before developers code it. Mulesoft offers a “mocking service” for your API that simulates calls at a different baseURI. The push for using a spec is to design your API the right way from the start, without iterating with different versions as you try to get the endpoints right.

Sample spec for Mashape Weather API

To understand the proper syntax and format for RAML, you need to read the RAML spec and look at some examples. See also this RAML tutorial and this video tutorial.

Even so, the documentation for the RAML spec isn’t always clear. For example, when I was trying to get the right syntax for the security scheme, there was little information on how to create security schemes based on a custom key in the header.

Here’s the Mashape Weather API formatted in the RAML spec:

#%RAML 0.8
---
title: Mashape Weather API
baseUri: https://simple-weather.p.mashape.com
version: v1

/aqi:
  get: 
    description: Get the air quality index (AQI). The AQI number indicates the level of pollution in the air. **Higher** numbers are worse.
    headers:
      x-mashape-key: 
        displayName: Mashape key
        description: This header is used to send data that contains your mashape API key
        type: string
    queryParameters:
      lat:
        displayName: Latitude
        description: The latitude coordinate
        type: number
        required: true
        example: 37.354108
      lng:
        type: number
        description: The longitude coordinate
        required: true
        example: -121.955236
    responses:
       200:
         body:
           application/text:
            example: |
               65


/weather:
  get: 
    headers:
      x-mashape-key: 
        displayName: Mashape key
        description: This header is used to send data that contains your mashape API key
        type: string
    description: Gets the weather forecast for the current day
    queryParameters:
      lat:
        displayName: Latitude
        description: The latitude coordinate
        type: number
        required: true
        example: 37.354108
      lng:
        type: number
        description: The longitude coordinate
        required: true
        example: -121.955236
    responses:
       200:
         body:
           application/text:
            example: |
               28 c, Partly Cloudy at Santa Clara, United States
/weatherdata:
  get: 
    headers:
      x-mashape-key: 
        displayName: Mashape key
        description: This header is used to send data that contains your mashape API key
        type: string
    description: Gets a detailed weather object containing a lot of different weather information in a JSON object.
    queryParameters:
      lat:
        displayName: Latitude
        description: The latitude coordinate
        type: number
        required: true
        example: 37.354108
      lng:
        type: number
        description: The longitude coordinate
        required: true
        example: -121.955236
    responses:
       200:
         body:
           application/json:
            example: |
              { 
                "query": {  
                "count": 1,
                "created": "2014-05-03T03:57:53Z",
                "lang": "en-US",
                "results": {
                "channel": {
                "title": "Yahoo! Weather - Tebrau, MY",
                "link": "http://us.rd.yahoo.com/dailynews/rss/weather/Tebrau__MY/*http://weather.yahoo.c/forecast/MYXX0004_c.html",
                "description": "Yahoo! Weather for Tebrau, MY",
                "language": "en-us",
                "lastBuildDate": "Sat, 03 May 2014 11:00 am MYT",
                "ttl": "60",
                "location": {
                  "city": "Tebrau",
                  "country": "Malaysia",
                  "region": ""
                },
                "units": {
                  "distance": "km",
                  "pressure": "mb",
                  "speed": "km/h",
                  "temperature": "C"
                },
                "wind": {
                  "chill": "32",
                  "direction": "170",
                  "speed": "4.83"
                },
                "atmosphere": {
                  "humidity": "66",
                  "pressure": "982.05",
                  "rising": "0",
                  "visibility": "9.99"
                },
                "astronomy": {
                  "sunrise": "6:57 am",
                  "sunset": "7:06 pm"
                },
                "image": {
                  "title": "Yahoo! Weather",
                  "width": "142",
                  "height": "18",
                  "link": "http://weather.yahoo.com",
                  "url": "http://l.yimg.com/a/i/brand/purplelogo//uh/us/news-wea.gif"
                },
                "item": {
                  "title": "Conditions for Tebrau, MY at 11:00 am MYT",
                  "lat": "1.58",
                  "long": "103.74",
                  "link": "http://us.rd.yahoo.com/dailynews/rss/weather/Tebrau__MY/*http://weather.yahocom/forecast/MYXX0004_c.html",
                  "pubDate": "Sat, 03 May 2014 11:00 am MYT",
                  "condition": {
                    "code": "28",
                    "date": "Sat, 03 May 2014 11:00 am MYT",
                    "temp": "32",
                    "text": "Mostly Cloudy"
                  },
                  "description": "\n<img src=\"http://l.yimg.com/a/i/us/we/52/28.gif\"/><br />\n<Current Conditions:</b><br />\nMostly Cloudy, 32 C<BR />\n<BR /><b>Forecast:</b><BR />\nSat - Scattered Thunderstorms. High: 32 Low: 26<br />\nSun - Thunderstorms. High: 33 Low: 27<br />\nMon - Scattered Thunderstorms. High: 32 Low: 26<br />\nTue - Thunderstorms. High: 32 Low: 26<br />\nWed - Scattered Thunderstorms. High: 32 Low: 27<br />\n<br />\n<a href=\"http://us.rd.yahoo.com/dailynews/rss/weather/Tebrau__MY/*http://weather.yahoo.com/forecast/MYXX0004_c.html\">Full Forecast at Yahoo! Weather</a><BR/><BR/>\n(provided by <a href=\"http://www.weather.com\" >The Weather Channel</a>)<br/>\n",
                  "forecast": [
                    {
                      "code": "38",
                      "date": "3 May 2014",
                      "day": "Sat",
                      "high": "32",
                      "low": "26",
                      "text": "Scattered Thunderstorms"
                    },
                    {
                      "code": "4",
                      "date": "4 May 2014",
                      "day": "Sun",
                      "high": "33",
                      "low": "27",
                      "text": "Thunderstorms"
                    },
                    {
                      "code": "38",
                      "date": "5 May 2014",
                      "day": "Mon",
                      "high": "32",
                      "low": "26",
                      "text": "Scattered Thunderstorms"
                    },
                    {
                      "code": "4",
                      "date": "6 May 2014",
                      "day": "Tue",
                      "high": "32",
                      "low": "26",
                      "text": "Thunderstorms"
                    },
                    {
                      "code": "38",
                      "date": "7 May 2014",
                      "day": "Wed",
                      "high": "32",
                      "low": "27",
                      "text": "Scattered Thunderstorms"
                    }
                  ],
                  "guid": {
                    "isPermaLink": "false",
                    "content": "MYXX0004_2014_05_07_7_00_MYT"
                  }
                }
              }
              }
               }
              }

Outputs

You can generate outputs using the RAML spec from a variety of platforms. Here are three ways:

Deliver doc through the Anypoint Platform Developer Portal

  1. Log into the Anypoint platform.
  2. Click APIs on the top navigation.
  3. Click Add new API and complete the details of the dialog box.
  4. Click the API you just added.
  5. In the API Definition box, click Edit in API Designer.
  6. Input your RAML spec here (copy it from the above section), and then click Save.
  7. Click APIs on the top navigation, and then click your API.
  8. In the API Portal section, click Create new portal.
  9. In the left pane, select API Reference.

    Note that you can add additional pages to your documentation here.

    Adding additional pages

    (Kudos to the Mulesoft team for recognizing that API documentation is more than just a set of reference endpoints.)

    One of the options here is an API Notebook. This is a unique tool designed by Mulesoft that allows you to provide interactive code examples that leverage your RAML spec. You can read more about API Notebooks here.

  10. Click the Set to visible icon (looks like an eye).
  11. Click Live Portal.

    AnyPoint Developer Portal

Deliver doc through the API Console Project

You can also download the same code that generates the output on the Anypoint Platform and create your own API Console.

  1. Download the API Console code from Github.
  2. Save your RAML file to some place locally on your computer (such as weather.raml on Desktop).
  3. In the code you downloaded from Github, go to dist/index.html in your browser.

    RAML Console

  4. Copy the RAML code you created.
  5. Insert your copied code into the Or parse RAML here text box. Then click Load RAML.

    The API Console loads your RAML content:

    RAML loaded

  6. To auto-load a specific RAML file, add this to the body of the index.html file:

     <div style="overflow:auto; position:relative">
     <raml-console src="examples/weather.raml"></raml-console>
       </div>
    

    In this example, the RAML file is located in examples/weather.raml.

  7. Remove the following line:

       <raml-initializer></raml-initializer>
    

    View the file in your web browser. Note that if the file doesn’t load in Chrome, open it in Firefox. Chrome tends to block local JavaScript for security reasons.

    Here’s a sample RAML API Console output that integrates the weather.raml file. Here’s a generic RAML API Console that allows you to insert your own RAML spec code.

Deliver doc through the RAML2HTML Utility

Here’s an example of what the RAML2HTML output looks like. It’s a static HTML output without any interactivity.

To generate this kind of output:

  1. Install RAML2HTML through either of these methods:
  2. In Terminal, enter this command:

     raml2html generate -i input_file.raml -o output_file.html
    

    For example, if you’re already in the directory where your RAML file is (named weather.raml), and you want the output file to be named index.html, then enter this:

     raml2html generate -i weather.raml -o index.html
    

    Here’s the result:

    RAML2HTML

    To see this example in your browser, go to learnapidoc.com/raml/examples/index.html.

Other platforms that consume RAML and Swagger

Restlet Studio is another platform to check out. Restlet Studio can process either Swagger or RAML specs.

One advantage to using Restlet Studio is that it provides a form where you can assemble the needed spec by populating different form values, which theoretically should make building the RAML or Swagger file easier.

Exploring more platforms in depth is beyond the scope of this tutorial, but the concept is more or less the same. Large platforms that process and display your API documentation can only do so if your documentation aligns with a spec their tools can parse.

5.7 API Blueprint tutorial

API Blueprint is another spec

Just as Swagger defines a spec for describing a REST API, API Blueprint is another spec (which you can read here). If you describe your API with this blueprint, then different tools can read and display the information.

The API Blueprint spec is written in a Markdown-flavored syntax. It’s not normal Markdown, but it uses much of the same familiar Markdown syntax. However, the blueprint is a very specific schema that is either valid or invalid based on element names, order, spacing, and other details. In this way, it’s not nearly as flexible or forgiving as pure Markdown. But some authors may find it preferable to working in YAML.

Sample blueprint

Here’s a sample blueprint to give you an idea of the syntax:

FORMAT: 1A
HOST: http://polls.apiblueprint.org/

# test

Polls is a simple API allowing consumers to view polls and vote in them.

# Polls API Root [/]

This resource does not have any attributes. Instead it offers the initial
API affordances in the form of the links in the JSON body.

It is recommended to follow the “url” link values,
[Link](https://tools.ietf.org/html/rfc5988) or Location headers where
applicable to retrieve resources, instead of constructing your own URLs,
to keep your client decoupled from implementation details.

## Retrieve the Entry Point [GET]

+ Response 200 (application/json)

        {
            "questions_url": "/questions"
        }

## Group Question

Resources related to questions in the API.

## Question [/questions/{question_id}]

A Question object has the following attributes:

+ question
+ published_at - An ISO8601 date when the question was published.
+ url
+ choices - An array of Choice objects.

+ Parameters
    + question_id: 1 (required, number) - ID of the Question in form of an integer

### View a Question's Detail [GET]

+ Response 200 (application/json)

        {
            "question": "Favourite programming language?",
            "published_at": "2014-11-11T08:40:51.620Z",
            "url": "/questions/1",
            "choices": [
                {
                    "choice": "Swift",
                    "url": "/questions/1/choices/1",
                    "votes": 2048
                }, {
                    "choice": "Python",
                    "url": "/questions/1/choices/2",
                    "votes": 1024
                }, {
                    "choice": "Objective-C",
                    "url": "/questions/1/choices/3",
                    "votes": 512
                }, {
                    "choice": "Ruby",
                    "url": "/questions/1/choices/4",
                    "votes": 256
                }
            ]
        }

## Choice [/questions/{question_id}/choices/{choice_id}]

+ Parameters
    + question_id: 1 (required, number) - ID of the Question in form of an integer
    + choice_id: 1 (required, number) - ID of the Choice in form of an integer

### Vote on a Choice [POST]

This action allows you to vote on a question's choice.

+ Response 201

    + Headers

            Location: /questions/1

## Questions Collection [/questions{?page}]

+ Parameters
    + page: 1 (optional, number) - The page of questions to return

### List All Questions [GET]

+ Response 200 (application/json)

    + Headers

            Link: </questions?page=2>; rel="next"

    + Body

            [
                {
                    "question": "Favourite programming language?",
                    "published_at": "2014-11-11T08:40:51.620Z",
                    "url": "/questions/1",
                    "choices": [
                        {
                            "choice": "Swift",
                            "url": "/questions/1/choices/1",
                            "votes": 2048
                        }, {
                            "choice": "Python",
                            "url": "/questions/1/choices/2",
                            "votes": 1024
                        }, {
                            "choice": "Objective-C",
                            "url": "/questions/1/choices/3",
                            "votes": 512
                        }, {
                            "choice": "Ruby",
                            "url": "/questions/1/choices/4",
                            "votes": 256
                        }
                    ]
                }
            ]

### Create a New Question [POST]

You may create your own question using this action. It takes a JSON
object containing a question and a collection of answers in the
form of choices.

+ question (string) - The question
+ choices (array[string]) - A collection of choices.

+ Request (application/json)

        {
            "question": "Favourite programming language?",
            "choices": [
                "Swift",
                "Python",
                "Objective-C",
                "Ruby"
            ]
        }

+ Response 201 (application/json)

    + Headers

            Location: /questions/2

    + Body

            {
                "question": "Favourite programming language?",
                "published_at": "2014-11-11T08:40:51.620Z",
                "url": "/questions/2",
                "choices": [
                    {
                        "choice": "Swift",
                        "url": "/questions/2/choices/1",
                        "votes": 0
                    }, {
                        "choice": "Python",
                        "url": "/questions/2/choices/2",
                        "votes": 0
                    }, {
                        "choice": "Objective-C",
                        "url": "/questions/2/choices/3",
                        "votes": 0
                    }, {
                        "choice": "Ruby",
                        "url": "/questions/2/choices/4",
                        "votes": 0
                    }
                ]
            }

For a tutorial on the blueprint syntax, see this Apiary tutorial or this tutorial on Github.

You can find examples of different blueprints here. The examples can often clarify different aspects of the spec.

Parsing the blueprint

There are many tools that can parse an API blueprint. Drafter is one of the main parsers of the Blueprint. Many other tools build on Drafter and generate static HTML outputs of the blueprint. For example, aglio can parse a blueprint and generate static HTML files.

For a more comprehensive list of tools, see the Tooling section on apiblueprint.org. (Some of these tools require quite a few prerequisites, so I omitted the tutorial steps here for generating the output on your own machine.)

Create a sample HTML output using API Blueprint and Apiary

For this tutorial, we’ll use a platform called Apiary to read and display the API Blueprint. Apiary is just a hosted platform that will remove the need for installing local libraries and utilities to generate the output.

a. Create a new Apiary project

  1. Go to apiary.io and click Quick start with Github. Sign in with your Github account. (If you don’t have a Github account, create one first.)
  2. Sign up for a free hacker account and create a new project.

    You’ll be placed in the API Blueprint editor.

    API Blueprint editor

    By default the Polls blueprint is loaded so you can see how it looks. This blueprint gives you an example of the required format for the Apiary tool to parse and display the content. You can also see the raw file here.

  3. At this point, you would start describing your API using the blueprint syntax in the editor. When you make a mistake, error flags indicate what’s wrong.

    You can read the Apiary tutorial and structure your documentation in the blueprint format. The syntax seems to accommodate different methods applied to the same resources.

    For this tutorial, you’ll integrate the Mashape weather API information, formatted in the blueprint syntax.

  4. Copy the following code, which aligns with the API Blueprint spec, and paste it into the Apiary blueprint editor.

     FORMAT: 1A
     HOST: https://simple-weather.p.mashape.com
    	
     # Weather API
    	
     Display Weather forecast data by latitude and longitude. Get raw weather data OR simple label description of weather forecast of some places.
    	
     # Weather API Root [/]
    	
     # Group Weather
    	
     Resources related to weather in the API.
    	
     ## Weather data [/weatherdata{?lat,lng}]
    	
     ### Get the weather data [GET]
    	
     Get the weather data in your area.
    	
     + Parameters
         + lat: 55.749792 (required, number) - Latitude
         + lng: 37.632495 (required, number) - Longitude
    	
     + Request JSON Message
    	
         + Headers
    	    
                 X-Mashape-Authorization: APIKEY
                 Accept: text/plain
    	
     + Response 200 (application/json)
    	            
         + Body
    	    
                 [
                     {
                   "query": {
                     "count": 1,
                     "created": "2014-05-03T03:57:53Z",
                     "lang": "en-US",
                     "results": {
                       "channel": {
                         "title": "Yahoo! Weather - Tebrau, MY",
                         "link": "http://us.rd.yahoo.com/dailynews/rss/weather/Tebrau__MY/*http://weather.yahoo.com/forecast/MYXX0004_c.html",
                         "description": "Yahoo! Weather for Tebrau, MY",
                         "language": "en-us",
                         "lastBuildDate": "Sat, 03 May 2014 11:00 am MYT",
                         "ttl": "60",
                         "location": {
                           "city": "Tebrau",
                           "country": "Malaysia",
                           "region": ""
                         },
                         "units": {
                           "distance": "km",
                           "pressure": "mb",
                           "speed": "km/h",
                           "temperature": "C"
                         },
                         "wind": {
                           "chill": "32",
                           "direction": "170",
                           "speed": "4.83"
                         },
                         "atmosphere": {
                           "humidity": "66",
                           "pressure": "982.05",
                           "rising": "0",
                           "visibility": "9.99"
                         },
                         "astronomy": {
                           "sunrise": "6:57 am",
                           "sunset": "7:06 pm"
                         },
                         "image": {
                           "title": "Yahoo! Weather",
                           "width": "142",
                           "height": "18",
                           "link": "http://weather.yahoo.com",
                           "url": "http://l.yimg.com/a/i/brand/purplelogo//uh/us/news-wea.gif"
                         },
                         "item": {
                           "title": "Conditions for Tebrau, MY at 11:00 am MYT",
                           "lat": "1.58",
                           "long": "103.74",
                           "link": "http://us.rd.yahoo.com/dailynews/rss/weather/Tebrau__MY/*http://weather.yahoo.com/forecast/MYXX0004_c.html",
                           "pubDate": "Sat, 03 May 2014 11:00 am MYT",
                           "condition": {
                             "code": "28",
                             "date": "Sat, 03 May 2014 11:00 am MYT",
                             "temp": "32",
                             "text": "Mostly Cloudy"
                           },
                           "description": "\n<img src=\"http://l.yimg.com/a/i/us/we/52/28.gif\"/><br />\n<b>Current Conditions:</b><br />\nMostly Cloudy, 32 C<BR />\n<BR /><b>Forecast:</b><BR />\nSat - Scattered Thunderstorms. High: 32 Low: 26<br />\nSun - Thunderstorms. High: 33 Low: 27<br />\nMon - Scattered Thunderstorms. High: 32 Low: 26<br />\nTue - Thunderstorms. High: 32 Low: 26<br />\nWed - Scattered Thunderstorms. High: 32 Low: 27<br />\n<br />\n<a href=\"http://us.rd.yahoo.com/dailynews/rss/weather/Tebrau__MY/*http://weather.yahoo.com/forecast/MYXX0004_c.html\">Full Forecast at Yahoo! Weather</a><BR/><BR/>\n(provided by <a href=\"http://www.weather.com\" >The Weather Channel</a>)<br/>\n",
                           "forecast": [
                             {
                               "code": "38",
                               "date": "3 May 2014",
                               "day": "Sat",
                               "high": "32",
                               "low": "26",
                               "text": "Scattered Thunderstorms"
                             },
                             {
                               "code": "4",
                               "date": "4 May 2014",
                               "day": "Sun",
                               "high": "33",
                               "low": "27",
                               "text": "Thunderstorms"
                             },
                             {
                               "code": "38",
                               "date": "5 May 2014",
                               "day": "Mon",
                               "high": "32",
                               "low": "26",
                               "text": "Scattered Thunderstorms"
                             },
                             {
                               "code": "4",
                               "date": "6 May 2014",
                               "day": "Tue",
                               "high": "32",
                               "low": "26",
                               "text": "Thunderstorms"
                             },
                             {
                               "code": "38",
                               "date": "7 May 2014",
                               "day": "Wed",
                               "high": "32",
                               "low": "27",
                               "text": "Scattered Thunderstorms"
                             }
                           ],
                           "guid": {
                             "isPermaLink": "false",
                             "content": "MYXX0004_2014_05_07_7_00_MYT"
                           }
                         }
                       }
                     }
                   }
                 }
                 ]
    	              
    
  5. Click Save and Publish.

b. Interact with the API on Apiary

In Apiary’s top navigation, click Documentation. Then interact with the API on Apiary by clicking Switch to Console. Call the resources and view the responses.

You can switch between an Example and a Console view in the documentation. The Example view shows pre-built responses. The Console view allows you to enter your own values and generate dynamic responses based on your own API key. This dual display of both the Example and the Console views might align better with user needs.

5.8 Static site generators

What are static site generators

Static site generators are a breed of website compilers that package up a group of files (usually written in Markdown) and make them into a website. There are more than 350 different static site generators. You can browse them at staticgen.com.

Jekyll is one of the most popular static site generators. All of my help content is on Jekyll. You can publish a fully functional tech comm website that includes content re-use, conditional filtering, variables, PDF output, and everything else you might need as a technical writer.

Here’s the documentation theme that I developed for Jekyll:

My Jekyll Documentation theme

There isn’t any kind of special API reference endpoint formatting here (yet), but the platform is so flexible, you can do anything with it as long as you know HTML, CSS, and JavaScript (the fundamental languages of the web).

Whereas the Swagger, RAML, and API Blueprint REST specifications mainly just produce an interactive API console, with a static site generator, you have a tool for building a full-fledged website. With the website, you can include complex navigation, content re-use, translation, PDF generation, and more.

Static site generators give you a flexible web platform

Static site generators give you a lot of flexibility. They’re a good choice if you need control over how your site is structured and published. You’re not just plugging into an existing API documentation framework or architecture. You define your own templates and structure things however you want.

With static site generators, you can do the following:

Developing content in Jekyll

One of the questions people ask about authoring content with static site generators is how you see the output and formatting given that you’re working strictly in text. For example, how do you see images, links, lists, or other formatting if you’re authoring in text?

When you’re authoring a Jekyll site, you open up a preview server that continuously builds your site with each change you save. I open up my text editor on the left, and the auto-generating site on the right. On a third monitor, I usually put the Terminal window so I can see when a new build is done (it takes about 10 seconds for my doc sites).
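The preview server is started from the project’s root folder. Assuming Jekyll is installed (with Bundler managing the gems), the command looks like this:

```shell
# Build the site and rebuild automatically each time a file is saved
bundle exec jekyll serve
# By default, the generated site is served at http://127.0.0.1:4000
```

Leaving this running in a Terminal window is what produces the continuous-build workflow described above.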

Writing in Jekyll

This setup works fairly well. Granted, I do have a Mac Thunderbolt 21-inch monitor, so it gives me more real estate. On a small screen, you might have to switch back and forth between screens to see the output.

Admittedly, the Markdown format is easy to use but also susceptible to error, especially if you have complicated list formatting. When you have ordered list items separated by screenshots and result statements, and sometimes the result statements have lists themselves or note formatting, it can be a bit tricky to get the display right.

But for the majority of the time, writing in Markdown is a joy. You can focus on the content without getting wrapped up in tags. If you do need complex tags, anything you can write in HTML or JavaScript you can include on your page.
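For example, Markdown and raw HTML can sit side by side on the same Jekyll page. A small illustrative sketch (the class name is hypothetical):

```markdown
This sentence is written in **Markdown**.

<div class="note"><b>Note:</b> This block is raw HTML and passes through to the output untouched.</div>
```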

Automating builds from Github

Let’s do an example of publishing in CloudCannon using the Documentation Theme for Jekyll (the theme I built). You don’t need to have a Mac to facilitate the building and publishing; you’ll do that via CloudCannon and Github.

Set up your doc theme on Github

  1. Go to the Github page for the Documentation theme for Jekyll and click Fork in the upper-right.

    When you fork a project, a copy of the project (using the same name) gets added to your own Github repository. You’ll see the project at https://github.com/{your github username}/documentation-theme-jekyll.

    Sometimes people fork repositories to make changes and then propose pull requests of the fork to the original repo. Other times people fork repositories to create a starting point for a splinter project from the original. Github is all about social coding — one person’s ending point is another person’s starting point, and multiple projects can be merged into each other. You can learn more about forking here.

  2. Sign up for a free account at CloudCannon.
  3. Once you sign in, click Create Site.
  4. While viewing your site, in the left sidebar, click Site Settings.
  5. On the Details tab, clear the Minify and serve assets from CDN check box. Then click Update Site. Why this step? The theme you’ll be connecting to uses relative link paths, which don’t play nicely with the CDN caching feature in CloudCannon.

  6. Click Storage Providers and then under Github, click Connect.

    You’ll be taken to Github to authorize CloudCannon’s access to your Github repository.

  7. When asked which repository to authorize, select the Documentation theme for Jekyll repository.
  8. Select the default write direction for changes. The default is for changes made on Github to be pushed to CloudCannon, so the arrow (which represents the flow of changes) points from Github to CloudCannon. That’s the direction you want.
  9. Wait about 5 minutes for the files from your Github repository to sync over to CloudCannon. In the left sidebar, click File Browser. If you see a bunch of files with a green check mark, it means the files have synced over from the Github repo.
  10. View your CloudCannon site at the preview URL in the upper-left corner.

    Preview URL

    It should look just like the Documentation theme for Jekyll here.

Make an update to your Github repo

Remember your Github files are syncing from Github to CloudCannon. Let’s see that workflow in action.

  1. In your browser, go to your Github repository that you forked and make a change.

    For example, browse to the index.md file, click the Edit button (pencil icon), make an update, and then commit the update.

  2. Wait a minute or so, and look for the change at the preview URL to your site on CloudCannon (refresh the page). The change should be reflected.

    You’ve now got a workflow that involves Github as the storage provider syncing to a Jekyll theme hosted on CloudCannon.

What’s cool about CloudCannon and Jekyll

Jekyll is a good solution because it provides nearly infinite flexibility and fits well within the UX web stack.

CloudCannon provides an easy way to allow subject matter experts to author and edit content, since CloudCannon allows you to create editable regions within your Jekyll theme. This would allow a tools team to maintain the site while providing areas for less technical people to author content.

However, CloudCannon wouldn’t be a good solution if your docs require authentication in a highly secure environment. Additionally, Jekyll only provides static HTML files. If you want users to log in and then see personalized content, Jekyll won’t provide this experience.

Publish the surfreport in the Aviator Jekyll theme using CloudCannon’s interface

Let’s say you want to use a theme that provides ready-made templates for REST API documentation. In this activity, you’ll publish the weatherdata endpoints in a Jekyll theme called Aviator. Additionally, rather than syncing the files from a Github repository, you’ll just work with the files directly in CloudCannon.

The Aviator API documentation theme by CloudCannon is designed for REST APIs. You’ll use this theme to input a new endpoint. If you’re continuing on from earlier in this course, you already have a new endpoint called surfreport.

Cloud Cannon Aviator theme

If you’re on a Mac (with Rubygems and Jekyll installed), building Jekyll sites is a lot simpler. But even if you’re on Windows, it won’t matter for this tutorial. You’ll be using CloudCannon, a SaaS website builder product, to build the Jekyll files.

a. Download the Jekyll Aviator theme

  1. Go to Aviator API documentation theme and click the Download ZIP button.

    Download ZIP button for Aviator theme

  2. Unzip the files.

b. Add the weatherdata endpoint doc to the theme

  1. Browse to the theme’s files. In the _api folder, open 1_1_books_list.md in a text editor and look at the format.

    In every Jekyll file, there’s some “frontmatter” at the top. The frontmatter section has three dashes before and after it.

    The frontmatter is formatted in a syntax called YAML (the files often use a .yml extension). YAML is similar to JSON but uses spaces and hyphens instead of curly braces, which makes it more human readable.

  2. Create a new file called 1-6_weatherdata.md and save it in the same _api folder.
  3. Get the data from the weatherdata endpoint from this Weather API on Mashape. Put the data from this endpoint into the Aviator theme’s template.
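To make the frontmatter delimiters described above concrete, here’s a small shell sketch (the filename and field values are hypothetical) that writes a Jekyll-style page and prints just the part between the two sets of three dashes:

```shell
# Create a sample page with YAML frontmatter delimited by two "---" lines
cat > sample-page.md <<'EOF'
---
title: /weatherdata
type: get
---
Page body goes here.
EOF

# Print only the frontmatter lines (between the first and second "---")
awk '/^---$/{n++; next} n==1{print}' sample-page.md
```

This prints the two frontmatter lines (title and type) and skips the body, which is the same region Jekyll parses as YAML.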

The Aviator Jekyll theme has a specific layout that will be applied to all the files inside the _api folder (these files are called a collection). Jekyll will access these values by going to api.title, api.type, and so forth. It will then push this content into the template (which you can see by going to _layouts/multi.md).
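As a rough sketch (this is not the theme’s actual _layouts/multi.md file, which may differ), a layout might loop over the _api collection and pull in these frontmatter values with Liquid tags:

```liquid
{% for api in site.api %}
  <h2>{{ api.title }}</h2>
  <p class="method">{{ api.type | upcase }}</p>
  <p>{{ api.description }}</p>
{% endfor %}
```

Any key you add to a file’s frontmatter becomes available on the item in the same way (for example, api.parameters.title).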

Here’s what my 1-6_weatherdata.md file looks like. Be sure to put the response within square brackets, indented with one tab (4 spaces). You can also download the file here. Remove the raw and endraw tags at the beginning and end of the code sample (which I had to add to keep Jekyll from trying to process it).

---
title: /weatherdata
type: get
description: Get weather forecast by Latitude and Longitude. 
parameters:
  title: Weatherdata parameters
  data:
    - lat:
      - string
      - Required. Latitude.
    - lng:
      - string
      - Required. Longitude.
right_code:
  return: |
    [
    {
    "query": {
    "count": 1,
    "created": "2014-05-03T03:57:53Z",
    "lang": "en-US",
    "results": {
      "channel": {
        "title": "Yahoo! Weather - Tebrau, MY",
        "link": "http://us.rd.yahoo.com/dailynews/rss/weather/Tebrau__MY/*http://weather.yahoo.com/forecast/MYXX0004_c.html",
        "description": "Yahoo! Weather for Tebrau, MY",
        "language": "en-us",
        "lastBuildDate": "Sat, 03 May 2014 11:00 am MYT",
        "ttl": "60",
        "location": {
          "city": "Tebrau",
          "country": "Malaysia",
          "region": ""
        },
        "units": {
          "distance": "km",
          "pressure": "mb",
          "speed": "km/h",
          "temperature": "C"
        },
        "wind": {
          "chill": "32",
          "direction": "170",
          "speed": "4.83"
        },
        "atmosphere": {
          "humidity": "66",
          "pressure": "982.05",
          "rising": "0",
          "visibility": "9.99"
        },
        "astronomy": {
          "sunrise": "6:57 am",
          "sunset": "7:06 pm"
        },
        "image": {
          "title": "Yahoo! Weather",
          "width": "142",
          "height": "18",
          "link": "http://weather.yahoo.com",
          "url": "http://l.yimg.com/a/i/brand/purplelogo//uh/us/news-wea.gif"
        },
        "item": {
          "title": "Conditions for Tebrau, MY at 11:00 am MYT",
          "lat": "1.58",
          "long": "103.74",
          "link": "http://us.rd.yahoo.com/dailynews/rss/weather/Tebrau__MY/*http://weather.yahoo.com/forecast/MYXX0004_c.html",
          "pubDate": "Sat, 03 May 2014 11:00 am MYT",
          "condition": {
            "code": "28",
            "date": "Sat, 03 May 2014 11:00 am MYT",
            "temp": "32",
            "text": "Mostly Cloudy"
          },
          "description": "\n<img src=\"http://l.yimg.com/a/i/us/we/52/28.gif\"/><br />\n<b>Current Conditions:</b><br />\nMostly Cloudy, 32 C<BR />\n<BR /><b>Forecast:</b><BR />\nSat - Scattered Thunderstorms. High: 32 Low: 26<br />\nSun - Thunderstorms. High: 33 Low: 27<br />\nMon - Scattered Thunderstorms. High: 32 Low: 26<br />\nTue - Thunderstorms. High: 32 Low: 26<br />\nWed - Scattered Thunderstorms. High: 32 Low: 27<br />\n<br />\n<a href=\"http://us.rd.yahoo.com/dailynews/rss/weather/Tebrau__MY/*http://weather.yahoo.com/forecast/MYXX0004_c.html\">Full Forecast at Yahoo! Weather</a><BR/><BR/>\n(provided by <a href=\"http://www.weather.com\" >The Weather Channel</a>)<br/>\n",
          "forecast": [
            {
              "code": "38",
              "date": "3 May 2014",
              "day": "Sat",
              "high": "32",
              "low": "26",
              "text": "Scattered Thunderstorms"
            },
            {
              "code": "4",
              "date": "4 May 2014",
              "day": "Sun",
              "high": "33",
              "low": "27",
              "text": "Thunderstorms"
            },
            {
              "code": "38",
              "date": "5 May 2014",
              "day": "Mon",
              "high": "32",
              "low": "26",
              "text": "Scattered Thunderstorms"
            },
            {
              "code": "4",
              "date": "6 May 2014",
              "day": "Tue",
              "high": "32",
              "low": "26",
              "text": "Thunderstorms"
            },
            {
              "code": "38",
              "date": "7 May 2014",
              "day": "Wed",
              "high": "32",
              "low": "27",
              "text": "Scattered Thunderstorms"
            }
          ],
          "guid": {
            "isPermaLink": "false",
            "content": "MYXX0004_2014_05_07_7_00_MYT"
          }
        }
      }
    }
    }
    }
    ]
---
	
<div class="code-viewer">

<pre data-language="cURL">
curl --get --include 'https://simple-weather.p.mashape.com/weatherdata?lat=1.0&lng=1.0' \
  -H 'X-Mashape-Key: EF3g83pKnzmshgoksF83V6JB6QyTp1cGrrdjsnczTkkYgYrp8p' \
  -H 'Accept: application/json'
</pre>

</div>

c. Publish your Jekyll project on CloudCannon

  1. Go to http://cloudcannon.com and, if you don’t already have an account, sign up for a free account by clicking Sign Up.
  2. After signing up and logging in, click Create Site.
  3. Type a name for the site (e.g., Aviator Test) and press your Enter key.
  4. Click the Upload Files button in the upper-right corner.

    Uploading to Cloud Cannon

  5. Open your Aviator theme files, select them all, and drag them into the upload file dialog box. (Don’t just drag the Aviator theme folder into CloudCannon.)

  6. After the files finish uploading (and little green check marks appear next to the files), click the preview link in the upper-left corner:

    Preview link

  7. When prompted, add a password for viewing the site.
  8. Click the preview link to view the site.

The site should appear as follows:

CloudCannon Weatherdata endpoint

You can see my site at http://delightful-nightingale.cloudvent.net/. The password is stcsummit.

If your endpoint doesn’t appear, you probably have invalid YAML syntax. Make sure the left edge of the response is indented at least one tab (4 spaces).
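For instance, with the literal block indicator (|) shown in the frontmatter example above, every line of the response must be indented further than the return: key itself. A minimal sketch of the required shape:

```yaml
right_code:
  return: |
    [
      { "query": { "count": 1 } }
    ]
```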

Each time you save the site, CloudCannon actually rebuilds the Jekyll files into the site that you see.

If you switch between the code editor and visual display, the code sample gets mangled. (The CloudCannon editor will convert the https path into a link.) This is a bug in CloudCannon that will be fixed.

Doc Websites Using Jekyll

Here are some websites using Jekyll:

5.9 Readme.io

Software as a service sites

You can publish documentation on hosted platforms specifically built for API and developer documentation. Two promising platforms are the following:

No need to spend time developing your own site

If you consider how much time it requires to build, maintain, and troubleshoot your own website, it really does make sense to consider an existing third-party platform where someone has already built all of this out for you.

Publish endpoint documentation on readme.io

In this tutorial we’ll explore how to publish content on readme.io in more depth.

Readme.io

In this workshop activity, you’ll publish this weatherdata endpoint documentation on readme.io.

a. Set up a readme.io project

  1. Click the Sign Up button in the upper-right corner and sign up for an account.
  2. Add a Project Name (e.g., Weather API), Subdomain (e.g., weather-api), and Project Logo. Then click Create Project.

    Project Settings

b. Configure API settings

  1. In the left sidebar, under Settings, click API Settings.

    This is where you add the authentication information necessary for making calls to the API.

  2. For the API Base URL, enter https://simple-weather.p.mashape.com.
  3. Leave the other settings not mentioned here at the defaults.
  4. In the Static Headers section, click Add Header and add these two headers as key-value pairs in the appropriate fields:

     X-Mashape-Key APIKEY
     Accept application/json
    

    Readme.io static headers

  5. Select the API Explorer check box (if it’s not already selected).
  6. Click Save.

c. Add endpoint documentation

  1. In the left sidebar, click Documentation.
  2. Click + to add a new page, and choose the RESTful API template.
  3. Select the GET method next to the title.
  4. Add in the documentation from the weatherdata endpoint documentation. For example, add the description, parameters, cURL call, and response.

    Inputting weatherdata into readme.io

  5. Click Save.
  6. At the top of the screen, click the project name to view the site.

d. Interact with the documentation

  1. Click Documentation in the header to go to your site.
  2. Click the Weatherdata endpoint in the sidebar.
  3. In the Try It Out section, insert some values into the lat and lng fields, and then click Try It.

    Try it on readme.io

The experience is similar to Swagger in that the response appears directly in the documentation. This API Explorer gives you a sense of the data returned by the API.

Limitations with Readme.io

Readme.io is a pretty sweet platform, and you don’t have to worry about describing your API with a specification such as RAML or Swagger. But this also has downsides. It means your docs are tied to the Readme.io platform. Additionally, you can’t auto-generate the output from annotations in your source code.

Additionally, if hosting your docs in the cloud isn’t an option, that may also pose challenges. Finally, there isn’t any content re-use functionality, so if you’re single-sourcing your documentation to multiple outputs, Readme.io may not be for you.

Even so, the output is sharp and the talent behind this site is top-notch. The platform is constantly growing with new features, so maybe all of this functionality will eventually be there.

6.0 Miredot

How Miredot works

One of the tools you can use to generate API documentation from source, as long as your source is Java-based, is Miredot.

Miredot is a plugin for Maven, which is a build tool that you integrate into your Java IDE. Miredot can generate an offline website that looks like this:

Miredot example

You can read the Getting started guide for Miredot for instructions on how to incorporate it into your Java project.

Miredot supports many annotations in the source code to generate the output. The most important annotations it supports include those from JAX-RS and Jackson. See Supported Frameworks for a complete set of supported annotations.

Example annotations

Here’s an example of what these annotations look like. Look at the CatalogService.java file. In it, one of the methods is updateCategory.

You can see that above this method is a “doc block” that provides a description, the parameters, method, and other details:

    /**
     * Update category name and description. Cannot be used to edit products in this category.
     *
     * @param categoryId The id of the category that will be updated
     * @param category   The category details
     * @summary Update category name and description
     */
    @PUT
    @Path("/category/{id}")
    @Consumes({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON})
    public void updateCategory(@PathParam("id") Long categoryId, Category category);

Miredot consumes this information and generates an output.

Activity: Explore Miredot’s output

  1. First browse the Miredot sample code.
  2. To see how this information gets rendered in the Miredot output, go to the Petstore example docs, expand Catalog > Category on the right, and then select PUT. Or go directly here.

Miredot update category

If you browse the navigation of Miredot’s output, it’s an interesting-looking solution. This kind of documentation might fit a Java-based REST API well.

But the authoring of the docs would really only work for Java developers. It wouldn’t work well for technical writers unless you’re plugged into the source control workflow.

6.1 Custom UX solutions

Beautiful API doc sites require front-end design skills

If you want to build a beautiful API doc website that rivals sites such as Parse.com, you’ll most likely need to involve a UX engineer to build it. Fortunately, this is a project that many UX engineers and other web developers are excited to tackle.

Getting help from your UX team

When it makes sense to partner with UX

If you want to integrate your API documentation into your main website, ask the person designing your main website for strategies on integrating the doc site into it. This integration might allow you to leverage authentication (if needed) and other interaction points (such as with forums or support tickets).

Web platform languages

Many custom websites are built using a variety of JavaScript, HTML, and CSS tools. Most likely you’ll be able to supply a batch of Markdown or HTML files to the web developer to integrate.

Solution at Badgeville

When I worked at Badgeville, our solution for publishing API documentation was to use custom scripts that pulled some information from source files and pushed them into templates.

Use scripts to generate JSON from source code

The source files were stored on GitHub, and the writers could edit the descriptions of the parameters, fields, and so on. Our developers created scripts that would look into the code of the source files and render content into JSON files in a specific structure.
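The actual Badgeville scripts aren’t public, so here’s a hypothetical Python sketch of the general idea: scan a doc block for @param annotations and emit a JSON structure that a CMS template could consume. The field names are invented for illustration:

```python
import json
import re

# Hypothetical doc block, in the Javadoc style shown earlier.
source = """
/**
 * Update category name and description.
 *
 * @param categoryId The id of the category that will be updated
 * @param category   The category details
 */
"""

# Pull each @param name and description out of the doc block.
params = [
    {"name": name, "description": desc.strip()}
    for name, desc in re.findall(r"@param\s+(\w+)\s+(.+)", source)
]

endpoint_json = json.dumps({"name": "updateCategory", "parameters": params}, indent=2)
print(endpoint_json)
```

A real script would walk the whole source tree and handle many more annotation types, but the output is the same idea: a predictable JSON structure a template can loop over.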

Import the JSON into your web CMS

Since we published all help content on a Drupal site, we hired a Drupal development agency that would take information from a JSON file and push the information into a custom-built template.

After the scripts were integrated into the Drupal site, we would have developers periodically run the build scripts to generate a batch of JSON files.

The upload scripts checked to ensure the JSON files were valid, and then they were pushed into the templates and published. Each upload would overwrite any existing content with the same file names.
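A minimal sketch of that validation step, assuming the generated files are plain JSON (the filenames and contents here are hypothetical):

```python
import json

# Hypothetical batch of generated files: one valid, one malformed.
uploads = {
    "players.json": '{"endpoint": "/players", "method": "GET"}',
    "rewards.json": '{"endpoint": "/rewards", "method": }',  # invalid JSON
}

valid = {}
for filename, content in uploads.items():
    try:
        valid[filename] = json.loads(content)  # only well-formed files get pushed
    except json.JSONDecodeError:
        print(f"Skipping {filename}: invalid JSON")

print(sorted(valid))  # only players.json survives validation
```

Checking validity before the upload keeps a single malformed file from corrupting the published templates.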

Developing custom solutions

If your documentation is published on a web-based CMS, you can probably find a development agency to create a similar script (if you don’t have in-house engineers to create them).

A lot of companies have custom solutions for their API documentation. Sometimes this kind of solution just makes sense and allows you to right-size the workflow to fit your specific information.

6.2 Help authoring tools

Can I use a help authoring tool?

Help authoring tools (HATs) refer to the common toolset often used by technical writers. Common HATs include MadCap Flare, Adobe RoboHelp, Author-it, and more. Sure, you can use these tools to create API documentation.

Here’s a sample help output from Flare for the Photobucket API:

Publishing API docs

Pros of using a help authoring tool

Some advantages of using a HAT include the following:

+ Comfortable authoring environment for writers

If writers are going to be creating and publishing the documentation, using a tool technical writers are familiar with is a good idea.

+ Integrates reference information with guides and tutorials

You won’t have a division between an output that is generated from a reference doc generator (such as Swagger) and user guide material. It can be one seamless whole.

+ Handles the toughest tech comm scenarios

When you have to deal with versioning, translation, content re-use, conditional filtering, authoring workflows, and PDF output, you’re going to struggle to make this work with the other tools mentioned in this course.

+ HATs reinforce the fact that API doc is more than endpoints

HATs reinforce the fact that good API documentation is more than just a set of endpoints and parameters. Good API documentation includes guides and tutorial topics as well. Developers rarely write that kind of information, yet it’s just as important as the reference documentation. HATs lend themselves more to these guide and tutorial topics.

Cons of using a help authoring tool

Some disadvantages of using a HAT include the following:

- Most HATs don’t run on Macs

Using a HAT also presents some disadvantages. First, almost no HATs run on Macs, yet many developers and designers prefer Macs because they offer a better development platform. For example, to make a cURL call on a Mac, you just open Terminal and paste in the call. On a Windows machine, installing and using cURL and libcurl is much more onerous.

- Dated UI won’t help sell the product

The output from a help authoring tool usually looks dated. The problem with the dated tripane look and feel is that API documentation is the interface that users navigate. There isn’t a separate GUI interface that the help complements. The help is front and center as the information product that users get.

If you want to promote the idea that your API is modern and awesome, you want a website that looks modern and awesome as well. In fact, you might have a UX developer create the website itself. If you lead with an outdated tripane site that loads frames, developers may not be as excited to use your API.

In Flare’s latest release, you can customize the display in pretty significant ways, so maybe this will help retire the dated tripane look and feel.

- Doesn’t integrate with other site components

Many of the API doc sites are single-website experiences. The API docs are usually part of the main website, not a link that opens in its own window, separate from the other content.

If you can output a format that another site can consume, great. But if you split users across separate sites, you’re following a less common pattern for API doc sites.

- Removes authoring capability from developers

If you’re hoping for developers to contribute to the documentation, it’s going to be hard to get buy-in if you’re using a HAT. HATs are tools for writers, not developers. Then again, if you don’t expect developers to contribute, then this becomes a moot point.

6.3 Design patterns

What are design patterns

Design patterns are common themes in the way something is designed. In looking over the many API doc sites, I tried to find some common design patterns in the way the content was published. I already mentioned the division between guides, tutorials, and reference documentation. Here I want to explore more design-specific elements in API doc sites.

Earth patterns, Venefice. Flickr

Several design patterns with API docs

Here are several design patterns with API doc sites:

I’ll explore each of these elements in depth in upcoming pages.

Some non-patterns

Here are some non-patterns. By this, I mean these are elements that aren’t as common in API doc sites:

By non-patterns, I don’t mean these elements aren’t a good idea. But generally they aren’t emphasized in many API doc sites.

6.4 Design pattern: Structure and templates

Using a template

If you have a lot of endpoints, you can construct a template that forces each endpoint’s documentation into the same structure. This is important because you want consistency across endpoints. You’re basically filling in the blanks.

You could just remember to add the same sections to each page, but that approach relies on manual consistency.

Structure, by Rafal Zych

Pushing values into more stylized outputs

You might want to insert various values (descriptions, methods, parameters, etc.) into a highly stylized output. Rather than work with all the style tags in your page directly, you can create values that exist as an object on a page. A custom script can loop through the objects and insert the values into your template.

Templates in Jekyll

Different authoring tools have different ways of processing templates. In Jekyll, a static site generator, this is how you do it.

In the frontmatter of a page, you list out the key-value pairs between triple-dashed lines:

---
resource_name: surfreport
resource_description: Gets the surf conditions for a specific beach.
endpoint: /surfreport
---

And so on.

You then use a for loop to cycle through each of the items and insert them into your template:

{% for p in site.endpoints %}
<div class="resName">{{p.resource_name}}</div>
<div class="resDesc">{{p.resource_description}}</div>
<div class="endpointDef">{{p.endpoint}}</div>
{% endfor %}
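The same loop-and-fill idea, sketched in Python rather than Liquid, in case your platform isn’t Jekyll (the endpoint values mirror the frontmatter above):

```python
# Each endpoint page contributes a dict of values, like Jekyll frontmatter.
endpoints = [
    {
        "resource_name": "surfreport",
        "resource_description": "Gets the surf conditions for a specific beach.",
        "endpoint": "/surfreport",
    },
]

# The template lives in one place; changing it restyles every endpoint page.
template = (
    '<div class="resName">{resource_name}</div>\n'
    '<div class="resDesc">{resource_description}</div>\n'
    '<div class="endpointDef">{endpoint}</div>'
)

html = "\n".join(template.format(**p) for p in endpoints)
print(html)
```

Whatever the tooling, the pattern is the same: the values live with each endpoint, and the presentation lives in one template.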

Templates make it easy to change display globally

This approach makes it easy to change your template without reformatting all of your pages. For example, if you decide to change the order of the elements on the page, or if you want to add new classes or something, you just alter the template. The values remain the same, since they can be processed in any order.

Note that this kind of structure is really only necessary if you have a lot of different endpoints. If you only have a handful, there’s no need to really automate the template process.

6.5 Website platform

One integrated website

Many API doc sites provide one integrated website to find all of the information. You usually aren’t opening help in a new window, separate from the other content. The website is branded with the same look and feel as the product. Here’s an example from Yelp:

Yelp API documentation

Documentation as product interface

I hinted at this earlier, but with API documentation, there isn’t an application interface that the documentation complements. In most cases, the API documentation itself is the product that users navigate to use your product. As such, users will expect more from it.

Integrating information across the entire site

One of the challenges in using documentation generated from Swagger, Miredot, or some other document generator is figuring out how to integrate it with the rest of the site. Ideally, you want users to have a seamless experience across the entire website. If your endpoints are rendered into their own separate view, how do you integrate the endpoint reference into the rest of the documentation?

If you can integrate the branding and search, users may not care. But if it feels like they’re navigating several sites poorly cobbled together, the experience will be fragmented.

Think about other content that users will interact with, such as marketing content, terms of service, support, and so on. How do you pull all of this information into a single site experience without resorting to a bloated CMS like Drupal or some other web framework?

6.6 Abundant code samples

Developers love code examples

More than anything else, developers love code examples. Usually the more code you can add to your documentation, the better.

Here’s an example from Evernote’s API:

Evernote code examples

The writers at Parse emphasize the importance of code samples:

Liberally sprinkle real world examples throughout your documentation. No developer will ever complain that there are too many examples. They dramatically reduce the time for developers to understand your product. In fact, we even have example code right on our homepage.

Syntax highlighting

For code samples, you want to incorporate syntax highlighting. There are numerous syntax highlighters that you can usually incorporate into your platform. For example, Jekyll uses either Pygments or Rouge. These highlighters have stylesheets prepared to highlight languages based on specific syntax.

When you include a code sample, you usually instruct the syntax highlighter what language to use. If you don’t have access to a syntax highlighter for your platform, you can always add the highlighting manually using a syntax highlighter library.
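Real highlighters like Pygments and Rouge tokenize the code with language-specific lexers, but the core idea is just wrapping tokens in styled markup. A toy Python illustration for JSON property names (the `key` class name is made up):

```python
import html
import re

code = '{"lat": 37.8, "lng": -122.4}'

# Escape the code for HTML, then wrap each JSON property name in a span
# that a stylesheet can color. Real highlighters use full lexers instead.
escaped = html.escape(code)
highlighted = re.sub(
    r'(&quot;\w+&quot;)(\s*:)',
    r'<span class="key">\1</span>\2',
    escaped,
)
print(highlighted)
```

The stylesheet then assigns colors to each class, which is why highlighters ship with interchangeable themes.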

Code formatting

Another important element in code samples is to use consistent white space. Although computers can read minified code, users usually can’t or won’t want to look at minified code. Use a tool to format the code with the appropriate spacing and line breaks.
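For JSON samples, this kind of reformatting is nearly a one-liner; a sketch using Python’s standard library (the response body is invented):

```python
import json

# A minified response body, hard to scan by eye.
minified = '{"beach":"Santa Cruz","surfheight":5,"tide":-1}'

# Re-serialize with indentation and line breaks for the docs.
pretty = json.dumps(json.loads(minified), indent=2)
print(pretty)
```

Most editors and languages offer an equivalent formatter, so there’s rarely a reason to publish minified code in documentation.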

Sometimes development shops have an official style guide for formatting code samples. This might prescribe details such as the following:

For example, here’s a JavaScript style guide.

If developers don’t have an official style guide, ask them to recommend one online, and compare the code samples against the guidelines in it.

6.7 Long-ish pages

Minimize clicking

One of the starkest differences between regular GUI documentation and developer documentation is that developer doc pages tend to be longer. In a post on designing great API docs, the writers at Parse explain:

Minimize Clicking

It’s no secret that developers hate to click. Don’t spread your documentation onto a million different pages. Keep related topics close to each other on the same page.

We’re big fans of long single page guides that let users see the big picture with the ability to easily zoom into the details with a persistent navigation bar. This has the great side effect that users can search all the content with an in-page browser search.

A great example of this is the Backbone.js documentation, which has everything at your fingertips.

Examples of long pages

Here’s the Backbone.js documentation:

Backbone JS

For another example of a long page, see the Reddit API documentation.

Why long pages?

Why do API doc sites tend to have long-ish pages? Here are a few reasons:

Is it a best practice to make long pages?

Usually the long pages on a site are the reference pages. Personally, I’m not a fan of listing every endpoint on the same page. Either way you approach it, developers probably won’t care that much. They will care much more about the content on the page rather than the page length.

6.8 API Interactivity

API explorers

A recurring feature in many API doc publishing sites is interactivity. Swagger, readme.io, Apiary, and many other platforms allow you to try out calls and see responses.

For APIs not on these platforms, engineers often build the API Explorer themselves. Since the wiring to make calls and receive responses already exists, creating an API Explorer is a feasible task for a UI developer.

Here’s a sample API explorer from Twitter:

Twitter API Explorer

Novel or actually instructive?

Are API explorers merely a novelty, or are they actually instructive? If you’re going to make a lot of calls, there’s no reason you couldn’t just use cURL to quickly make the request and see the response. However, the API Explorer provides more of a GUI, which makes the endpoints accessible to more people. You don’t have to worry about entering exactly the right syntax in your cURL call; you just fill in the blanks.

However, API Explorers tend to work better with simpler APIs. If your API requires you to retrieve data before you can use a certain endpoint, or if the data you submit is a JSON object in the body of the post, or you have some other complicated interdependency with the endpoints, the API Explorer might not be as helpful.

Nevertheless, clearly it is a design pattern to provide this kind of interactivity in the documentation.

Dynamically populated code samples with API keys

If your users log in, you can store their API keys and dynamically populate the calls with them. Not doing so seems a bit lazy from a user-experience standpoint. In the code samples, the API key can simply be a variable populated with the user’s stored key.

However, if you store customer API keys on your site, this might create authentication and login requirements that make your site more complicated. If you have users logging in and dynamically populating the explorer with their API keys, you’ll probably need a front-end designer and web developer to pull this off. readme.io is one of the platforms that allows you to store API keys for users and dynamically populate your code samples with them.
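A minimal sketch of the substitution itself, assuming the user’s key is already stored server-side (the variable names and key value are invented):

```python
from string import Template

# Code sample with a placeholder instead of a hard-coded key.
sample = Template(
    "curl -H 'X-Mashape-Key: $api_key' "
    "https://simple-weather.p.mashape.com/weatherdata?lat=37.8&lng=-122.4"
)

# After login, swap in the key stored for this user (hypothetical value).
user_api_key = "abc123"
rendered = sample.substitute(api_key=user_api_key)
print(rendered)
```

The hard part isn’t the substitution; it’s the login, storage, and security infrastructure around it, which is where the front-end and web developers come in.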

6.9 Challenging factors

Requirements that may cause problems

A lot of the solutions we’ve looked at tend to break down when you start applying more difficult requirements in your tech comm scenario. If you have to deal with some of these challenges, you may have to resort to more traditional tech comm tooling.

You can handle all of this through a custom platform such as Jekyll, but it’s not going to be a push-button experience. It will require a higher degree of technical skill and maneuvering.

With my Jekyll doc theme, I’m single sourcing one of my projects into about 9 different outputs (for different product lines and programming languages). Jekyll provides a templating language called Liquid that allows you to do conditional filtering, content re-use, variables, and more.

To handle PDF, I’m using a tool called Prince that converts a list of HTML pages into a PDF document, complete with running headers and footers, page numbering, and other print styling.

To handle authentication, I upload the content into a Salesforce site.com and use Salesforce as the authentication layer. It’s my least favorite part of the solution, but a more integrated authentication will probably involve some engineering resources to help out.

7.0 Tools versus content

Don’t forget the content

Although this course has focused heavily on tools, I want to emphasize that content always trumps tooling. The content should be your primary focus, not the tools you use to publish the content.

Once you get the tooling infrastructure in place, it should mostly take a back seat to the daily tasks of content development.

Dave's Bike Tools, Bri Pettis, Flickr

I’ve changed my doc platforms numerous times, and rarely does anyone seem to care or notice. As long as it “looks decent,” most project managers and users focus on the content much more than the design. In some ways, the design should be invisible and unobtrusive, keeping the focus on the content. In other words, the user shouldn’t be distracted by the tooling.

For the most part, users and reviewers won’t even notice all the effort behind the tools. Even when you’ve managed to single source content, loop through a custom collection, and incorporate language switchers to jump from platform to platform, the feedback you’ll get is, “This sentence is incorrect.” Or, “There’s a typo here.”

7.1 The job market for API technical writers

Demand is high

Technical writers who can write developer documentation are in high demand, especially in the Silicon Valley area. There are plenty of technical writers who can write documentation for graphical user interfaces but not many who can navigate the developer landscape to provide highly technical documentation for developers working in code.

In this section of my API documentation course, I’ll dive into the job market for API documentation.

Ability to read programming languages

In nearly every job description for technical writers in developer documentation, you’ll see requirements like this:

Ability to read code in one or more programming languages, such as Java, C++, or Python.

You may wonder what the motivation is behind these requirements, especially if the core APIs are RESTful. Here’s the most common scenario. The company has a REST API for interacting with their services. However, to make it easy for developers, the company provides SDKs and client implementations in various languages for the REST API.

Take a look at Algolia’s API for an example. You can view the documentation for their REST API here. However, when you implement Algolia (which provides a search feature for your site), you’ll want to follow the documentation for your specific platform.

Algolia client implementations

Although users could construct their own code when using the REST endpoints, most developers would rather leverage existing code to just copy and paste what they need.

When I worked at Badgeville, we developed a collection of JavaScript widgets that provided code developers could easily copy and paste into their web pages, making a few adjustments as needed. Sure, developers could have created their own JavaScript widget code based on calls to the REST endpoints, but it can be tricky to know how to retrieve all the right information and then manipulate it the right way in your language.

Remember that developers are typically using a REST API as a third-party service. Developers’ main focus is their own company’s code; they’re just leveraging your REST API as an additional service. As such, developers want to get in, get the code, and get out. This is why companies need to provide client references in as many languages as possible: these client implementations make it easy for developers to implement the API.

If you were recruiting for a technical writer to document Algolia, how would you word the job requirements? Can you now see why, even though the core work involves documenting the REST API, it would also be good to have an “ability to read code in one or more programming languages, such as Java, C++, or Python”?

Technical writers who are former programmers

When faced with these multi-language documentation challenges, hiring managers often search for technical writers who are former programmers. There are a good number of technical writers who were once programmers, and they can command more respect and compete more effectively for these developer documentation jobs.

But even developers will not know more than a few languages. Finding a technical writer who commands a high degree of English language fluency in addition to possessing a deep technical knowledge of Java, Python, C++, .NET, Ruby, and more is like finding a unicorn. (In other words, these technical writers don’t really exist.)

If you find one of these technical writers, the person is likely making a small fortune in contracting rates and has a near limitless choice of jobs. Companies often list knowledge of multiple programming languages as a requirement, but they realize they’ll never find a candidate who is both a Shakespeare and a Steve Wozniak.

Why does this hybrid individual not exist? In part, it’s because the more a person enters into the worldview of computer programming, the more they begin thinking in computer terms and processes. Computers by definition are non-human. The more you develop code, the more your brain’s language starts thinking and expressing itself with these non-human, computer-driven gears. Ultimately, you begin communicating less and less to humans using regular speech and fall more into the non-human, mechanical lingo.

This is both good and bad — good because other engineers in the same mindset may better understand you, but bad because anyone who doesn’t inhabit that perspective and embrace the terminology already will be somewhat lost.

Remember that the terminology and mental model vary from one language and platform to the next. One user may speak fluently in Ruby, but that language may not connect with a .NET developer. Consequently, speaking “geek” can connect with some developers and backfire with others.

Wide, not deep understanding of programming

Although you may have client implementations in a variety of programming languages, the implementations will be brief. The core documentation needed will most likely be for the REST API, and then you will have a variety of reference implementations or demo apps in these other languages.

You don’t need to have deep technical knowledge of each of the platforms to document them. You’re probably just scratching the surface with each of them.

As such, your knowledge of programming languages has to be more wide than deep. It will probably be helpful to have a grounding in fundamental programming concepts, and a familiarity across a smattering of languages instead of in-depth technical knowledge of just one language.

Having broad technical knowledge of 6 programming languages isn’t really easy to pull off, though. As soon as you throw yourself into learning one language, the concepts will likely start blending together.

And unless you’re immersed in the language on a regular basis, the details may never fully sink in. You’ll be like Sisyphus, forever rolling a boulder up a hill (learning a programming language), only to have the boulder roll back down (forgetting what you learned) the following month.

Undoubtedly, technical writers are at a disadvantage when it comes to learning programming. Full immersion is the only way to become fluent in a language, whether referring to programming languages or spoken languages like Spanish. I studied Spanish for 3 years in high school, but it wasn’t until I lived in Venezuela and interacted with locals for 6 months continuously speaking Spanish that the language finally clicked for me.

As such, you might consider diving deep into one core programming language (like Java) and briefly playing around in other languages (like Python, C++, .NET, Ruby, Objective C, and JavaScript).

Of course, you’ll need to find a lot of time for this as well. Don’t expect to have much time on the job for actually learning it. It’s best if you can make learning programming one of your “hobbies.”

Diverse technical landscape

The technical landscape is diverse, so the generalizations I’m providing here may not hold true in all companies. You may be in a Java or JavaScript shop where all you need to know is Java/JavaScript. If that’s the case, you’ll need to develop a deeper knowledge of the programming language so you can provide more depth.

However, with the proliferation of REST APIs, this scenario is much less common. Companies can’t afford to cater only to one programming language. Doing so drastically reduces their audience and limits their revenue. The advantages of providing a universally accessible API using any language platform usually outweigh the specifics you get from a native library API.

The company I currently work for has a Java, .NET, and C++ API that each do the same thing but in different languages. Maintaining the same functionality across three separate platforms is a serious challenge for developers. Not only is it difficult to find skill sets for developers across these three platforms, having multiple code bases makes it harder to test and maintain the code. It’s three times the amount of work, not to mention three times the amount of documentation.

Additionally, since native library APIs are implemented locally in the developer’s code, it’s almost impossible to get users to upgrade to the latest version of your API. You have to send out new library files and explain how to upgrade versions, licenses, and other deployment code.

If you’ve ever tried to get a big company with a lengthy deployment process on board with making updates every couple of months to the code they’ve deployed, you realize how impractical it is. Rolling out a simple update can take 6 months or more.

It’s much more feasible for API development shops to move to a SaaS model using REST, and then create various client implementations that briefly demonstrate how to call the REST API using the different languages. With a REST API, you can update it at any time (hopefully maintaining backwards compatibility), and developers can simply continue using their same deployment code.

The more you can facilitate implementation in the user’s desired language, the higher your chances of implementation — which means greater product adoption, revenue, and success.

Consolations for technical writers

This proliferation of code and platforms creates more pressure on the multi-lingual capabilities of technical writers. Here’s one consolation, though. If you can understand what’s going on in one programming language, then your description of the reference implementations in other programming languages will follow highly similar patterns.

What mainly changes across languages are the code snippets and some of the terms. You may refer to “functions” instead of “classes,” and so on. Even so, getting all the language right can be a serious challenge, which is why it’s so hard to find technical writers who have skills for producing developer documentation.

With this scenario of having multiple client implementations, you’ll face other challenges, such as maintaining consistency across the various platforms. As you try to single source your explanations for various languages, your documentation code will become complex and difficult to maintain.

Additionally, product managers may want you to push out separate outputs within each programming language channel to keep things simple for the users. Can you imagine pushing out a dozen different outputs across different languages for content that follows highly similar patterns and has common explanations but differs in just enough ways to make single sourcing from the same core content an act of sorcery? Here is where you have to put your technical writing wizard hat on and pull off level 9 incantations.

Not an easy problem to solve

The diversity and complexity of programming languages is not an easy problem to solve. To be a successful API technical writer, you’ll likely need to incorporate at least a regular regimen of programming study.

Fortunately, there are many helpful resources (my favorite being Safari Books Online). If you can work in a couple of hours a day, you’ll be surprised at the progress you can make.

Some of the principles that are fundamental to programming, like variables, loops, and try-catch statements, will begin to feel second-nature, since these techniques are common across almost all programming languages. You’ll also be equipped with a confidence that you can learn what you need to learn on your own (this is the hallmark of a good education).

But in discussions with hiring managers looking to fill 6-month contracts for technical writers already familiar with their programming environment, it will be a hard sell to persuade the manager that “you can learn anything.”

The truth is that you can learn anything, but it may take a long time to do so. It can take years to learn Java programming, and you’ll never get the kind of project experience that would give you the understanding that a developer possesses.

Strategies to get by

When you work in developer documentation environments, one strategy is to interview engineers about what’s going on in the code, and then try your best to describe the actions in as clear speech as possible.

You can always fall back on the idea that for those users who need Python, the Python code should look somewhat familiar to them. Well-written code should be, in some sense, self-descriptive in what it’s doing. Unless there’s something odd or non-standard in the approach, engineers fluent in code should be able to get a sense of what the code is doing.

In your documentation, you’ll need to focus on the higher level information, the “why” behind the approach, the highlighting of any non-standard techniques, and the general strategy behind the code.

Just remember that even though someone is a developer, it doesn’t mean he or she is an expert with all code. For example, the developer may be a Java programmer who knows just enough iOS to implement something on iOS, but for more detailed knowledge, the developer may be depending on code samples in documentation.

Conversely, a developer who has real expertise in iOS might be winging it in Java-land and relying on your documentation to pull off a basic implementation.

More detail in the documentation is always welcome, but you have to use a progressive-disclosure approach so that expert users aren’t bogged down with novice-level detail; at the same time, you have to make this additional detail available for those who need it. Expandable sections, additional pages, or other ways of grouping the more basic detail (if you can provide it) might be a good approach.

There’s a reason developer documentation jobs pay more — the job involves a lot more difficulty and challenges, in addition to technical expertise. At the same time, it’s just these challenges that make the job more interesting and rewarding.

7.1 Overview to native library APIs

About native library APIs

In previous parts of the course, we focused exclusively on REST APIs. Native library APIs (also called class-based APIs or just APIs) are notably different in the following ways:

We will focus this section on Java APIs, since they’re probably one of the most common. However, many of the concepts and code conventions mentioned here will apply to the other languages, with minor differences.

Eclipse

Do you have to be a programmer to document native library APIs?

Because native library APIs are so dependent on a specific programming language, the documentation is usually written or driven by engineers rather than generalist technical writers. This is one area where it helps to be a former software engineer when doing documentation.

Even so, you don’t need to be a programmer. You just need a minimal understanding of the language. Technical writers can contribute a lot here in terms of style, consistency, clarity, tagging, and overall professionalism.

You know what happens when engineers write — the content is cryptic and often incomplete. Usually the developer assumes everyone is as knowledgeable as he or she is, and any kind of extra explanatory detail, examples, cross-references, glossaries, or other helpful information is omitted.

My approach to teaching native library API doc

There are many books and online resources you can consult to learn a specific programming language. This section of the course will not try to teach you Java. However, to understand a bit about Java API documentation (which uses a document generator called Javadoc), you will need some understanding of Java.

To keep the focus on API documentation, we’ll take a documentation-centric approach to understanding Java. You’ll learn the various parts of Java by looking at a specific Javadoc file and sorting through the main components.

What you need to install

For this part of the course, you need to install the following:

To make sure you have Java installed, you can do the following:

Also, start Eclipse and make sure it doesn’t complain that you don’t have the JDK.

(Since we’ll just be using Java within the context of Eclipse, Windows users don’t need to add Java to their class path. But if you want to be able to compile Java from the command line, you can do this.)

7.12 Course summary

During this course, we explored the following:

You learned new tools and technologies, ways to publish and create interactive experiences for your audience. Now as you go forward into API documentation, hopefully you have a solid foundation to start.

If you have feedback on this course, please let me know.

7.2 Getting the Java source

About the sample project

In order to understand documentation for Java APIs, it helps to have a context of some sort. As such, I created a simple little Java application to demonstrate how the various tags get rendered into the Javadoc.

ACME project

The sample Java project is a little application about different tools that a coyote will use to capture a roadrunner. There are two classes (ACMESmartphone and Dynamite) and another class file called App that references the classes.

This program doesn’t really do anything except print little messages to the console, but it’s hopefully simple enough to be instructive in its purpose — to demonstrate different doc tags, their placement, and how they get rendered in the Javadoc.

Clone the source on Github

One of your immediate challenges to editing Javadoc will be to get the source code into your IDE. The acmeproject is here on Github.

First clone the source using version control. We covered some version control basics earlier in the course.

You can clone the source in a couple of ways:

git clone https://github.com/tomjohnson1492/acmeproject

Or click Clone in Desktop and navigate to the right path in Github Desktop.

(If you don’t want to clone the source, you could click Download ZIP and download the content manually.)

Open the right location in Eclipse

  1. After you’ve cloned or downloaded the Java project, open Eclipse. Go to File > New Java Project.
  2. Clear the Use default location check box, and then browse to where you cloned the Github project.

    Import existing Java project

  3. Click Finish.

    The Java files should be visible within your Eclipse IDE.

Maven projects

Java projects often have a lot of dependencies on packages that are third-party libraries or at least non-standard Java utilities. Rather than requiring users to download these additional packages and add them to their class manually, developers frequently use Maven to manage the packages.

Maven projects use a pom.xml file that defines the dependencies. Eclipse ships with Maven already installed, so when you import a Maven project and install it, the Eclipse Maven plugin will retrieve all of the project dependencies and add them to your project.

The sample project doesn’t use Maven, but I wanted to add a note about Maven here anyway because chances are if you’re getting a Java project from developers, you won’t import it in the way previously described. Instead, you’ll import it as an existing Maven project.

To import a Maven project into Eclipse:

  1. In Eclipse, go to File > Import > Maven > Existing Maven Projects and click Next.
  2. In the Root Directory field, click Browse and browse to the Java project folder (which contains the Maven pom.xml file) and then click Open. Then click Finish in the dialog box.
  3. In the Project Explorer pane in Eclipse, right-click the Java folder and select Run as Maven Install.

Maven retrieves the necessary packages and builds the project. If the build is successful, you will see a “BUILD SUCCESS” message in the console. You then use the source code in the built project.

7.3 Java in a nutshell

Overview

To understand the different components of a Javadoc, you have to first understand a bit about Java. Just being familiar with the names of the different components of Java will allow you to enter conversations and understand code at a high level. When you describe different aspects of sample code, knowing when to call something a class, method, parameter, or enum can be critical to your documentation’s credibility.

I’ll run you through a brief crash course in the basics. For more detail about learning Java, I recommend consulting lynda.com and safaribooksonline. Below I’ll focus on some basic concepts in Java that will be important in understanding the Javadoc tags and elements.

About Java

Java is one of the most commonly used languages because of its flexibility. Java isn’t tied to a specific platform because Java code compiles into byte code. The platform you deploy your code on contains a Java Virtual Machine (JVM) that interprets the byte code. Hence, through JVMs, different platforms can interpret and run the same Java code. This gives Java more flexibility across platforms.

Classes

Classes are templates or blueprints that drive pretty much everything in Java. It’s easiest to understand classes through an example. Think of a class like a general blueprint of a “bicycle.” There are many different types of bicycles (Trek bikes, Specialized bikes, Giant bikes, Raleigh bikes, etc.). But they’re all just different instances of the general class of a bicycle.

In Java, you start out by defining classes. Each class lives in its own file, and the class name begins with a capital letter. The file name matches the class name, which means you have just one class per file.

Each class can contain a number of fields (variables for the class) and methods (subroutines the class can do).

Before the class name, an access modifier indicates how the class can be accessed. Several options for access modifiers are:

- public: visible to all other classes.
- protected: visible within the same package and to subclasses.
- (no modifier): visible only within the same package.
- private: visible only within the same class (applies to members and nested classes).

Here’s an example of a class:

public class Bicycle {

    // code...

}

You mostly need to focus on public classes, since these are the classes that will be used by your audience. The public classes are the API of the library.

Methods

Methods are subroutines or actions that the class can do. For example, with a bicycle you can pedal, brake, and turn. A class can have as many methods as it needs.

Methods can take arguments, so there are parentheses () after the method name. The arguments are variables that are used within the code for that method. For example:

void add(int a, int b) {
    int sum = a + b;
}

Methods can return values. When a method finishes, the value can be returned to the caller of the method.

Before the method name, the method declares what type of data it returns. If the method doesn’t return anything, void is listed. Other options include String and int.

Here’s an example of some methods for our Bicycle class:

class Bicycle {

    void turn() {
        // code ...
    }

    void pedal(int rotations) {
        System.out.println("Your speed is " + rotations + " rotations per minute.");
    }

    int brake(int force, int weight) {
        int torque = force * weight;
        return torque;
    }
}

See how the brake method accepts two arguments — force and weight? These arguments are integers, so Java expects whole numbers here. (You must put the data type before each parameter in the method.) The arguments passed into this method are used to calculate the torque. The torque is then returned to the caller.

In Javadoc outputs, you’ll see methods divided into two groups:

Somewhere in your Java application, users will have something called a main method that looks like this:

	public static void main(String[] args) {
	
	}

Inside the main method is where you add your code to make your program run. This is where the Java Virtual Machine will look to execute the code.
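To tie these pieces together, here’s a minimal, hypothetical program the JVM can run. The App class name, the helper method, and the message are just placeholders for illustration:

```java
class App {

    // A simple helper method the program calls
    static String greet(String name) {
        return "Hello, " + name + "!";
    }

    // The JVM looks for this main method as the program's entry point
    public static void main(String[] args) {
        System.out.println(greet("coyote"));
    }
}
```

Running this class prints the greeting to the console.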

Fields

Fields are variables available within the class. A variable is a placeholder that is populated with a different value depending on what the user wants.

Fields declare their data types, because Java is “statically typed”: each variable’s type is fixed when it’s declared, so the data doesn’t take up more space than it needs. Some data types include byte, short, int, long, float, and double. Basically these are numbers or decimals of different sizes. You can also specify char, String, or boolean.

Here’s an example of some fields in class:

class Bicycle {

String brand;
int size;

}

Many times fields are “encapsulated” with getter-setter methods, which means their values are set in a protected way. Users call one method to set the field’s value, and another method to get the field’s value. This way you can avoid having users set improper values or incorrect data types for the fields.
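Here’s a hedged sketch of that getter-setter pattern, using illustrative names. The setter guards against improper values before storing them:

```java
class Bicycle {

    private int size; // private: the field can't be set directly from outside

    // Getter: the only way to read the field
    public int getSize() {
        return size;
    }

    // Setter: rejects improper values before storing them
    public void setSize(int size) {
        if (size > 0) {
            this.size = size;
        }
    }
}
```

A caller uses bicycle.setSize(22) to set the value and bicycle.getSize() to read it; an invalid value such as -5 is simply ignored by this setter.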

Fields that are constant throughout the Java project are typically given public static final modifiers. A fixed set of related constants can also be defined as an enum.
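As a sketch (the class and field names here are illustrative, not from the sample project), a constant and an enum might look like this:

```java
class BicycleSpecs {

    // A constant: public static final means one fixed value for the whole project
    public static final int MAX_GEARS = 27;

    // An enum: a fixed set of named values
    public enum FrameType { ROAD, MOUNTAIN, HYBRID }
}
```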

Objects

Objects are instances of classes. They are the Treks, Raleighs, Specialized, etc., of the Bicycle class.

If I wanted to use the Bicycle class, I would create an instance of the class. The instance of the class is called an object. Here’s what it looks like when you “instantiate” the class:


Bicycle myBicycle = new Bicycle();

You write the class name followed by the object name for the class. Then assign the object to be a new instance of the class. Now you’ve got myBicycle to work with.

The object inherits all of the fields and methods available to the class.

You can access fields and methods for the object using dot notation, like this:

Bicycle myBicycle = new Bicycle();

myBicycle.brand = "Trek";
myBicycle.pedal(90);

You probably won’t see many objects in the native library. Instead, the developers who implement the API will create objects. However, if you have a reference implementation or sample code on how to implement the API, you will see a lot of objects.

Constructors

Constructors are methods used to create new instances of the class. The default constructor for the class looks like the one above, with new Bicycle().

The constructor uses the same name as the class and is followed by parentheses (because constructors are methods).

Often classes have constructors that initialize the object with specific values passed in to the constructor.

For example, suppose we had a constructor that initialized the object with the brand and size:

public class Bicycle {

    public Bicycle(String brand, int size) {
        this.brand = brand;
        this.size = size;
    }

}

Now I use this constructor when creating a new Bicycle object:

Bicycle myBicycle = new Bicycle("Trek", 22);

It’s a best practice to include a constructor even if it’s just the default.
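For instance, a class can offer both a default constructor and one that takes values. Here’s a hedged sketch with illustrative fields and placeholder defaults:

```java
class Bicycle {

    String brand;
    int size;

    // Default constructor: initializes the object with placeholder values
    public Bicycle() {
        this.brand = "unknown";
        this.size = 0;
    }

    // Constructor that initializes the object with specific values
    public Bicycle(String brand, int size) {
        this.brand = brand;
        this.size = size;
    }
}
```

Callers can then write either new Bicycle() or new Bicycle("Trek", 22), depending on whether they have values to pass in.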

Packages

Classes are organized into different packages. Packages are like folders or directories where the classes are stored. Putting classes into packages helps avoid naming conflicts.

When you create your class, if it’s in a package called vehicles, you list this package at the top of the class:

package vehicles;

public class Bicycle{

}

Classes also set boundaries on access based on the package. If the access modifier did not say public, the class would only be accessible to members of the same package. If the access modifier were protected, the class would only be accessible to the class, package, and subclasses.

When you want to instantiate the class (and your file is outside the package), you need to import the package into your class, like this:

import vehicles.*;

	public static void main(String[] args) {
	
	}

When packages are contained inside other packages, you access the inner packages with a dot, like this:

import transportation.motorless.vehicles.*;

Here I would have a transportation package containing a package called motorless containing a package called vehicles. Package naming conventions often follow a reverse-domain pattern, such as com.yourcompany.project.

Maven handles package management for Java projects. Maven will automatically go out and get all the package dependencies for a project when you install a Maven project.

Exceptions

In order to avoid broken code, developers anticipate potential problems with exception handling. Exceptions basically say: if there’s an issue here, flag the error with this exception so the program can respond to the problem instead of simply crashing.

Different types of errors throw different exceptions. By identifying the type of exception thrown, you can more easily troubleshoot problems when code breaks because you know the specific error that’s happening.

You can identify a specific exception that a method throws by adding the keyword throws after the method’s parameter list:

public void ride() throws IOException {

}

When you indicate the exception here, you list the type of exception using a specific Javadoc tag (explained later).
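To show how this plays out in practice, here’s a hedged sketch of a method that declares an exception and a caller that handles it with a try-catch block (the method name and message are made up):

```java
import java.io.IOException;

class Ride {

    // The throws clause declares that this method can raise an IOException
    static void setSpeed(int speed) throws IOException {
        if (speed < 0) {
            throw new IOException("speed cannot be negative");
        }
    }

    public static void main(String[] args) {
        try {
            setSpeed(-5);
        } catch (IOException e) {
            // The caller handles the exception instead of letting the program crash
            System.out.println("Caught: " + e.getMessage());
        }
    }
}
```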

Inheritance

Some classes can extend other classes. This means a class inherits the properties of another class. When one class extends another class, you’ll see a note like this:

public class Bicycle extends Vehicle {

}

This means that Bicycle inherits all of the properties of Vehicle and then can add to them.
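As a quick sketch (the Vehicle class and its method are hypothetical), inheritance might look like this:

```java
class Vehicle {
    // Behavior shared by every vehicle
    int wheelCount() {
        return 4;
    }
}

class Bicycle extends Vehicle {
    // Bicycle overrides the inherited method...
    @Override
    int wheelCount() {
        return 2;
    }

    // ...and adds behavior of its own
    String ringBell() {
        return "Ring!";
    }
}
```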

Interfaces

An interface is a class-like type whose methods have no code inside them. Interfaces are intended to be implemented by other classes, which supply their own code for the methods. Interfaces are a way of formalizing a contract for a class that will have a lot of implementations, when you want all the implementing classes to standardize on common methods.
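A minimal sketch, with hypothetical names:

```java
interface Vehicle {
    // No body here: the interface only defines the method signature
    int wheelCount();
}

class Bicycle implements Vehicle {
    // The implementing class supplies the actual code
    public int wheelCount() {
        return 2;
    }
}
```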

JAR files and WAR files

The file extension for a Java source file is .java; when compiled by the Java compiler (part of the Java Development Kit), the file becomes .class. The .class file contains byte code, which means only the Java Virtual Machine (JVM) can read it, not humans.

Developers often package up java files into a JAR file, which is like a zip file for Java projects. When you distribute your Java files, you’ll likely provide a JAR file that the developer audience will add to their Java projects.

Developers will add their JAR to their class path to make the classes available to their project. To do this, you right-click your project and select Properties. In the dialog box, select Java Build Path, then click the Libraries tab. Then click Add JARs and browse to the JAR.

When you deliver a JAR file, developers can use the classes and methods available in the JAR. However, the JAR will not show them the source code, that is, the raw Java files. For this, users will consult the Javadoc.

If you’re distributing a reference implementation that consists of a collection of Java source files (so that developers can see how to integrate your product in Java), you’ll probably just send them a zip file containing the project.

A WAR file is a web application archive. A WAR is a compiled application that developers deploy on a server to run an application. Whereas the JAR is integrated into a Java project while the developers are actively building the application, the WAR is the deployed program that you run from your server.

That’s probably enough Java to understand the different components of a Javadoc.

Summary

Here’s a quick summary of the concepts we talked about:

7.4 Create a Javadoc

Javadoc overview

Javadoc is the standard output format for Java APIs, and it’s really easy to build a Javadoc. The Javadoc is generated through something called a “doclet.” Different doclets can parse the Java annotations in different ways and produce different outputs. But by and large, almost all Java API documentation uses Javadoc. It’s standard and familiar to Java developers.

Characteristics of Javadoc

Here are some other characteristics of Javadoc:

Generate a Javadoc

  1. Go to File > Export.
  2. Expand Java and select Javadoc. Then click Next.
  3. Select your project and package. Then in the right pane, select the classes you want included in the Javadoc. Don’t select the class that contains your main method.

    Generating a Javadoc

  4. Select which visibility option you want: Private, Package, Protected, or Public. Generally you select Public.

    Your API probably has a lot of helper or utility classes used on the backend, but only a select number of classes will actually be used by your developer audience. These classes are made public. It’s the public classes that your developer audience will use that form the API aspect of the class library.

  5. Make sure the Use standard doclet radio button is selected.
  6. Click the Browse button and select the output location where you want the Javadoc generated.
  7. Click Next.

    Javadoc next screen

    Here you can select if you want to omit some tags, such as @author and @deprecated. Generally you don’t include the @author tag, since it may only be important internally, not externally. You can also select different options in the Javadoc frame. If you have a custom stylesheet, you can select it here. Most likely you would only make superficial style changes such as with colors.

  8. Click Next.

    Overview page

    Here you can select an HTML page that you want to be your overview page in the Javadoc. You can select any HTML page and it will be included in the index.

  9. Click Finish.

Javadoc and error checking

Javadoc also checks your tags against the actual code. If you have parameters, exceptions, or returns that don’t match up with the parameters, exceptions, or returns in your actual code, then Javadoc will show some warnings.

Try removing a parameter from a method and generate the Javadoc again. Make sure the console window is open.

7.5 Javadoc tags

About Javadoc tags

Javadoc is a document generator that looks through your Java source files for specific annotations. It parses out the annotations into the Javadoc output. Knowing the annotations is essential, since this is how the Javadoc gets created.

The following are the most common tags used in Javadoc. Each tag has a word that follows it. For example, @param latitude means the parameter is “latitude”.

@author: A person who made a significant contribution to the code. Applied only at the class, package, or overview level. Not included in the Javadoc output by default. It’s not recommended to include this tag, since authorship changes often.
@param: A parameter that the method or constructor accepts. Write the description like this: @param count Sets the number of widgets you want included.
@deprecated: Lets users know the class or method is no longer used. This tag is positioned prominently in the Javadoc. Accompany it with a @see or {@link} tag as well.
@return: What the method returns.
@see: Creates a “see also” list. Use {@link} for the content to be linked.
{@link}: Used to create links to other classes or methods. Example: {@link Foo#bar} links to the method bar in the class Foo. To link to a method in the same class, just use #bar.
@since: The version in which the feature was added, for example, @since 2.0.
@throws: The kind of exception the method throws. Note that your code must declare the thrown exception for this tag to validate; otherwise, Javadoc produces an error. @exception is an alternative tag.
@Override: A Java annotation (not a Javadoc tag) used with interfaces and abstract classes; it checks that the method actually overrides another method.

Comments versus Javadoc tags

A general comment in Java code is signaled like this:

// sample comment...

/*
sample comment
*/

Javadoc does nothing with these comments.

To include content in Javadoc, you add two asterisks at the start, before the class or method:

/**
*
*
*
*
*/

(In Eclipse, if you type /** and press return, it autofills the rest of the syntax.)

The format for adding the various elements is like this:

/**
* [short description]
* <p>
* [long description]
*
* [author, version, params, returns, throws, see, other tags]
* [see also]
*/

Here’s a real example of Javadoc comments for a method.

/**
* Zaps the roadrunner with the amount of volts you specify.
* <p>
* Do not exceed more than 30 volts or the zap function will backfire.
* For another way to kill a roadrunner, see the {@link Dynamite#blowDynamite()} method.
*
* @exception IOException if you don't enter a valid number for the voltage
* @param voltage the number of volts you want to send into the roadrunner's body
* @see #findRoadRunner
* @see Dynamite#blowDynamite
*/
public void zapRoadRunner(int voltage) throws IOException {
   if (voltage < 31) {
       System.out.println("Zapping roadrunner with " + voltage + " volts!!!!");
   }
   else {
    System.out.println("Backfire!!! zapping coyote with 1,000,000 volts!!!!");
   }
}

Where the Javadoc tag goes

You put the Javadoc description and tags before the class or method (no need for any space between the description and class or method).

What elements you add Javadoc tags to

You add Javadoc tags to classes, methods, and fields.

Public versus private modifiers and Javadoc

Javadoc only includes classes, methods, etc. marked as public. Private elements are not included. If you omit public, the default is that the class or method is available to the package only. In this case, it is not included in Javadoc.
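As a sketch (the class and method names are made up), only the public method below would appear in the Javadoc when you generate with Public visibility; the others are excluded:

```java
class Visibility {

    /** Included in the Javadoc: the method is public. */
    public static String visible() {
        return "documented";
    }

    /** Excluded: no modifier means package-private. */
    static String packageOnly() {
        return "not in Javadoc";
    }

    /** Excluded: the method is private. */
    private static String hidden() {
        return "not in Javadoc";
    }
}
```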

The description

There’s a short and long description. Here’s an example showing how the description part is formatted:

/**
* Short one line description.
* <p>
* Longer description. If there were any, it would be
* here.
* <p>
* And even more explanations to follow in consecutive
* paragraphs separated by HTML paragraph breaks.
*
* @param variable Description text text text.
* @return Description text text text.
*/
public int methodName (...) {
// method body with a return statement
}

(This example comes from the Wikipedia entry on Javadoc.)

The short description is the first sentence, which gets used as the summary for the class or method in the Javadoc. After a period, the parser moves the rest of the description into the long description. Use <p> to signal the start of a new paragraph. You don’t need to surround the paragraphs with opening and closing <p> tags – the Javadoc compiler automatically adds them.

Also, you can use HTML in your descriptions, such as an unordered list, code tags, bold tags, or others.

After the descriptions, enter a blank line (for readability), and then start the tags. You can’t add any more description content below the tags. Note that only methods and classes can have tags, not fields. Fields (variables) just have descriptions.

Note that the first sentence is much like the shortdesc element in DITA. It’s supposed to be a summary of the entire class or method. If one of your words has a period in it (like Dr. Jones), then you must replace the space following the period with a non-breaking space, writing Dr.&nbsp;Jones, so the parser doesn’t end the summary there.

Avoid using links in that first sentence. After the period, the next sentence shifts to the long paragraph, so you really have to load up that first sentence to be descriptive.

The verb tense should be present tense, such as gets, puts, displays, calculates…

What if the method is so obvious (for example, printPage) that your description (“prints a page”) becomes obvious and looks stupid? Oracle says in these cases, you can omit saying “prints a page” and instead try to offer some other insight:

Add description beyond the API name. The best API names are “self-documenting”, meaning they tell you basically what the API does. If the doc comment merely repeats the API name in sentence form, it is not providing more information. For example, if method description uses only the words that appear in the method name, then it is adding nothing at all to what you could infer. The ideal comment goes beyond those words and should always reward you with some bit of information that was not immediately obvious from the API name. – How to write javadoc comments

Avoid @author

Commenting on Javadoc best practices, one person says to avoid @author because it easily slips out of date, and source control provides a better indication of the last author (see Javadoc coding standards).

Order of tags

Oracle says the order of the tags should be as follows:

@author (classes and interfaces)
@version (classes and interfaces)
@param (methods and constructors)
@return (methods)
@throws (@exception is an older synonym)
@see
@since
@serial
@deprecated

@param tags

@param tags only apply to methods and constructors, both of which take parameters.

After the @param tag, add the parameter name, and then a description of the parameter, in lowercase, with no period, like this:

@param url the web address of the site

The parameter description is a phrase, not a full sentence.

The order of multiple @param tags should mirror their order in the method or constructor.

Stephen Colebourne recommends adding an extra space after the parameter name to increase readability (and I agree).
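Putting these conventions together, a hypothetical method with two parameters might be documented like this (note the aligned descriptions and the tag order matching the signature):

```java
class Brakes {

    /**
     * Calculates the braking torque.
     *
     * @param force  the force applied to the brake lever
     * @param weight the combined weight of the rider and bike
     * @return the resulting torque
     */
    public static int brake(int force, int weight) {
        return force * weight;
    }
}
```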

As far as including the data type in the parameter description, Oracle says:

By convention, the first noun in the description is the data type of the parameter. (Articles like “a”, “an”, and “the” can precede the noun.) An exception is made for the primitive int, where the data type is usually omitted. – How to write doc comments using Javadoc

The example they give is as follows:

@param ch the character to be tested

However, the data type is visible from the parameters in the method. So even if you don’t include the data types, it will be easy for users to see what they are.

Note that you can have multiple spaces after the parameter name so that your parameter definitions all line up.

@param tags must be provided for every parameter in a method or constructor. Failure to do so will produce a warning when you render the Javadoc.

Note that usually classes don’t have parameters. There is one exception: generics. Generic classes are classes that work with different types of objects. The type is specified as a parameter in the class in angle brackets: <>. Although the Javadoc guidance from Oracle doesn’t mention them, you can add a @param tag for a generic class to note its type parameters. See this StackOverflow post for details. Here’s an example from that page:

/**
 * @param <T> This describes my type parameter
 */
class MyClass<T> {

}

@return tag

Only methods return values, so only methods receive a @return tag. If a method’s return type is void, it doesn’t return anything. If the return type isn’t void, then you must include a @return tag to avoid an error when you compile the Javadoc.

@throws tag

You add @throws tags to methods or classes only if the method or class throws a particular kind of error.

Here’s an example:

@throws IOException if your input format is invalid

Stephen Colebourne recommends starting the description of the throws tag with an “if” clause for readability.

The @throws feature should normally be followed by “if” and the rest of the phrase describing the condition. For example, “@throws if the file could not be found”. This aids readability in source code and when generated.

If you have multiple @throws tags, arrange them alphabetically.
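Here's a hypothetical method that puts these conventions together. The two @throws tags are arranged alphabetically (FileNotFoundException before IOException), and each description starts with an "if" clause:

```java
import java.io.FileNotFoundException;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class ConfigReader {

    /**
     * Reads the configuration file at the given path.
     *
     * @param path  the location of the configuration file
     * @return the file contents as a single string
     * @throws FileNotFoundException if no file exists at the given path
     * @throws IOException if the file exists but cannot be read
     */
    public static String read(Path path) throws IOException {
        if (!Files.exists(path)) {
            throw new FileNotFoundException("No file at " + path);
        }
        return Files.readString(path);
    }
}
```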

Doc comments for constructors

It’s a best practice to include a constructor in a class. However, if the constructor is omitted, Javadoc still lists the default constructor in the output, but without any description.

Constructors have @param tags but not @return tags. Everything else is similar to methods.
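A minimal sketch of a documented constructor might look like this (the class and parameter names are made up for the example):

```java
public class WeatherStation {

    private final String city;
    private final double latitude;

    /**
     * Creates a station for the given city at the given latitude.
     *
     * @param city      the name of the city the station reports for
     * @param latitude  the station's latitude in decimal degrees
     */
    public WeatherStation(String city, double latitude) {
        this.city = city;
        this.latitude = latitude;
    }

    public String getCity() {
        return city;
    }
}
```

Note the @param tags but no @return tag, since constructors don't return values.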

Doc comments for fields

Fields have descriptions only. You would only add doc comments to a field if the field were something a user would use.
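For example, a public constant that users rely on would get a short description, as in this invented sketch:

```java
public class Conversion {

    /** The number of millimeters in one inch. */
    public static final double MM_PER_INCH = 25.4;
}
```

A private field used only internally wouldn't normally need a doc comment, since it doesn't appear in the rendered Javadoc by default.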

Cases where you don’t need to add doc comments

Oracle says there are three scenarios where doc comments are inherited, so you don’t need to write them:

When a method in a class overrides a method in a superclass
When a method in an interface overrides a method in a superinterface
When a method in a class implements a method in an interface
– How to write Javadoc comments

@see tags

The @see tag provides a see-also reference. There are various ways to denote what you’re linking to in order to create the link. If you’re linking to a field, constructor, or method within the same class, use #.

If you’re linking to another class, put that class name first followed by the # and the constructor, method, or field name.

If you’re linking to a class in another package, put the package name first, then the class, and so on. See this sample from Oracle:

@see #field
@see #Constructor(Type, Type...)
@see #Constructor(Type id, Type id...)
@see #method(Type, Type,...)
@see #method(Type id, Type, id...)
@see Class
@see Class#field
@see Class#Constructor(Type, Type...)
@see Class#Constructor(Type id, Type id)
@see Class#method(Type, Type,...)
@see Class#method(Type id, Type id,...)
@see package.Class
@see package.Class#field
@see package.Class#Constructor(Type, Type...)
@see package.Class#Constructor(Type id, Type id)
@see package.Class#method(Type, Type,...)
@see package.Class#method(Type id, Type, id)

How to write Javadoc comments

You can create links to other classes and methods using the {@link} tag.

Here’s an example from Javadoc coding standards on making links:

/**
* First paragraph.
* <p>
* Link to a class named 'Foo': {@link Foo}.
* Link to a method 'bar' on a class named 'Foo': {@link Foo#bar}.
* Link to a method 'baz' on this class: {@link #baz}.
* Link specifying text of the hyperlink after a space: {@link Foo the Foo class}.
* Link to a method handling method overload {@link Foo#bar(String,int)}.
*/
public ...

To link to another method within the same class, use the format {@link #baz}. To link to a method in another class, use the format {@link Foo#baz}. However, don’t over-hyperlink. When simply referring to other classes, you can use <code> tags instead.

To change the linked text in a @see tag, put the label after the reference, like this: @see #baz Baz method.

Previewing Javadoc comments

In Eclipse, you can use the Javadoc tab at the bottom of the screen to preview the Javadoc information included for the class you’re viewing.

7.6 Exploring the Javadoc output

About the Javadoc output

The Javadoc output hasn’t changed much in the past 20 years, so in some sense it’s predictable and familiar. On the other hand, the output is dated and lacks some critical features, like search, or the ability to add more pages. Anyway, it is what it is.

Class summary

The class summary page shows a short version of each of the classes. The description you write for each class (up to the period) appears here. It’s kind of like a quick reference guide for the API.

Class summary

You click a class name to dive into the details.

Class details

When you view a class page, you’re presented with a brief summary of the fields, constructors, and methods for the class. Again this is just an overview. When you scroll down, you can see the full details about each of them.

Full class details (from http://docs.oracle.com/javase/7/docs/api/)

Other navigation

If you click Package at the top, you can also browse the classes by package. Or you can go to the classes by clicking the class name in the left column. You can also browse everything by clicking the Index link.

Left pane

For more information about how the Javadoc is organized, click the Help button.

7.6 Making edits to Javadoc tags

Common scenarios

It’s pretty common for developers to add Javadoc tags and brief comments as they’re creating Java code. In fact, if they don’t, the IDE will usually flag a warning.

However, the comments that developers add are often poor, incomplete, or incomprehensible. A tech writer’s job with Javadoc is often to edit the content that’s already there: adding clarity and structure, inserting the right tags, and more.

When you make edits to Javadoc content, look for the following:

In this exercise, you’ll make some edits to the Javadoc tags and see how they get rendered in the output.

Make some edits

Make some edits to a class and method. Then regenerate the Javadoc and find your changes.

7.7 Doxygen, another document generator

Doxygen overview

An alternative to Javadoc is Doxygen. Doxygen works much like Javadoc, except that it can process more languages (Java, C++, C#, and more). Doxygen is most commonly used with C++. Additionally, there’s a GUI tool (called Doxywizard) that makes it easy to generate the output.

You can download the Doxywizard tool when you install Doxygen. See the Doxygen download page for more information.

Here’s Doxygen’s front-end GUI generator (Doxywizard):

Doxygen front-end GUI generator

Here’s the Doxygen output:

Doxygen Sample

By the way, you don’t need to use the wizard. You can also just generate Doxygen through a configuration file. This is how developers typically run Doxygen builds from a server.

In contrast to Javadoc, Doxygen also allows you to incorporate external files written in Markdown. And Doxygen provides a search feature. These are two features that Javadoc lacks.

Doxygen is maintained by a single developer and, like Javadoc, hasn’t changed much over the years. In my opinion, the interface is highly dated and kind of confusing.

Integrating builds automatically

In a lot of developer shops, document generators are integrated into the software build process automatically. Doxygen allows you to create a configuration file that can be run from the command line (rather than using the frontend GUI). This means when developers build the software, the reference documentation is automatically built and included in the output.

Other document generators

You don’t need to limit yourself to Javadoc or Doxygen. There are dozens of document generators for a variety of languages. Just search for “document generator + {programming language}” and you’ll find plenty. However, don’t get too excited about this genre of tools. Document generators are somewhat old, produce static front ends that look dated, are often written by engineers for other engineers, and aren’t very flexible.

Perhaps the biggest frustration of document generators is that you can’t really integrate the rest of your documentation with them. You’re mostly stuck with the reference doc output. You’ll need to also generate your how-to guides and other tutorials, and then link to the reference doc output. As such, you don’t end up with a single integrated experience of documentation. Additionally, it will be hard to create links between the two outputs.

7.8 Creating non-ref docs with native library APIs

About non-reference docs

Although much attention tends to be given to the reference documentation for APIs, the bulk of what technical writers do with native library API docs is provide non-reference documentation. This is the stuff that engineers rarely write. Engineers will throw a quick description of a class into a file and generate a Javadoc, and they’ll hand that Javadoc to the user as if it represents a complete set of documentation. But reference docs don’t tell even half the story.

Reference docs can be an illusion of real docs

Jacob Kaplan Moss says that reference docs can be an illusion:

… auto-generated documentation is worse than useless: it lets maintainers fool themselves into thinking they have documentation, thus putting off actually writing good reference by hand. If you don’t have documentation just admit to it. Maybe a volunteer will offer to write some! But don’t lie and give me that auto-documentation crap. – Jacob Kaplan Moss

Other people seem to have similar opinions:

Auto-generated documentation that documents each API end-point directly from source code have their place (e.g., its great for team that built the API and its great for a reference document) but hand-crafted quality documentation that walks you through a use case for the API is invaluable. It should tell you about the key end-points that are needed for solving a particular problem and it should provide you with code samples.

In general, document generators don’t tell you a whole lot more than you would discover by browsing the source code itself. Some people even refer to auto-generated docs as a glorified source-code browser.

Reference docs are feature-based, not task-based

One of the main problems with reference documentation is that it’s feature based rather than task based. It’s the equivalent of going tab-by-tab through an interface and describing what’s on each tab, what’s in each menu, and so on. We know that’s a really poor way to approach documentation, since users organize their mental model by the tasks they want to perform.

When you write API documentation, consider the tasks that users will want to do, and then organize your information that way. Reference the endpoints as you explain how to accomplish the tasks. Users will refer to the reference docs as they look for the right parameters, data types, and other class details. But the reference docs won’t guide them through tasks alone.

7.9 My biggest tip: Test everything

Testing overview

Walking through all the steps in documentation yourself, as a technical writer, is critical to producing good documentation. But the more complex setup you have, the more difficult it can be to walk through all of the steps. Especially with developer documentation, the tasks required to test out your documentation are not trivial. Still, they are essential to creating user-centered documentation. This is my biggest tip for having success as a technical writer creating API documentation: test everything.

Testing everything

Step 1: Set up a test environment

The first step to testing your instructions is to set up a test environment. Usually the QA team already has this environment in place, so sometimes all you need to do is ask how to access it. Get the appropriate URLs, login IDs, roles, etc. from your QA team. Ask them if there’s anything you shouldn’t alter or delete, since sometimes the same testing environment is shared among groups. Without this test environment, it will be really difficult to make any progress in testing your instructions.

Although the test environment seems like a no-brainer, it really isn’t. A lot of times, developers test systems on their local machines, so setting up a web instance requires someone to get a server, install the latest build, and give you access to it.

Other times the platform requires extensive architecture to set up. For example, you might have to build a sample Java app to interact with the system. Setting up these resources on your own machine may prove challenging.

If you’re documenting a hardware product, you may not get a test instance of the product to play with. I once worked at a government facility documenting a million-dollar storage array. The only time I ever got to see it was by signing into a special data server room environment, accompanied by an engineer, who wouldn’t dream of letting me actually touch the thing, much less swap out a storage disk, run commands in the terminal, replace a RAID, or do some other task for which I was supposedly writing instructions.

Many times the QA and engineering teams work on local instances of the system, meaning they build the system on their local machines and run through test code there. You may need to submit a special request for an engineer to put the latest build on a server you can access.

Sometimes you may also have to jump over security hurdles. For example, connections to Amazon Web Services from internal systems may require you to go through an intermediary server. So to connect to the AWS test instance, you would have to SSH into the intermediary server, and then connect from the intermediary to AWS.

You may also need to construct certain YML files necessary to configure a server with the settings you want to test. Understanding exactly how to create the YML files, the directories to upload them to, the services to stop and restart, and so on can require a lot of asking around for help.

When you’re ready to submit a test call (assuming you have a REST API), you can probably use cURL, which makes it easy, but you’ll no doubt need to include an authorization in the header of the call. The authorization often uses a hash of a combination of factors.

Can you see how just getting the test system set up and ready can be challenging? Still, if you want to write good documentation, this is essential. Good developers know and recognize this need, and so they’re usually somewhat accommodating in helping set up a test environment to get you started.

For example, I asked an engineer to explain, step-by-step, how I was to connect to an intermediary jump host server required at my work. This server required a configuration that controlled the responses from the API. After explaining how to do it, he made sure that I could successfully connect from a terminal prompt on my own, and I didn’t let the discussion go until I was successful.

Never let a developer say “Oh, you just do a, b, and c.” Then you go back to your station and nothing works, or it’s much more complicated than he or she let on.

After I could connect successfully to the intermediary, I documented it in great detail. I even included a list of error messages I encountered and added them to a troubleshooting section.

In setting up the test system, I also learned that part of my documentation was unnecessary. I thought that field engineers would need to configure a database with a particular code themselves, when it turns out that IT operations really does this configuration. I didn’t realize this until I started to ask how to configure the database, and an engineer (a different one from the engineer who said the database would need configuration) said that my audience wouldn’t be able to do that configuration, so it shouldn’t be in the documentation.

It’s little things like that, which you learn as you’re going through the process yourself, that make accessing a test environment vital to good documentation.

Step 2: Test the instructions yourself

After setting up the test environment, the next step is to test your instructions. Again, this isn’t rocket science, but it’s critical to producing good documentation.

One benefit to testing your instructions is that you can start to answer your own questions. Rather than taking the engineer’s word for it, you can run a call, see the response, and learn for yourself.

In fact, a lot of times you can confront an engineer and tell him or her that something isn’t working correctly, or you can start to make suggestions for improving things. You can’t do this if you’re just taking notes about what engineers say, or if you’re just writing documentation from specs.

When things don’t work, you can identify and log bugs. This is helpful to the team overall and increases your own credibility with the engineers. It’s also immensely fun to log a bug against an engineer’s code, because it shows that you’ve discovered flaws and errors in the system.

Other times the bugs are within your own documentation. For example, I had one of my parameters wrong. Instead of verboseMode, the parameter was simply verbose. This is one of those details you don’t discover unless you test something, find it doesn’t work, and then set about figuring out what’s wrong.

Wrestling with assumptions

While testing your documentation, you must recognize that what may seem clear to you may be confusing to another, because all documentation builds on assumptions that may or may not be shared with your audience.

For example, you may assume that users already know how to SSH onto a server, create authorizations in REST headers, use cURL to submit calls, and so on.

Usually documentation doesn’t hold a user’s hand from beginning to end, but rather jumps into a specific task that depends on concepts and techniques that you assume the user already knows.

Making assumptions about concepts and techniques your audience knows can be dangerous. These assumptions are exactly why so many people get frustrated by instructions and throw them in the trash.

For example, my 10-year-old daughter is starting to cook. She feels confident that if the instructions are clear, she can follow almost anything (assuming we have the ingredients to make it). However, she says sometimes the instructions tell her to do something that she doesn’t know how to do — such as sauté something.

To sauté an onion, you cook onions in butter until they turn soft and translucent. To julienne a carrot, you cut them in the shape of little fingers. To grease a pan, you spray it with Pam or smear it with butter. To add an egg white only, you use the shell to separate out the yolk. To dice a pepper, you chop it into little tiny pieces.

The terms can all be confusing if you haven’t done much cooking. Sometimes you must knead bread, or cut butter, or fold in flour, or add a pinch of salt, or add a cup of packed brown sugar, or add some confectioners sugar, and so on.

Sure, these terms are cooking 101, but if you’re 10 years old and baking for the first time, this is a world of new terminology. Even measuring a cup of flour is difficult: does it have to be exact, and if so, how do you get it exact? You could use the flat edge of a knife to knock off the top peaks, but someone has to teach you how to do that. When my 10-year-old first started measuring flour, she went to great lengths to get it to exactly 1 cup.

The world of software instruction is full of similarly confusing terminology. For the most part, you have to know your audience’s general level so that you can assess whether something will be clear.

For example, does a user know how to clear their cache, or update Flash, or ensure the JRE is installed, or clone a git repository? Do the users know how to open a terminal, deploy a web app, import a package, cd on the command line, or chmod file permissions?

This is why checking over your own instructions by walking through the steps yourself becomes problematic. The first rule of usability is to know the user, and also to recognize that you aren’t the user.

With developer documentation, usually the audience’s skill level is far beyond my own, so adding little notes that clarify obvious instruction (such as saying that the $ in code samples signals a command prompt and shouldn’t be typed in the actual command) isn’t essential. But adding these notes can’t hurt, especially when some users of the documentation are product marketers rather than developers.

The solution to addressing different audiences doesn’t involve writing entirely different sets of documentation, as some have suggested. You can link potentially unfamiliar terms to a glossary or reference section where beginners can ramp up on the basics. You can likewise “sidebar out” into special advanced topics for those scenarios when you want to give some power-level instruction but don’t want to hold a user’s hand through the whole process. You don’t have to offer just one path through the doc set.

The problem, though, is learning to see the blind spots. If you’re the only one testing your instructions, they might seem perfectly clear to you. I think most developers also feel this way after they write something. They usually take the approach of rendering the instruction in the most concise way possible, assuming their audience knows exactly what they do.

But the audience doesn’t know exactly what you know, and although you might feel like what you’ve written is crystal clear, because c’mon, everyone knows how to clear their cache, in reality you won’t know until you test your instructions against an audience.

Step 3: Test the instructions against an audience

Almost no developer can push out code without running it through QA, but for some reason technical writers are usually exempt from QA processes. In some cases tech docs are “tested” by QA, but whenever this happens, I usually get strange feedback, as if a robot were testing my instructions.

QA people test to see whether the instructions are accurate. They don’t test whether a user would understand the instructions or whether concepts are clear. And QA team members are poor testers because they already know the system too well in the first place.

Before publishing, every tech writer should submit his or her instructions through a testing process, a “quality assurance” process in the most literal sense of the word.

Strangely, few IT shops actually have a consistent structure for this doc-quality-assurance role. You wouldn’t dream of setting up an IT shop without a quality assurance group for developers, but few technical writers have access to a dedicated editor or to a usability group to ensure quality.

When there are editors for a team, the editors usually play a style-only role, checking to make sure the content conforms to a consistent voice, grammar, and diction in line with the company’s official style guide.

While conforming to the same style guide is important, it’s not as important as having someone actually test the instructions. Users can overlook poor grammar — blogs and YouTube are proof of that. But users can’t overlook instructions that don’t work, that don’t speak to the real steps and challenges they face.

I haven’t had an editor for years. In fact, the only time I’ve ever had an editor was at my first tech writing job, where we had a dozen writers. But the editor there focused mostly on style.

I remember one time our editor was on vacation, and I got to play the editor role. I tried testing out the instructions and found that about a quarter of the time, I got lost. The instructions either missed a step, needed a screenshot, built on assumptions I didn’t know, or had other problems.

The response, when you give instructions back to the writer, is usually the old “Oh, users will know that.” The problem is that we’re usually so disconnected from the actual user experience (we rarely see users trying out docs) that we can’t recognize the “users-will-know-how-to-do-that” statement for the fallacy that it is.

How do you test instructions without a dedicated editor, without a group of users, and without any formal structure in place? At the least, you can ask a colleague to try out the instructions.

Ask a colleague to try out your instructions

Other technical writers are usually both curious and generous when you ask them to try out instructions. And when other technical writers start to walk through your steps, they recognize discrepancies in style that are worthy of discussion in themselves.

Although usually other technical writers don’t have time to go through your instructions, and they usually share your same level of technical expertise, having someone test your instructions is better than no one.

Tech writers are good testing candidates precisely because they are writers instead of developers. As writers, they usually lack the technical assumptions that a lot of developers have (those assumptions that can cripple documentation).

Additionally, tech writers who test your instructions know exactly the kind of feedback you’re looking for. They won’t feel ashamed and dumb if they get stuck and can’t follow your instructions. They’ll usually let you know where your instructions are lacking. “I got confused right here because I couldn’t find the X button, and I didn’t know what Y meant,” they’ll say. They know what you need to hear.

In general, it’s always good to have a non-expert test something rather than an expert, because experts can often compensate for shortcomings in documentation with their own expertise. Novices can’t compensate.

Another reason tech writers make good testers is because this kind of activity fosters good team building and knowledge sharing. At a previous job, I worked in a large department that had, at one time, about 30 UX engineers. The UX team held periodic meetings during which they submitted a design for general feedback and discussion.

By giving other technical writers the opportunity to test your documentation, you create the same kind of sharing and review of content. You build a community rather than having each technical writer always work on independent projects.

What might come out of a user test is more than highlighting shortcomings about poor instruction. You may bring up matters of style, or you might foster great team discussions through innovative approaches to your help. Maybe you’ve integrated a glossary tooltip that is simply cool, or an embedded series button. When other writers test your instructions, they not only see your demo, they understand how helpful that feature is in a real context.

Should you watch when users test?

One question in testing is whether you should watch users as they work through the instructions. Undeniably, when you watch users, you put some pressure on them. Users don’t want to look stupid when they’re following what should be relatively simple instructions.

But if you don’t watch users, the whole testing process is much more nebulous. Exactly when is a user trying out the instructions? How much time are they spending on the tasks? Are they asking others for help, googling terms, and going through a process of trial and error to arrive at the right answer?

If you watch a user, you can see exactly where they’re getting stuck. Usability experts prefer it when users actually share their thoughts in a running monologue. They’ll tell users to let them know what’s running through their head every now and then.

In other usability setups, you can turn on a webcam to capture the user’s expression while you view the screen in a GoToMeeting screen share. This allows you to give the user some privacy while also watching them directly.

In my documentation projects, I’m sorry to admit that I’ve veered far away from usability testing. It’s been years since I’ve actually tested my documentation this way, despite the eye-opening benefits I get when I do it. (Writing about it now, I’m making serious plans to mend my ways.)

At some point in my career, someone talked me into the idea of “agile testing.” When you release your documentation, you’re basically submitting it for testing. Each time you get a question from users, or a support incident gets logged, or someone sends an email about the doc, you consider that feedback and potential bugs to log against the documentation. (And if you don’t hear anything, then the doc must be great, right?)

Agile testing methods are okay. You should definitely act on this feedback. But hopefully you can catch errors before they get to users. The whole point of any kind of quality assurance process is to ensure users get a quality product.

Additionally, the later in the software cycle you catch an error, the more costly it is. For example, suppose you discover that a button or label or error message is really confusing. It’s much harder to change it post-release than pre-release. I particularly hate it when the interface has typos or misspellings that I have to follow in documentation commands just to keep the two in sync. (For example, Click the Multi tenancy button.)

Enjoyment benefits from testing

One of the main benefits to testing is that it makes writing documentation much more enjoyable. There’s nothing worse than ending up as a secretary for engineers, where your main task is to listen to what engineers say, write up notes, send it to them for review, and listen to their every word as if they’re emperors who give you a thumbs up or thumbs down. That’s not the kind of technical writing work that inspires or motivates me.

Instead, when I can walk through the instructions myself and confirm whether they work, that’s when things become interesting. Actually, the more you learn about the knowledge domain itself, the more engaging the work of technical writing becomes.

In contrast, if you just stick to technical editing, formatting, publishing, and curating — these activities are all worthwhile, but they are not fulfilling as a career. Only when you get your synapses firing in the knowledge domain you’re writing in, as well as your hands dirty testing and trying out all the steps and processes, does the work of technical writing start to become engaging.

Accounting for the necessary time

Note well that it takes time to try out the instructions yourself and with users. It probably doubles or triples the documentation time. You don’t always have this time before release. I can’t say that I’ve tested out all the different parts of my documentation, because I have four different programming languages across various platforms, but for the doc that I do test, it makes a world of difference.

One way to shorten the testing period is by leveraging the test scripts used by your QA team. These test cases often give you a clear picture about the functionality provided by the system, along with sample calls to see if each piece works. QA scripts are usually much more thorough than you need, but they’re helpful in pointing you in the right direction.

8.2 Conclusion

Congrats, you finished

Congratulations, you finished the Publishing API Docs course. By now, you should have a solid understanding of the variety and possibilities for publishing API documentation.

At this point, think about your requirements, your audience, and try to pick the right tools for your situation.

Questions to consider

Here are a few questions to consider:

What publishing tools did you choose?

I’m curious to know what publishing tools you chose. There are many options, and I could only cover a fraction of them during this course. Drop me a note to let me know what publishing tool or platform you’re using, and how it’s working out.

8.5 Answers

This page provides answers to some of the exercises during the course.

cURL parameters

Dot notation

Here’s what your dot notation should look like:

var sarahjson = john.children[0].child1;
var greenjson = john.eyes;
var nikejson = john.shoes.brand;
var goldenrodjson = john.favcolors[1];
var jimmyjson = john.children[1].child2;

Dot Notation: Windspeed

data.query.results.channel.wind.speed