What should your documentation metrics look like? Q&A with Zoomin about their 2024 Technical Content Benchmark Report

by Tom Johnson on Apr 14, 2024
categories: ai technical-writing

Zoomin recently released a Technical Content Benchmark Report for 2024. This is the company's second benchmark report on documentation metrics, analyzing data from 97.6 million user sessions across 136 countries. The report provides insights into average metrics like page views, bounce rates, time on page, GPT search usage, and more. In this Q&A, Rita Khait from Zoomin discusses how to interpret and use these benchmarks to set goals, improve content findability and performance, and demonstrate documentation's value to stakeholders and the business.

Note: This is a sponsored post.

Here’s the report:

Zoomin's 2024 Technical Content Benchmark Report

[Tom]: It's great to see a metrics benchmark report. I haven't seen this before. Can you tell me why you decided to put this report together? What's the backstory?

[Rita]: Throughout my work, I came across various reports based on questionnaires, but never a report based on real user interaction analysis. Additionally, our customers frequently ask me about their performance metrics. For example: “I can see how I am performing and how it is trending over time, but does this mean my performance is good or weak overall?” These are the main reasons for putting together this report, which is based on data collected and analyzed from 97.6 million user sessions on technical documentation across the globe.

[Tom]: Can you tell us your role with Zoomin and tech comm?

[Rita]: I’ve been working at Zoomin for almost three years now, focusing on customer enablement and consulting on content value. My main goal is to empower customers to realize the value their documentation brings to the business, and how to talk about it — with management, other departments, and outside the organization.

Before Zoomin, I worked in the instructional design field, developing learning content and guiding various teams in the creation of meaningful and engaging learning solutions. At the beginning of my journey, I worked at KPMG Russia, dedicating over six years to devising educational strategies.

With a Master’s degree in Educational Psychology, I enjoy combining scientific research with hands-on experience to create impactful learning solutions and strategies for engaging end-users.

[Tom]: Is the basic idea that this benchmark report gives you a sense of the average metric for each category for your documentation content, no matter what your content domain might be?

[Rita]: Yes, that’s exactly it. The report draws on data from a range of industries and businesses of various sizes.

[Tom]: The report "provides insights derived from 97.6 million user sessions from 136 countries." Where does all the data come from? Does each instance of Zoomin report metrics at this high level?

[Rita]: Correct, we analyze anonymised data of user interactions with each instance — documentation portals, communities, and In-product Help widget interactions.

[Tom]: Even if some tech writers have lost faith in metrics, most managers want to see documentation quantified in some way. Do you recommend including specific documentation metrics in goals? Why? What would you say to someone who is disillusioned by metrics?

[Rita]: You are correct. When it comes to conversations with higher management, they are very data-driven and usually very interested in seeing how performance compares against the benchmark. When working with your business stakeholders, I always suggest the following:

  1. Look at the company-wide goals — what are they about? Bringing new revenue? Improving customer satisfaction and self-service? Anything else?
  2. How can this be translated into your goals? Let’s take “Bring new revenue” as an example, which for documentation teams may mean “support marketing content / ungate the site or some of the materials.”
  3. Which metrics support this? How can you measure success or failure? In our example with new revenue, you might track your page views, compare your metrics with those of non-gated sites, and see what is lacking. Examine search metrics, click-through rate, and bounce rate.
  4. Choose around 3 metrics that will serve as the north star for your goal and guide your content strategy for the next year.
  5. In addition to the benchmark analysis, track your core metrics’ performance over time to see whether you are going in the right direction; keep track of any changes you make to the UX and content structure and how they influence these metrics.

As for what I would say to someone who is disillusioned by metrics: don’t try to analyze everything at once. Instead, pinpoint 2-3 specific questions you seek answers to and monitor those metrics over time. Measuring for the sake of measuring never works. Start with a clear hypothesis, and ask critical questions like “so what?” or “what next?” to guide your analysis. Sometimes you’ll get a result you weren’t expecting to see; good or bad, learn from the data and suggest options for improvement.

[Tom]: Page views jumped from 39% to 50%, a substantial increase. What do you think is driving the increase? Are information consumption patterns accelerating?

[Rita]: Absolutely, it’s a significant increase. My view is that users are adopting a more dynamic approach to browsing, favoring quick scans of documents and faster overall consumption rates. They’re navigating through more pages in the same timeframe compared to last year, which suggests a shift towards more efficient information processing. It is, however, purely my opinion.

Furthermore, it’s important to note that Zoomin as a company, as well as the majority of our customers, has focused heavily on search efficiency this year. Increased search capabilities are likely contributing to the increase as well.

Last but not least, more and more portals are opening up to the public and removing the mandatory login at least for some of the content. This change brings a wider, more varied audience to documentation, which results in an increase in page views across various segments.

[Rita]: This is partially true. The GPT search for technical content is still catching up — I believe that on documentation sites users are more accustomed to search by keywords, so there is definitely an element of user education and adoption. Also, we shouldn’t forget that as technical content becomes more and more public and not gated, searches come directly from Google, for example. There are ways to impact GPT search adoption rates, but that is probably for another conversation.

[Tom]: The report says the bounce rate is 38%. How do you calculate this? I thought bounce rate was one of those metrics no longer available after increased privacy regulations restricted the kinds of metrics that could be collected.

[Rita]: We calculate the bounce rate by collecting cookie information (as the first party), while ensuring compliance with privacy regulations. We’ve added all the necessary legal disclaimers and provided opt-out options to adhere to the main legal requirements in the countries where we operate. The bounce rate widget shows the percentage of sessions that concluded after accessing just one page on the application.
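To make that definition concrete, here is a minimal Python sketch of how a bounce rate could be computed once you have a page-view count per session. It is an illustration only, not Zoomin's implementation.

```python
# Illustrative sketch (not Zoomin's implementation): a "bounce" is a session
# that ended after viewing exactly one page.

def bounce_rate(pages_per_session):
    """Return the bounce rate as a percentage of all sessions."""
    sessions = list(pages_per_session)
    if not sessions:
        return 0.0
    bounces = sum(1 for pages in sessions if pages == 1)
    return 100 * bounces / len(sessions)

# Example: five sessions, two of which viewed only a single page.
print(bounce_rate([1, 4, 7, 1, 3]))  # 40.0
```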

[Tom]: The report finds an average of 4.8 minutes spent viewing 5.2 pages. That’s a long time for users to spend looking at documentation. How do these metrics compare with metrics for a website or blog, which are much shorter?

[Rita]: I haven’t explored a comparison between documentation and marketing/edutainment materials — that wasn’t our objective, as they serve two different purposes and suggest different types of engagement. Technical documentation often provides step-by-step instructions or action items for users to follow while referencing it. Sure, sometimes you come for knowledge, but the main use case of documentation is still the onboarding and implementation stage. So yes, you might skim through several pages until you find a specific action you need to perform and then stay longer on the page that meets your immediate needs.

[Tom]: Suppose someone spends 3 minutes viewing a page. How do you interpret that metric? Is something wrong with the page? Is something right with the page? Besides the benchmarking of metrics, do you have general interpretations of what certain metrics might mean?

[Rita]: The State of Self-service and Experience Report that we did together with Frost and Sullivan indicates that users typically expect to spend about 6.7 minutes to find an answer in documentation portals. Therefore, if someone spends 3 minutes viewing a page, it can be considered a positive indicator, as long as the duration doesn’t exceed 7 minutes. However, it’s important to assess the context of your specific case. If the content is highly engaging and contains more learning materials, a longer viewing time might be a desired behavior on your site. To assess the effectiveness of your documentation, try searching for the top 5 frequently asked questions on your site yourself (assuming you know that the content is well-performing). Time how long it takes to find the answers — this will serve as a benchmark for ideal performance on your site for your specific case.

[Tom]: Metrics for GPT are a new thing. What are you finding? You said the click-through rate (CTR) for Zoomin GPT was 12%. Can you unpack that a bit? Does that mean the answer was insufficient and the user sought more information, or that links incentivized them to dive deeper into the content?

[Rita]: Actually, it’s the opposite. If the GPT response is sufficient, there is no need to click a link to read more. However, for long and complex instructions you would expect users to view the full content — so it really depends on the content you have and the expected behavior.

[Tom]: What does the report say about PDF downloads?

[Rita]: The current report does not provide a breakdown of specific engagement metrics. However, our analysis of the Technical Content Trends from last year revealed that only 0.87% of all sessions resulted in a PDF download. This year, that number has decreased to 0.80%. I firmly believe that PDFs are no longer the end goal; they are simply an additional service for customers who prefer to save the documentation for later, offline use. HTML-first is now the expected UX standard, as HTML formatting for technical documentation provides better accessibility and usability.

[Tom]: How specific are these metrics to Zoomin's platform versus other content platforms?

[Rita]: We focus on universal metrics that apply across all platforms for analyzing content performance. However, it’s important to understand the meaning of each metric and how to calculate it based on the data available to you. If you only have raw counts, you may need to convert them into percentages, such as the percentage of pages viewed or the click-through rate.
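As a quick illustration of that last point, here is a short Python sketch showing how raw counts might be converted into percentage-based metrics like the ones mentioned above. The function names and figures are hypothetical, not part of Zoomin's platform.

```python
# Hypothetical sketch of turning raw event counts into percentage metrics.
# The parameter names and example numbers are illustrative only.

def click_through_rate(searches: int, result_clicks: int) -> float:
    """Percentage of searches that led to at least one result click."""
    return 100 * result_clicks / searches if searches else 0.0

def pages_viewed_pct(pages_viewed: int, total_pages: int) -> float:
    """Percentage of the documentation set viewed in a given period."""
    return 100 * pages_viewed / total_pages if total_pages else 0.0

print(click_through_rate(searches=12_500, result_clicks=9_400))  # 75.2
print(pages_viewed_pct(pages_viewed=840, total_pages=2_000))     # 42.0
```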

The only aspect that is more specific to Zoomin is how we define engagement. This includes actions like PDF downloads, shares, feedback, and saving topics to collections. You can find more information about Zoomin’s engagement options here.

[Tom]: The report says 37% of companies are adopting conversational AI for customer service. Does "customer service" mean user interfaces for customers interfacing with service agents, or does customer service also include documentation?

[Rita]: Both — on the service agents’ side there are AI solutions to search through past tickets, documentation integrated within the agents’ console, and AI-driven email suggestions.

On the end-user side, we are talking about AI-infused support portals and communities, and AI-driven explicit case deflection.

[Tom]: How do these benchmark metrics compare with Zoomin's own documentation metrics?

[Rita]: Our own documentation team has done a tremendous amount of work both on the UX and the content side over the past year — we have fully redesigned the Documentation Portal, added In-product Help within the Zoomin Analytics product, and built our own case deflection for the Salesforce Community. In addition, the team has focused on the findability of content and ensured that it covers all the frequent questions that end up in our Tier 1 support.

As a result, metrics such as pages viewed, searches with results, and bounce rates have shown improvement compared to last year and have exceeded this year’s benchmark. These improvements are a testament to our team’s hard work and dedication. With all of the new solutions in place, it would be interesting to see how these metrics perform over time.

[Tom]: If you could pick just one metric by which to evaluate content, what metric would that be and why?

[Rita]: For fully gated portals, the search click-through rate would be crucial. Users heavily rely on the internal search experience in such portals. When people have specific questions, they tend to search rather than browse through tiles. Therefore, search click-through rate would be the primary indicator of the effectiveness of your self-service efforts.

For publicly available content, I would closely monitor the bounce rate. This helps ensure that there is no unusual behavior on the site, especially as it becomes more SEO-optimized.

However, my all-time favorite metric is returning users. This metric indicates the adoption of your content and the trust users place in your documentation. In the ideal scenario, we would expect this metric to be above the benchmark, with a positive trend over time. By the way, did you know that returning users to your documentation are more likely to renew or upsell? Listen to our podcast featuring David Hoare from ServiceNow, where he discusses internal research on the impact of documentation on business.

You can download the Technical Content Benchmark Report for 2024 here.


About Rita Khait


Rita Khait is the Director of Customer Enablement at Zoomin Software. She leads Zoomin’s benchmarking initiatives and the online learning academy to optimize customer engagement and business results. Previously, Rita developed instructional content and strategic educational solutions, bringing over six years of experience from KPMG Russia. She holds a Master’s degree in Educational Psychology, blending scientific research with practical applications to enhance user engagement.

About Tom Johnson


I'm an API technical writer based in the Seattle area. On this blog, I write about topics related to technical writing and communication — such as software documentation, API documentation, AI, information architecture, content strategy, writing processes, plain language, tech comm careers, and more. Check out my API documentation course if you're looking for more info about documenting APIs. Or see my posts on AI and my AI course section for more on the latest in AI and tech comm.

If you're a technical writer and want to keep on top of the latest trends in tech comm, be sure to subscribe to email updates below. You can also learn more about me or contact me. Finally, note that the opinions I express on my blog are my own points of view, not those of my employer.