Guest Post: Wikis Are the Future of Technical Documentation

Mick Davidson

The following is a guest post by Mick Davidson, a technical writer with 20 years of professional writing experience.

Before I get started I’d like to thank Tom for giving me this opportunity to bang on about why I think wikis are the future for technical documentation.

Like many writers, up to a few years ago I was plodding around using backwoods technology, stuck with systems that had once been great but now begged to be retired. From a personal angle, I felt I was stuck with dull tools, tools that were not joined up, couldn’t talk to each other, and excited me not one jot.

Then in through the window came the wiki, and everything changed. Now, three years on and two wikis later, I am 100% convinced they are going to be one of the biggest and best documentation tools we have. In short: revolutionary.

There are perhaps four reasons for this:

  1. They are simple to use.
  2. They are very flexible.
  3. They expand as your content grows.
  4. They make life as a technical author an absolute joy.

The last point is particularly important for me, as I do not go to work to trudge through thousands of words with something as dull as a word processing package. I don’t want to use a system that hedges me in: I want more. I want a technology that gives me what I need and a lot more besides. And I want excitement!

There is at least one wiki system that offers all this: Confluence, by Atlassian, an Australian company that has, in less than ten years, come from nowhere to become a major player with 18,000 clients worldwide, including Adobe, Twitter and Facebook. This is enterprise with a capital E. But don’t just take my word for it: see for yourself on Atlassian’s website.

There are many other wikis out there, and I’m sure many of them are very good, though I’m not sure they are all enterprise level. Small can be beautiful, but corporations want technology that punches at the same weight as they do, or higher, because they want to feel safe. MS Office is everywhere because it delivers the goods for many businesses. However, MS Office does not and cannot do what I felt we needed, and nor did any other system I investigated, all of which were far too rigid and expensive.

What we needed was a flexible, extensible system with built-in macros that enabled us to extend what we once called documentation into what is now known as content. Content can be words and graphics, as well as slideshows, audio, video and so on.

The system also had to allow us to build and expand the structure as we needed to, easily and without fuss. As a lone author, I don’t want to spend my time building pages and structure — I want to spend it on content that is, for users, as interesting and rewarding to read and experience as it is for me to create.

So, apart from all that, what does working with a wiki offer technical writers? How does it let us do our job better? Why is it more fun?

For a start, the wiki offers simplicity in every direction. In Confluence, and this may not be true of all wikis, it’s ridiculously easy to create a set of pages that are linked to each other and to pages elsewhere. Structures form of their own volition as your content expands. Styles and formatting, which are already set up but can be changed by editing the CSS, are enforced throughout all pages, even when you import a Word document. Sure, you can’t stop people adding their own styles, but you can’t in Word either. And, as technical authors, we obviously don’t need to be told to stick to our in-house styles, so using what’s there isn’t a problem.

Two of the biggest advantages are searching and linking. For me, these are possibly the most crucial elements. If there was one thing clients used to complain about, it was the limited search capabilities offered by more traditional documentation. Before the wiki we had something like 70 user guides, how-to guides, white papers, and various other documents. If you wanted to find something you had to go through each document one by one, a tedious and often frustrating process, especially if you were looking for topics that might appear in more than one guide. A wiki can be viewed as a database of information behind a very attractive front end, all of which is searchable at the same time.

With a wiki, everything is connected. Run a top level search and you search all your user information — in a second! And if you get too many results, you can simply modify or refine your search to focus it onto content sub-sets, by using labels (or tags as they are also known). In Confluence you can also search by date, the page’s author, and content type. For example, you can search the graphics as well as page names and words. In the end, both you and your clients benefit from this functionality.

And yes, I know you can achieve quite complex searches in other systems, but in my experience, you cannot create them as easily and quickly as you can in a wiki.

Wikis also offer advantages with linking. Links not only provide an alternate way of navigating, they take you to related content, wherever it is, so you can surf pages in the same way you’d surf content in any other website. The fact that you can link to specific topics directly from your software (surrogate help?) is brilliant for lone authors and those with limited budgets and time.

And these are just a couple of the benefits. We haven’t begun to explore collaboration. We’re now getting developers to write user content of their own free will, and clients help us improve documentation by adding comments.

Did I tell you that I’ve just started setting up a user forum in our wiki? Yes, me, the technical author, not a programmer or a web developer. With the wiki this opportunity is in my hands.

Then there’s the ridiculously cheap price. Well, in Atlassian’s case anyway. I doubt that you’ll find anything that offers such excellent value for money as a Confluence wiki. OK, maybe you will if you’re looking at their other products, such as JIRA.

For the last 12 months I’ve been writing all our user information in our Confluence wiki. Before that I used another enterprise-level wiki for two years. Enough time, I think, to get a very good picture of what wikis can do and understand what their potential is. And I strongly believe they are the future. So much so, that nowadays I’m only really interested in working for businesses that use them. Anything less would be, well, less.

But don’t take my word for it. Find a wiki that seems to suit your needs, download a trial version and see how you get on. I doubt very much that you’ll regret it.

Mick Davidson has been a professional writer for 20 years as a journalist and technical author. He works mainly in the software industry and attended Atlassian’s recent Unite conference in London, where he took nine pages of notes. He can be contacted directly, and you can follow his blog.


By Tom Johnson

I'm a technical writer working for the 41st Parameter in San Jose, California. I'm interested in topics related to technical writing, such as visual communication, API documentation, information architecture, web publishing, JavaScript, front-end design, content strategy, Jekyll, and more. Feel free to contact me with any questions.

33 thoughts on “Guest Post: Wikis Are the Future of Technical Documentation”

  1. Mark Baker


    I think you do an excellent job of laying out the advantages of wikis, and I expect that wikis will see growing use in technical communications. However, I don’t see them as the universal future of technical communication, and I think it is possible that they will turn out to be more of a transitional step than a final destination.

    As I have said before, I think we are going to see increasing segmentation in tech comm based on different business needs. Wikis are a good solution for companies where collaboration is the most pressing business issue. Those for whom translation and reuse are the most pressing issues are moving to DITA. Those with other priorities are moving to other systems. Wikis may be the future of one segment of tech comm, but there is no current reason to think they are the future of all tech comm.

    The reason I say that wikis may prove to be transitional (and I would say the same thing of DITA) is that so many of the implementations of both are still in the early stages and we don’t have a lot of data on their long term sustainability.

    There is a long history of content management failures, due essentially to creeping disorder over time. Joe Gollner has shown that most content management implementations never pay back their implementation and maintenance costs within their effective lifespan. We are already seeing that content management overhead can be a major cost in DITA implementations, and it is well known that wikis, if not carefully managed, can become complete rat’s nests in which it is impossible to find anything.

    The simplicity of the wiki model comes at a price. A wiki imposes little or no content management discipline on its contributors (the more discipline, the more hoops to jump through, and thus less ease of use). This can lead to a wiki that works brilliantly in year one becoming unusable by year five. It doesn’t have to, of course. The people who manage it can prevent this from happening if they work hard enough, but the history of systems managed by human dedication, rather than by system rules, is not particularly encouraging.

    A possible future, then, is one in which content is treated *as* a database, following the kinds of system logic that make other large dynamic data sets reliable.

    On the other hand, that may prove to be too much of a stretch for technical communications, at least in the general case, in which case you might turn out to be right, and wikis might turn out to be the future of mass technical communication. But we will have to pass through the crucible of the coming content management crisis before we see where things are really going to end up.

    1. mick davidson

      I think you do an equally good job of explaining your view. I agree, it is all very experimental at the moment. The way I see it is that wikis score because they are very simple to understand and use. They may not be right in every case, but because they take no time to learn, someone who’s busy knows that the effort-to-result ratio is very high.

      I also think that for a lot of businesses the fact that it takes little effort to create something that looks good (i.e. is at enterprise level) is a big plus. You only have to look at Atlassian’s client list to know that.

      I agree about the lack of content management, but that’s where I come in. As the tech writer it is my job to monitor the client-facing content and make sure it’s up to standard. If it isn’t, I have to fix it asap. We’re fortunate in that our wiki has a limited number of client users, perhaps about 300, so we’re not facing the sort of numbers that would mean bad content gets spotted quickly by them. I get emailed automatically every time changes are made and can review and fix them very quickly. What this means in reality is that other staff are now contributing to the user info because they’re not put off by having poor grammar etc.; they contribute because they know someone else is going to tidy their contributions up. We decided to do this from the outset, and so far it’s worked.

      The only thing that concerns me is the quantity of info in there and how searchable it is. At a top level, i.e. if you search the whole wiki, you will (potentially) get thousands of results. But with Confluence you can limit the search to individual spaces and even to individual sets of pages. So if you’re prepared to make an extra couple of clicks, you can limit the results massively.

      When we started looking at a documentation system we wanted it to be easy to use, flexible and extensible. With Confluence we get all three and a lot more besides. At the moment, I can’t think of any reason for changing that.

      1. Mark Baker


        You raise a really good point about the effort to result ratio. It’s an important metric, but one that applies differently in different situations.

        For a casual user, the effort-to-result ratio for something like FrameMaker is unfavorable, because there is both a steep learning curve and a steep configuration/setup curve, and you are still essentially doing everything by hand, so there is no productivity multiplier.

        In structured writing, you have a similar learning curve and perhaps an even steeper configuration/setup curve, but it is offset by automation which, if fully exploited, is a huge productivity multiplier, meaning that the effort to result ratio is favorable despite the startup costs.

        With a wiki, you eliminate most of the configuration/startup costs of both DTP and structured writing, so you get a more favorable effort to result ratio than DTP despite the lack of a productivity multiplier. This is especially true when you exploit that ease of use to open up authorship to a wider group, which is actually a kind of productivity multiplier itself, of the many-hands-make-light-work variety.

        This means that when you go from DTP to wiki, you should expect to see some pretty good short-term cost savings. Whether those savings can be maintained in the long run will depend on what happens on the content management front.

        As you correctly say, in a wiki, you are the content management. There is no productivity multiplier for content management in a wiki any more than there is a productivity multiplier for authoring — just simplicity and a low cost of entry.

        That lack of automation of content management can be a problem as your content set grows, because the content management overhead does not necessarily scale linearly with the content set. Content management is concerned not only with individual pages, but with the relationships between pages, and the number of relationships grows far faster than the number of pages.

        Managing all those relationships by hand can be prohibitive, which is why wikis can lose their overall cohesion over time and turn into separate clumps of pages. Books never ran into this kind of content management issue because they were always separate clumps of pages with few or no connections between books. When a book got too big to manage, you simply split it into multiple independent books. You can take the same approach with wiki content, but then you lose a lot of the power of a wiki as a hypertext medium.

        On the other hand, if your content set never scales up to the kind of size where this becomes a problem, then there is no reason a wiki should not work well indefinitely. Tech pubs projects exist at every scale, from a few words (“Heat and serve”, “Lather, rinse, repeat.”) to millions of pages (aircraft-carrier maintenance and operations), and no one solution will scale either up or down to meet the needs of every point on that spectrum.

        On the search issue, search only works well with a good relevance engine, and a relevance engine only works well with lots of content. If your wiki is on the web, then Google will do a better job of searching it than its internal search ever could. If not, you will run into a problem I have described elsewhere. That’s not a wiki issue, however, but an issue with any information set that is too big to browse and too small to search.

        1. mick davidson

          I agree about the learning curves with some of the more advanced doc software, and that’s exactly why we avoided them, and also because we didn’t want to find ourselves being lumbered with a system that we weren’t happy with. The problem with a lot of systems is that you don’t really know how they are going to work until you’ve tried them for several months, and by then you’ve made such an investment in time and money (and buy-in from colleagues and clients) that you can’t back out.

          With the wiki we had a fairly cheap entry point so cost wasn’t an issue, but we saw that it would work for us from the beginning because wikis are, in essence, very simple. And when you only have one tech author to about 25 developers, you need simple. :)

          Our Confluence wiki is divided up into spaces (spaces are the wiki’s building blocks), so we have a space for all user info plus another with overview info (both client-facing). So that structure is simple and very well maintained/policed.

          We have about 20 internal spaces – effectively our intranet. These are not policed and also have the highest number of contributors. We let staff do what they want here. The CSS takes care of most things and most people stick to the built in styles, but even if they don’t it isn’t an issue. We keep the barrier to participation and collaboration very low to encourage people in. However, if something’s going to go into the client areas, then it’s thoroughly reviewed by myself and anyone else who’s interested.

          I agree about information overload and how that affects the search, but you can search each space individually and even add a macro that will only search the page you’re on and its subpages. We encourage people to use this as they feel the need.

          That said, I think in the future we might have a third client area that only has the How-To and FAQ type info. Most clients just want to know how to do specific things, so we can limit the space to just that sort of info and keep all the heavily detailed info in a separate space.

          We don’t know, of course; we’re seeing how the wiki evolves through user input and necessity. Which is one of the beauties of this system: it’s digital Lego. :)

        2. Tom Johnson

          Interesting comment, Mark. This part stood out to me: “With a wiki, you eliminate most of the configuration/startup costs of both DTP and structured writing, so you get a more favorable effort to result ratio than DTP despite the lack of a productivity multiplier. This is especially true when you exploit that ease of use to open up authorship to a wider group, which is actually a kind of productivity multiplier itself, of the many-hands-make-light-work variety.”

          This is where the philosophies diverge. Unless you can leverage numerous people authoring content on a wiki, then you haven’t really gotten the wiki off the ground. Theoretically, the distributed authoring model of a wiki should give you far more returns than you can get from structured authoring. But figuring out that dynamic and incorporating authors is really tough.

      2. Tom Johnson

        I think the ability to limit searches to specific spaces is pretty key. If the search function makes it easy to limit the search to a filter, that’s a huge advantage in wiki software, precisely because wikis host many products on the same platform.

        1. Mark Baker

          I wrote a blog post recently explaining why searches work better when there is *more* content to search, but this discussion prompted me to make a follow-up post distinguishing between cases where small is better and where large is better. I call it the difference between findability and searchability.

    2. Tom Johnson

      Insightful comment. You’re right about the need to work hard to manage the content on a wiki. I’m struggling with that problem right now. My wiki has about 1,100 pages, and no one has acted as content manager for the site for 3 years.

      One of the disadvantages of a wiki is lack of content ownership. Someone creates a page for a specific need, the need passes, the person changes roles, leaves the company, or moves on to other projects. Meanwhile, the content stagnates and becomes out of date. It clutters up the wiki with irrelevant categories and inaccurate information. Multiply this by about 50 people, and you have a content nightmare. You suddenly have 50 people to track down to ask them about the currency of the content. Is it still accurate, is it still needed? Who owns this content? If I delete it, will someone freak out?

      On the other hand, I love seeing multiple people come together to contribute to a platform of information. If wikis have a dedicated content manager, I think it can work. At least I’m trying to do that.

      1. mick davidson

        A very quick thought on this because I’ve just woken up…
        I agree about old content and the need to manage it. Actually, the need to get rid of it so that it doesn’t clutter up the search.

        I must have a think about this and how we deal with it in the wiki. Perhaps we also need somewhere to archive it off to but I’d imagine that some content can be deleted altogether.

        I’d need input from other depts about doing this. Which is another task altogether. :)

  2. Steve Arrants

    Is there any user data available yet? How satisfied are they? Any other metrics?

    And you say “Then there’s the ridiculously cheap price.”

    How ridiculously cheap?

    1. mick davidson

      At the moment we do not have any metrics other than the word of mouth feedback we get from clients. We asked them what they wanted before we launched the wiki, and they all said ‘a wiki, or something very similar.’ We don’t get a lot of feedback but generally speaking when we do, it’s positive.

      And because clients can leave comments (actually they can edit the user info themselves if they like) they are getting involved in improving our content – with very little effort on their part. This sort of thing doesn’t happen very often, but it’s much better than the input we had before.

      As for price, by going to Atlassian we slashed our costs by about 50%. You can find out more about their wiki pricing on their website.

      1. Steve Arrants

        Well, I checked pricing for a site with 1000 users.

        Atlassian license — $8000-$12000 a year depending on lic. type
        Zen Foundation — $6000 a year
        Other plugins — $6000 a year (ballpark figure for 2-3 plugins)

        So we’re looking at a yearly cost of between $20,000 and $24,000. There is no way I can justify that to my management. Collaboration and curation are nice, but not at those prices.

        1. Tom Johnson

          Steve, Mediawiki (free) might be a solution that works for you if you’re on a tight budget. However, always take open source / free software with a grain of salt. It may be free, but it also may require more hours of development to implement and customize.

  3. Sarah Maddox

    Hallo Mick

    What a great post! It’s just bubbling with enthusiasm and brimming with information about why a wiki has worked for you. I’m so looking forward to reading all your tips and ideas over on your own blog too.

    A quick disclaimer: I work at Atlassian. I’m a technical writer there, and I use Confluence as a tool for documentation.

    I like your four points about why wikis are a great tool, even revolutionary. I’d add a fifth one:
    5. They make life as a reader an absolute joy. :)

    Sure, there are pain points too. But the biggest thing I’ve noticed is that our readers use the documentation wiki enthusiastically, and above all vocally. Our Google Analytics stats show that the documentation wiki has more visits than the company website, which is itself information-rich. Our readers add comments on the pages in huge numbers. Too many for us to manage, but luckily they’re often happy to swap information amongst themselves. We have over 50 community contributors who can edit the content too. And all the staff in the entire company can edit the content.

    It’s very interesting to see you say that the search is so great. It’s something the company is constantly working on, to find the best way to give people the results they want.

    I agree with Mark that a wiki is not the right solution for every environment. How could it be? No single tool ever will be. But I do think it’s a great tool for many environments. When I first started working on a wiki, nearly five years ago, I couldn’t stop talking about and writing about the simplicity and pleasure of it. Interacting directly with readers – just so cool. Even though some of them can be nasty at times!

    And of course things will change. But wikis are constantly changing too. They’re keeping up with us. Mostly. :)

    Ha ha, nine pages of notes from that one-day conference. I hope we’ll get to read them soon.

    Thanks for a refreshing post!
    Cheers, Sarah

    1. mick davidson

      Thanks! :) I believe your point 5 is also correct, because I’m reading it all day long and I don’t even think about it as an experience, so it’s obviously not difficult on the eyes. I do try to keep a fair amount of white space, though.

      Another benefit, which you mention, is interacting with clients. I’ve got a couple of conversations going on in the wiki at the moment about printing pages and access to content held in a spreadsheet. We could have these conversations in emails but we all know how badly wrong that can go. Having them in the wiki means they are part of the content and can therefore be searched (as in they can be searched as part of the general content or you can search just on the comments – how brilliant is that?) but it also means that everyone else can see the conversations, follow them and contribute if they like. So not only are we changing what we do because of direct client feedback, everyone can see what’s going on and will hopefully see how simple it is to get things changed.

      I hope that this sort of thing keeps growing because I find it’s very positive for clients and for us.

      Search is an ongoing problem: the more content you have, the more results you get. So how can we reduce the pile? It’s a tricky one to answer. Everyone wants one-click results, but perhaps this is no longer realistic. Perhaps we have to accept that to get really meaningful results we have to be prepared to run an initial search and then use filters to get more focussed results. That said, using Confluence’s ridiculously-easy-to-add page tree search macro gets you right to what you want, assuming you add it in the right place. :)
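
      For anyone who hasn’t seen it, the macro itself is a one-liner in Confluence wiki markup. A sketch only: the rootPage parameter and the page title here are my own illustration, so check the macro notation in your Confluence version:

      ```
      {pagetreesearch:rootPage=User Guide}
      ```

      Dropped onto a page, it renders a small search box whose results are limited to that page tree rather than the whole wiki.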

      Personally, one of my favourite things is trying to find new ways I can improve how we present info and how we use it. The wiki has a million and one plugins and macros; it’s great fun trying to see how they can be used to make things better. We’d never have had that opportunity using Word etc.

      I’ve written up half my notes for an internal report, the rest will gather dust as they’re only really for my reference. In the bar in the evening after the Unite event, someone from Atlassian came up to me and said “You’re the guy who was sat at the front taking notes aren’t you?” So, fame at last! :)

      BTW, I make a point of sitting at the front because then you hear everything clearly, make eye contact with the speakers (handy if you’ve an ulterior motive) and are within grabbing distance of freebies should they be thrown into the crowd. :)

  4. DiSc


    Very interesting post. Lately I have been pondering the use of wikis in technical documentation as well, and I frankly only see advantages, at least compared to DocBook, DITA, or, say, RoboHelp (that is where my experience with authoring tools stops).

    But how do wikis compare on, for example, single-sourcing? I know, for example, that DokuWiki has plugins to export to PDF and DocBook, and I assume Confluence does as well, but how reliable is that?

    XML-based solutions may be clunky, but I know exactly what I will have at the end of an XSL transformation. I do not know if wikis are as reliable.



    1. mick davidson

      Thanks for your comment. I can’t say much about single sourcing, I’m afraid. We use the wiki as a place for clients and staff to store, write and access data; it’s very straightforward in that regard.

      However, you can export to Word, PDF, HTML and XML (I haven’t tried the latter two), and you can also use it on mobile devices, though I’m not sure the wiki is designed for them. That said, I have a new iPod Touch and I can view everything really well on that (i.e. it looks like it does in a web browser); I’ve even edited pages using it! So it’s pretty flexible.

      Confluence has hundreds of plugins, so there are probably some that could do what you want; you could easily find out by searching the plugin area on Atlassian’s website.

      One area of weakness for me is the Word export, but only because the formatting isn’t very good. If I export content, I want the pages to be formatted as if they had been written in Word, but that doesn’t really happen. However, the PDF export is very good, about 95% right I’d say. I’d be quite happy to allow clients to print to PDF but not to Word.


      1. Sarah Maddox

        Hallo mick and DiSc,

        There are a number of options for single sourcing in Confluence, some via the core functionality and some via plugins and add-ons. I wrote a post about it a while ago.

        There are even more options available now. I’ve covered them in my new book. And in fact, I can personally attest to the reliability of the DocBook export (via Scroll Wiki DocBook Exporter, from K15t Software), because I wrote the book on a Confluence wiki and then exported it to DocBook for the publication process. That was a very interesting and rewarding experience, and I’ve written a post about it too.

        For people who are especially interested in technical documentation on Confluence, I’m hoping that my book will be just what you need. I’ve poured my 4+ years of experience into it. The book is easy to find online, just in case you’re interested. :)


        1. DiSc

          Hello Sarah,

          Thank you for the links. I need to find the time to download the Confluence trial version and play around with it for a while. Your book is also on my to-read list.


    2. Tom Johnson

      I want to join in on the single sourcing discussion. My familiarity is with Mediawiki rather than Confluence. Also, this comment is less about single sourcing and more about content re-use. But here’s an example of how to re-use content conditionally on Mediawiki. Check out this page: Conference venue. If you look at the source code, there’s not a whole lot there. After the first paragraph, you see {{:Riverton Office Building}}. This tag pulls in content from the page Riverton Office Building. On that page, I’ve surrounded some text with <noinclude> tags, which means “do not include this content in the transclusion.” I could also add <includeonly> tags there; that text would then only appear in the transclusion, and not on the original page. (Note that Mediawiki calls this function “transclusion” rather than content re-use.)

      This is pretty simplistic, I know. Still, you could get a lot more sophisticated with these techniques. Of course as things get more complex, it’s harder for non-technical people to contribute. But you could still do it. Some of the code and techniques used on Wikipedia are pretty sophisticated.
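
      To make that concrete, the wikitext involved is only a few lines. This is a sketch using the page names from the example above; the body sentences are placeholders, and rendering details depend on your Mediawiki version:

      ```
      <!-- Source of the "Conference venue" page -->
      Our next conference will be held at the Riverton campus.
      {{:Riverton Office Building}}

      <!-- Source of the "Riverton Office Building" page -->
      The Riverton Office Building houses our training rooms.
      <noinclude>This text appears only on this page, never where the page is transcluded.</noinclude>
      <includeonly>This text appears only in the transclusion, not on this page.</includeonly>
      ```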

      1. Mark Baker

        The problem with that kind of transclusion is that it isn’t manageable at anything but the smallest scale. It creates a hard dependency between the topics, which means you can’t edit a transcluded topic without also examining the effect on the transcluding topics. (And that supposes that the wiki provides a means to detect and locate transcluding topics.)

        If you get any significant amount of this kind of transclusion in your wiki, managing it is going to be a nightmare. (Which is, by the way, why CMS systems largely abandoned this model years ago.)

        DITA’s approach of placing the transcludable material in separate topics is a partial improvement, though it comes with significant costs and management headaches as well, because you have to granulate all of your content to make it potentially reusable, and then have to draw complex maps to stitch it all back together again, reuse or not.

        I’m planning a post on the SPFE blog explaining why doing content management by hand is far more expensive than people expect it to be, and why content management systems often fail as a result.

      2. DiSc

        Hello Tom,

        Thank you for weighing in. I had not explicitly thought about conditional content in wikis, but it is very much what I wanted to say in the comment above. We write docs in DocBook and deliver them as HTML and PDF. Some things, like internal links, differ between the two, and we achieve that with DocBook tags.

        I have to say, I have never come across that kind of syntax in DokuWiki, but I see MediaWiki apparently supports something similar, so I suppose other wikis do as well.


      3. Sarah Maddox

        Content re-use is a great discussion. Thanks for the information about how MediaWiki does it, Tom. That’s very interesting.

        In Confluence, there are 3 macros involved. (A macro is a special piece of content that you can add to a page, to invoke a particular piece of functionality.)

        Using the “Include Page” macro you can include the entire content of one page into another. In wiki markup, it’s {include:Page name}. You don’t have to use wiki markup — there’s a graphical UI for inserting macros too.

        If you want to include just part of one page (page A) into another (page B), first use the Excerpt macro to define the chunk of content on page A. Then on Page B, use the Excerpt Include macro to pull in the text.
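        A hedged sketch of what these macros look like in old-style Confluence wiki markup (the page name and the excerpt text are illustrative; newer Confluence versions insert the same macros through the graphical macro browser instead):

        ```wikitext
        On Page A, mark the reusable chunk with the Excerpt macro:
        {excerpt}Our support desk is open 9am to 5pm, Monday to Friday.{excerpt}

        On Page B, pull in just that excerpt:
        {excerpt-include:Page A|nopanel=true}

        Or include the whole of Page A:
        {include:Page A}
        ```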

        The included content is pulled in each time the page loads. So if you update the content on page A, people will see the updated content next time they load page B.

        As Mark says, using excerpts can get out of hand, because it’s difficult to find all the places where the inclusions are defined. Instead, we prefer to put each piece of reusable content onto a separate page, and use the Include Page macro to reuse it elsewhere. You can insert as many pieces of text into a single page as you like. In other words, one page can have any number of Include Page macros.

        To manage the inclusions, we define an area of the wiki that we call an “inclusions library”, and store all the reusable content there. This can be text, images, diagrams – any content at all.

        In core Confluence, there isn’t functionality for conditional inclusions, other than the Excerpt Include macros. In other words, you can’t say: “Include this piece of text if I’m in X space”, or “Include this piece of text if the reader is in group Y”.

        BUT I do know of a user macro that a developer has written, which does allow conditional inclusions. I’ve been planning to blog about it, but at the moment I’m having trouble finding the time!

        Also, K15t Software are developing a plugin that will provide some sophisticated inclusions logic. Roll on version 1.0 of the Scroll Versions plugin!

        Cheers, Sarah

        1. mick davidson

          Sarah et al,
          I use the {Include page} macro in a couple of places. It’s dead simple to install, and only needs one parameter – the name of the page you want to include. When you open a page containing the macro, it renders everything that’s in the pages you’ve included. Very simple, very effective.

          So far I’ve used it to group together a number of separate pages into one page, so that people can search the info from all of those pages in one place. This means that if they can’t find the info on that page, they know it doesn’t exist in that context. But if they only want to see or search more specific info, they go to the appropriate individual page rather than the one with everything on it.

          For example:
          I have a spreadsheet with hundreds of rows of text, arranged in groups. This information changes over time, with new rows being added and others taken away. On top of that, we have to include links from each row to the content in the wiki.

          The spreadsheet approach is OK, but it’s a hassle to update and very bulky. We want small, easy-to-manage and quick-to-read chunks of info, but we still want to be able to run one search and find the relevant info, as we can in the spreadsheet. There’s also a problem with attachments: if the info is in an attached doc, people can download it. Then the attached version gets updated but the downloaded one doesn’t – oops! We definitely want to avoid that happening, so not making the info available as a download is important.

          Using the {Include page} macro to gather up all the groups from their individual pages on to one page allows us to do that with very little effort. It’s very simple to set up and use, though I appreciate that seeing an example would probably make it clearer.

          Also, we try not to duplicate content, and I’ve yet to find a place where re-use is needed. When we do need content from another page, we create it in the place it makes most sense to live, then link to it.

          And if that info happens to be under a sub-heading on a page, I make sure the URL includes the anchor. For example, www.mywiki/main_page/#subheading jumps straight to the section on that page called Subheading.

          Again, being a lone author I don’t have time to do complicated things, so I keep things simple. Which is why we chose a wiki in the first place. :)

          1. William Hayes

            The thing I love about using a wiki (i.e. the web) for documentation is that instead of transclusion (if I understand that concept correctly), I can link to the other topics. That allows me to reduce the redundancy of my content. When I added ScreenSteps to Confluence, I nearly had a docugasm.

            I try to keep pages focused on a topic/subject/point and then use interlinking to keep everything tied together. It’s pretty much how I write code: I write an overview (the main subroutine), then functions (topic pages), with function libraries for things re-used in many programs (e.g. general content referenced by several overviews).

            Regarding staleness and content management – have we ever seen a system that didn’t require a content manager? Even for printed manuals, we had to have an admin insert updated pages and remove outdated ones. It’s the same as keeping a house: you can’t build it and never maintain or clean it (unless you are planning a complete rebuild every X months – personally I’d rather clean my house occasionally than start all over when it gets too filthy to live in :)

            Great post and some great comments!

  5. mick davidson

    Have you recovered yet? :)

    I use links a lot, which is one of the main benefits of a wiki system: it can be joined up very easily. Following links means you can surf and search easily and quickly.

    Also, if you look at a page’s info (available from a menu), you can see (amongst other things) its incoming and outgoing links. That means that, while updating the current page, you can follow both sets of links, check their content, and update it at the same time.

    We use links to go to related info, as in ‘see also xxx’, or to link to a subpage (or one further up the tree) and also with certain keywords. For example, if you are reading about Module A, and it mentions Module C, then we link to Module C too.

    And the links stay current even if you move a page, so you never have to worry about updating them. So not only is everything simpler to do, you can actually do more. Which is not a bad thing when you’re a lone author.

  6. Mark Baker


    Certainly linking is a great alternative to transclusion for a lot of things. Transclusion is really linking in reverse: whereas a link takes the reader to the content, transclusion brings the content to the reader (that is, into the context of their current content).

    We should make a distinction between dynamic transclusion (where content is pulled into a frame on a web page when the web page is displayed) and static transclusion (where content is physically included in multiple web pages by a publishing system, which then displays the pages statically on the web). Perhaps the latter should not be called transclusion at all.

    In any case, it seems that most tech writers really are not thinking this way yet, which means that systems that do reuse (like DITA) mostly just do static transclusion — physically including the content in multiple publications.

    Wikis, by contrast, are doing dynamic transclusion, which, to my way of thinking, is both more sophisticated and simpler than the static reuse that DITA does. On the other hand, the lack of any kind of formal content management in a wiki limits how much of this you can manage, even with the best hand-management practices. Longer term I expect/hope that we will see systems that provide both. The SPFE Open Toolkit won’t support this in the first release, but the architecture should support it well.

    I think it is worth noting that, despite the amount of time we have been creating help systems and web sites, most tech writers still don’t seem to really understand hypertext. People who give advice on writing for the web still talk mostly about brevity. They hardly even mention hypertext as a design approach, and some are positively hostile to linking. Paper thinking still dominates the profession, even when the content is delivered into hypertext media. Again, I expect/hope that tech writers will eventually grok hypertext and start to use it.

    Once that happens, we should expect to see a big shift in tools. Once people really start to understand hypertext, we should see a much reduced emphasis on reuse as people realize that linking and transclusion provide a superior approach to the problems that reuse addresses.

    1. mick davidson

      So are you saying that tech writers who work with wikis are actually on the cutting edge? What a great position to be in, far better than being backroom boys and girls: not gone, but forgotten. :)

      That said, I’ve been looking around the internet for job specs with the word ‘wiki’ in them, and they are very few and far between. At the Atlassian Unite conference in London last week, my discussions with others definitely led me to believe that tech writers who use wikis are at the pointy end of where the business is going. So although it’s great to be at the forefront of change, the opportunities are sadly limited.

      Thanks for explaining how transclusion works and how it is used. I haven’t started using it in Confluence yet, but I’m on the lookout for opportunities to do so.

      I find this amazing:
      “…most tech writers still don’t seem to really understand hypertext. People who give advice on writing for the web still talk mostly about brevity. They hardly even mention hypertext as a design approach, and some are positively hostile to linking. Paper thinking still dominates the profession, even when the content is delivered into hypertext media.”

      After all this time we as a profession don’t understand hypertext? Really? How is that possible?

      And as for people being hostile to linking – why on earth would you be? It’s just another tool, and let’s face it, without linking the internet would be dead in the water wouldn’t it?

      1. Mark Baker


        Well, if the people working on wikis are truly designing hypertext, then yes, I would consider them to be on the cutting edge of design, if not the cutting edge of information development process. You can, of course, use a wiki and not think about hypertext as an approach to information design at all, so just using a wiki does not mean you are on the cutting edge. :-)

        The problem with doing hypertext on a wiki is that it means all the links have to be created and maintained by hand, and that is expensive, which naturally limits the amount of linking that happens.

        Part of the hostility to linking may indeed be that cost. In DITA, for instance, where static reuse is the holy grail, people are actively encouraged to remove linking from topics because of the very high cost of managing static links in statically reused content.

        Another aspect of the hostility is probably writers trying to hang on to the power to direct the reader along a path the writer has chosen. Hypertext liberates the reader to choose their own path through content, and many writers are not happy about giving up the power to dictate to the reader what they should read and when.

        So if you really want to be on the cutting edge, I think you have to be in a structured writing system that supports soft linking. And you have to be willing to assist the reader in choosing their own path, rather than dictating a path to them.

        1. mick davidson

          My philosophy is: create links and make it possible for readers to follow their own path, not the one I feel I ought to dictate to them. They know what they want far better than I do. My job is to make sure the content is relevant and useful; their job is to use it as they see fit.

          However, it is sometimes desirable to keep a reader on a specific page. For example, if they’re following a how-to guide, you want them to read it from beginning to end, not wander off – and if they want to get the task done, wandering off is pointless anyway. But links to related info are actually useful to the reader, and they can follow those as and when they see fit.

          If authors really are scared that readers will wander off, then maybe they ought to change profession: we’re here to serve and help, not to dictate. I loathe this idea that we have power over our users – what kind of way is that to look at the relationship between the writer and the reader? I find it very weird.

          There is an overhead with links, but I’ve yet to find it an issue in Confluence. I do find the occasional broken link (because the landing page has been deleted), but I know that if I move a page, the links from other pages still work without me having to do a thing.

          Of course, creating links does take a few seconds, but I don’t consider that much of a problem. In Confluence’s new editor, when you enter a square bracket and start typing a word, Confluence assumes you’re creating a link and suggests pages to link to, based on what you are typing. So you don’t have to do anything other than choose an option. Is it any wonder that I love this tool? :)

  7. Gurpreet Singh

    Very interesting read Mick!

    Wikis are definitely the future of technical publications.

    I have worked in an organization that used the Confluence wiki to manage documentation for a fleet of commercial EDA software products: hundreds of user guides with over 500k pages of content. We even went one step further and generated CHM, PDF, and webhelp output daily through a network of Perl scripts, plugins, and wiki tweaks. It was certainly not easy!

    – Gurpreet

    1. mick davidson

      Sounds like I’m preaching to the converted with you, then. :)
      It sounds like you’ve achieved some incredible things with it too. Perhaps you should be a case study.

