Stuck in a system

I’ve been reading Sarah Maddox’s new book, Confluence, Tech Comm, Chocolate, and have been impressed. I enjoy the energy and speed in Sarah’s writing. If you’ve read her blog before, her book has the same tone.

This is not a book review, because I’m not yet finished with the book. But it doesn’t take many pages to come to some realizations worth noting. My primary realization: I wish I had a Confluence wiki rather than a Mediawiki wiki. I intend to explore Confluence some more. After reading the chapter on installation, it seemed easy enough, so I am uploading the bin file right now and will attempt to get it working with Linux commands on our team server (I don’t know Linux, sorry, so I’m really relying on Sarah’s documentation here).

While that bin file is uploading, let me elaborate on what I think is a larger problem: vendors all too quickly forget how hard it is to change systems. I didn’t choose Mediawiki. I didn’t choose Joomla. I didn’t choose SharePoint. I didn’t choose Author-it. I didn’t choose many of the help authoring tools that are at my disposal. What’s on “the menu” and what’s “off the menu” — many times those decisions are made by others.

But regardless of how you get saddled with a tool, the longer you use it, the more difficult it becomes to break free of it. For example, the blog for our tech group runs on Joomla. Why? Because the developer who initially set it up was familiar with Joomla, and it works well as a content management system, which was its initial purpose. Other developers coded a community framework into Joomla, adding a custom extension. This extension was integrated into other systems, such as JIRA. And then other developers hacked in blogging features we needed, such as tags and comments.

The other week we discussed a redesign for the site, and several of us felt that if we wanted to switch from Joomla to WordPress, now would be the time. However, after some discussion, we decided that the cost would be too high. Developers would need to reinvent nearly a dozen customizations and integration points. It would be easier and cheaper just to stay with our existing platform.

One could say the same about our wiki platform. Mediawiki was free. Some helpful extensions integrated nicely with our user database. Others added a WAM (web access management) component to enable single sign-on, and little by little, it was made to work well with our other systems. Now Tom comes along and tries to make it work for documentation, but alas, Mediawiki handles documentation poorly. Most notably, its lack of content spaces makes it highly problematic. I noted in a previous post, Subpage Titles on Wikis — Challenges, Conventions, and Compromises, the problem of having all product information in the same space. If all of your product information lives in the same space, then page titles, search, and navigation all become problematic.

I won’t go to the trouble of re-articulating all the issues here. My basic point is this: As I learned more about the way Confluence provides so much customization around spaces — you can skin a space, you can provide different versions in different spaces, you can customize rights for different spaces, and so on — I started to feel jealous and then eager to experiment with this other wiki platform.

But herein lies the problem, and it is a universal problem, not just in IT. Once you’re stuck with a system that’s integrated knee-deep into your environment, how do you get out of it?

For example, suppose you decided to purchase a heavy-duty, enterprise-wide help authoring system, one that costs more than $100K. About a year in, you start to feel that it’s the wrong direction. But you’ve already spent all your money on that solution, and your manager won’t be happy to learn that all of this money was spent in vain, that another solution was better, cheaper, simpler, and easier. Do you keep going down the original path because you’re already so invested in it? Should you simply try to “make it work,” because you’re three-quarters of the way across the river, and it’s silly to change horses midstream?

Or do you jump off that horse, swim back to the original side, beg for money to buy a different horse, one that swims much better, and then attempt to re-cross the river? Further, suppose you’re not the only rider on the horse. Instead of being the main rider, suppose you’re a child passenger, dependent on a parent to make the decisions. (That is often the case in an IT organization. The technical writers don’t have access to the organization’s bank account, nor the authority to spend from it.)

This scenario reminds me of a conference presentation I once attended on project management, called The Abilene Paradox. Basically, it’s an extended analogy about why projects fail and the collective mentality that supports failure. The paradox was depicted in a video showing a group of people sitting around on a Sunday afternoon, trying to decide what to do for dinner. The father in the family says, “Well, we could go to that old diner in Abilene.” Abilene, a small city in Texas, is about 50 miles away, so it’s no small journey.

No one wants to go, but no one has a better idea, and before they know it, they all pile into a hot, stuffy truck and travel 50 miles down an old highway toward Abilene. The whole way there, almost no one talks, because no one really wanted to go in the first place. When they reach the diner in Abilene, they order dinner, have a meager conversation, and then get back in the truck and drive home.

The trip takes most of the afternoon and evening, and afterward they start talking about why they decided to go to Abilene. It turns out the father suggested the trip in jest, not really thinking they would go. But when the others agreed, he went along as well. Everyone chimes in about how he or she never wanted to go but agreed because everyone else seemed to be behind the idea.

The paradox is that you have a group of people all behind a decision, working to see it through, when in fact none of them actually wants the outcome.

How can you get out from under the Abilene paradox? How can you buck off ineffective systems and install what you really want, even when you’ve sunk so much money into an existing solution?

I’m not sure, but that is the crux. When I walk down the vendor halls at conferences and a salesperson shows me how slick his or her tool is, it’s not that I’m closed to the idea. The tool probably does work better than the tool I’m using. It’s probably more efficient, more flexible, and better suited to the task. But the cost of switching and moving in another direction is … oh … so … hard.

In discussing the difficulty of switching solutions, a colleague pointed me to the theory of “sunk costs.” The basic idea is that however much money you’ve sunk into something, that money should not affect future decisions, because it is not recoverable. Yet the fact that you’ve already sunk money into a solution persuades you to believe that it was the right solution, even when it wasn’t.

The Wikipedia entry on sunk costs elaborates with a great story. The idea is that the more money you’ve sunk into a solution, the more apt you are to believe it’s the right one:

In 1968 Knox and Inkster, in what is perhaps the classic sunk cost experiment, approached 141 horse bettors: 72 of the people had just finished placing a $2.00 bet within the past 30 seconds, and 69 people were about to place a $2.00 bet in the next 30 seconds. Their hypothesis was that people who had just committed themselves to a course of action (betting $2.00) would reduce post-decision dissonance by believing more strongly than ever that they had picked a winner. Knox and Inkster asked the bettors to rate their horse’s chances of winning on a 7-point scale.

What they found was that people who were about to place a bet rated the chance that their horse would win at an average of 3.48 which corresponded to a “fair chance of winning” whereas people who had just finished betting gave an average rating of 4.81 which corresponded to a “good chance of winning”. Their hypothesis was confirmed: after making a $2.00 commitment, people became more confident their bet would pay off. Knox and Inkster performed an ancillary test on the patrons of the horses themselves and managed (after normalization) to repeat their finding almost identically. (See Sunk Costs.)

Sunk costs present another challenge in getting out of a system. When you’ve sunk money into a solution, you’re less likely to see the problem in an unbiased way. Sunk costs create a kind of myopia: you can’t help seeing the solution you’ve purchased as the superior one.
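
To make the sunk-cost logic concrete, here is a minimal Python sketch, with entirely made-up figures, of the forward-looking comparison that sunk-cost thinking crowds out. The money already spent appears nowhere in it:

    # A minimal sketch of sunk-cost-free decision making, with made-up numbers.
    # Whatever was already spent on the current system is deliberately absent:
    # only future costs should drive the decision.

    def five_year_cost(annual_licensing, annual_maintenance, migration=0):
        """Total forward-looking cost over five years."""
        return migration + 5 * (annual_licensing + annual_maintenance)

    # Hypothetical figures for staying versus switching.
    stay = five_year_cost(annual_licensing=20_000, annual_maintenance=15_000)
    switch = five_year_cost(annual_licensing=5_000, annual_maintenance=8_000,
                            migration=40_000)

    print(f"Stay:   ${stay:,}")    # Stay:   $175,000
    print(f"Switch: ${switch:,}")  # Switch: $105,000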

Clearly the best way out of these situations is to avoid them in the first place. Before you commit to a tool or other system, evaluate it in depth. Do research, run pilot tests, interview other people who have bought the product, and so on.

However, no matter how much research you do, chances are you won’t fully understand the strengths and weaknesses of the system until you’ve actually used it in a real scenario. And to really put a system to the test, you need to use it against a real project, with real users. That kind of pilot testing may take months, maybe even a year. By that time, the tool landscape may well have changed so that your initial evaluation is no longer current. An entirely new set of variables may be in effect.

 



By Tom Johnson

I'm a technical writer working for the 41st Parameter in San Jose, California. I'm primarily interested in topics related to technical writing, such as visual communication (video tutorials, illustrations), findability (organization, information architecture), API documentation (code examples, programming), and web publishing (web platforms, interactivity) -- pretty much everything related to technical writing. If you're trying to keep up to date about the field of technical communication, subscribe to my blog either by RSS, email, or another method. To learn more about me, see my About page. You can also contact me if you have questions.

20 thoughts on “Stuck in a system”

  1. Sarah Maddox

    Hallo Tom,

    Thank you for the lovely shout out for my book and blog. I saw your tweet and came to this post wondering what systems you were stuck in. What a nice surprise to see the picture of the book. :)

    The situation and problem you describe are so true. One thing crossed my mind, about measuring costs. Instead of counting the money and resources already invested in a solution, we should count the cost of continued investment in that solution, especially if it’s already creaking at the seams.

    Estimating the money saved or earned by moving to the new system is more difficult. I do think that an expert’s gut feel should be given some concrete value too.

    Cheers, Sarah

    1. Tom Johnson

      Thanks Sarah. I appreciate your comment and tips. I think your book is excellent so far. Sometimes it takes me longer to get through books, depending on what my interests are at the time. I try to mention what I’m reading even if I’m not finished yet.

      Re measuring continued investment, that’s a good strategy.

  2. Fer O'Neil

    “Stuck with a system” sounds quite familiar, I’m sure, to most who read this post.
    I feel the same way about all the content management tools/strategies that are presented at tech comm conferences–it’s good to learn new things, but tech writers probably aren’t the correct audience to advocate for sweeping changes to “how we do things.” Sure, I would love to convert to all the new technologies and get away from writing PDF documentation, but I don’t make those decisions. If only we could sign up our managers/directors et al for the webinars/conferences and make them attend!

    1. Tom Johnson

      It is unfortunate that the people who make these decisions often aren’t in the field using the purchased tools.

      Hey, I see you’re speaking at Lavacon. That’s great. I’m planning to attend that conference and will no doubt run into you there.

  3. Leigh White

    Excellent post, Tom. Such a common lament of so many organizations and groups. There are no clear answers, I don’t think, once that horse is out of the barn…or stuck in the barn, as it were. The best-advised course is not to get stuck in an unsuitable system in the first place and that requires a lot of upfront due diligence: creating a clear list of needs and requirements independent of any particular tool or vendor; presenting those requirements in an RFI to selected vendors; eliminating those who can’t meet the requirements, regardless of how shiny or popular their tool is; asking the shortlisted vendors to present demos with your content and your scenarios (not their canned demos). Though no tool is going to be perfect, you can at least know up front where the pain will be. And, if you’re being forced to piggyback on another group’s tool choice, you can give the decision-makers fair warning about how that tool will not be suitable for your group. Who knows, your information could influence the decision even if you think you have no influence. Thanks for the food for thought!

    Best,
    Leigh

    1. Tom Johnson

      Leigh, thanks for the comment and advice. I think you’re right — doing extra due diligence up front is the way to go. Certainly this is some of the learning one receives from experience. Next time …

  4. Richard Hamilton

    Tom,

    Thanks for the shout out on Sarah’s book. I have a couple of thoughts on your points:

    1) One way to make transitions somewhat easier is to make sure your content and its metadata are marked up in a vendor-neutral, industry-standard format. Don’t allow content to become “trapped” in a proprietary format/system. This doesn’t solve the problem of re-implementing existing features in a new system, but it does mean you may be able to move content to a new system without expensive conversions. I suggest making that a requirement on any system you use (along with a requirement that you can import and export your content in vendor-neutral, industry-standard formats).

    2) A good tool that you are expert at using is better than a better tool that you are not expert at using. Unless a new tool is a game-changer for what you are trying to do, think hard before changing. So, for example, I still prefer make over ant because *for what I’m trying to do* ant is not a game changer. But, when I started editing DITA, I moved to Oxygen from emacs (which I’ve used since before you were born:-) because for DITA, Oxygen is a game-changer.

    1. Tom Johnson

      Dick, thanks for the insight here. I think sticking with industry standard markup is a definite must-do principle. The ability to export and import freely should certainly be a major consideration.

      Re your second point, I also agree with you here. It’s hard not to fall prey to the “everything looks like a nail because I have a hammer” syndrome, but sometimes expertise with one tool allows one to see applications and workarounds more quickly than with a less familiar tool.

  5. Rio

    This topic has obsessed me for a while, and I’d like to add a very simple thought as to how to avoid getting saddled with huge, ostensibly productivity-enhancing but ultimately burdensome systems: start with the end users’ evaluation. Usually, the people who pick and implement a system are completely detached from its ultimate function. Why does no one ever ask the opinion of the clerk, the student, the receptionist, the writer, the researcher, etc., as to the usefulness of something, or even conduct interviews to guide the adaptations?

    1. Tom Johnson

      Rio, good point. You’re right that we usually overlook the user experience in the tools we consider, instead favoring authoring features or other efficiencies. I guess it depends on who actually is the user — in the case of a help authoring tool, the users include technical writers, while the end-users include the people using the application.

  6. Jen

    I find this topic really interesting. Although I have no direct experience working within this sort of system, two corresponding situations come to mind: those of Steve Jobs and Toyota. Both of these entities either completely scrapped a project and reworked it or made major recalls in order to deal with a problem. While these are very broad examples, and also from companies with major $$$, I think the final products/reputations benefited from the moves. Also, this problem seems like a common one when we find ourselves having difficulty innovating. Alas, these concepts are so much easier to examine theoretically than in actuality. Thanks for the exploration of the concept.

    1. Tom Johnson

      Jen, thanks for the insight. I like the comparison to Jobs and Toyota. I think you’re spot on here — the ability to completely discard years of labor because something isn’t working just right takes a lot of guts and foresight.

  7. Mark Baker

    Tom, you’ve identified a really important problem. I agree with Richard that part of the solution is to make your content as independent as possible from your tools, though I will demur on industry standards as the best way to achieve this.

    As far as system lock-in is concerned, the problem isn’t really whether a format is proprietary or an open standard. The problem is whether there are system semantics embedded in the content. If the content is system-specific in structure or semantics, it is not easily portable to another system, regardless of whether the format is standard or proprietary. A format can be an industry standard and still have system semantics embedded in the content. DITA is a good example of a standard that embeds a lot of system semantics — in this case, DITA semantics — in the content.

    If you had content in DITA and you wanted to move it to another system, particularly a system that was interested in different uses of metadata or different kinds of relationships, then moving your DITA content to that system might be a challenge. (And the usual reason for moving to a new system is precisely that it is interested in different metadata and different relationships, since these are the things that drive different kinds of functionality.)

    To truly be independent of systems, therefore, you need your content to be in a format that contains little or no system semantics.

    Of course, this comes with its own issues. Most systems on the market are designed to work with content that embeds their system semantics in the content. (This both helps lock people into the system, and makes it easier to implement certain specific kinds of functionality — though at the expense of potentially excluding other types of functionality.) So this approach will often require some kind of custom development, or some kind of custom bridge to a commercial system.

    The other issue is that if you don’t embed application semantics in the content, you have to embed subject semantics, so that the application has something to work with to manage and publish the content. This generally needs to be semantics specific to your own subject matter, in order to be reliably used for management and publishing of your content, so an industry standard format probably won’t do for this purpose.

    What this means is that if you want real system independence for your content, you are going to have to do at least some content modeling and system development work yourself. Depending on your business needs, this may be very much worth doing, but there is no way to buy system independence off the shelf.

    There is also a more subtle way in which you can be stuck in a system. If you have been using the same system for many years, the chances are high that that system has shaped the way you think about information design and information development process. These tacit design and process assumptions absorbed from the old system can have a profoundly negative effect on how you shop for, implement, and use a new system. I blogged about this recently: http://everypageispageone.com/2012/05/11/the-design-implications-of-tool-choices/

    1. Tom Johnson

      Mark, thanks for your insight on this issue. I recently attended Confab, and they really stressed the importance of structured authoring. Many presenters feel that by tagging your content correctly, putting it in the right structure, you can ultimately “spray” the content onto a plethora of devices and platforms, therefore liberating the content. (Interestingly, there wasn’t any talk about collaborative authoring during the conference.)

      I feel that if I don’t embrace structured authoring, I’m going to be in the dark ages when it comes to help authoring.

      To marry structured authoring with wiki publishing, I’m thinking of using a HAT like Flare to author content and then publishing out to a wiki format. I would need to publish to Word and then use a Mediawiki add-on for Word to convert the syntax to Mediawiki. But this would give me much more flexibility with the content. I would need to do some tweaking of the Mediawiki output (for example, categories, cross-navigation links, and images probably wouldn’t turn out well without some adjustment). But if the original source lives in Flare, I could also push out a DITA output, a PDF output, a mobile output, and an online help output. The content would also be in an XML format, which would facilitate feeding the content into Mark Logic databases, Lingotech (for translation), and other platforms.
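
      As a rough sketch of that conversion step, a converter like pandoc could script the Word-to-wiki hop in one go (pandoc and its pypandoc wrapper are my assumptions here, not tools named above; pandoc reads DOCX and writes Mediawiki markup):

          import pypandoc  # thin Python wrapper around pandoc; both assumed installed

          # Hypothetical sketch: convert Flare's Word output to Mediawiki markup.
          # Categories, cross-navigation links, and images would still need the
          # hand-tweaking mentioned above.
          wikitext = pypandoc.convert_file("flare_output.docx", "mediawiki")

          with open("topic.wiki", "w", encoding="utf-8") as f:
              f.write(wikitext)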

      What are your thoughts on this approach?

      1. Richard Hamilton

        Tom,

        How about flipping it around and authoring in the wiki and exporting to other forms? You get some great collaborative advantages when authoring in a wiki. As described, you’ve got a pretty convoluted process (Flare -> Word -> MediaWiki+tweaking). You might be better off finding a wiki that will give you XML output (Confluence is one, but there are others). From there, you can get the outputs you need and also the advantages of a wiki on the front end.

        All that said, I’d be wary of putting the cart before the horse here. Tool selection ought to be at the back end of your process, not the front end.

      2. Mark Baker

        Tom,

        I think it is very important not to confuse “in XML” with “structured”. MS Word produces content in XML (thus the X in DOCX), but it is not structured in the sense we normally use the word.

        The kind of structure we are talking about when we talk about structured content is concerned with two things:

        * Making the content conform to a type. That is, laying down a set of rules (a template) for what content of a certain type must contain, and validating that as far as possible.

        * Enabling the processing of the content, other than simply publishing it as it appears on screen, such as adding links, pulling content together from different sources, reordering, collating, selecting, etc.

        These two things are deeply connected. If content conforms to a type, you can create a rule for that type and apply the same automation to all content of that type. And if you can apply automation to content, you can make the type much stricter, and do things like adding consistent headings in the tools, rather than asking the authors to follow a rule for writing headings.

        It is about, as I have said elsewhere, creating content *as* a database, not merely storing content in a database.
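
        For instance, here is a minimal, made-up Python sketch of that connection: one rule defines a “task” topic type, and a single validator then applies to every topic of that type.

            # A made-up "task" topic type: the rule lists required sections.
            # Because every task topic must conform to the same rule, one
            # validator (and one publishing step) applies to all of them.

            TASK_TYPE_RULE = ["title", "context", "steps", "result"]

            def validate(topic, rule):
                """Return the required sections this topic is missing."""
                return [section for section in rule if section not in topic]

            topic = {
                "title": "Install the application",
                "context": "You have shell access to the server.",
                "steps": ["Upload the bin file.", "Run the installer."],
            }

            print(validate(topic, TASK_TYPE_RULE))  # -> ['result']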

        Whether you start with Flare, as you suggest, or with a wiki as Richard suggests, you are only going to get such structure as Flare or the wiki creates for their own purposes. You are not likely to create much automation potential or create topics that conform strictly to a type. If you push output to DITA, it is likely to be to a generic DITA topic type, since you won’t have enough structure in the source to create something that conforms to a specialization. You would get DITA output that would run through a DITA publishing chain, but you would not get much of the other DITA functionality, because you would not have the structure to support it.

        That isn’t to say that that tool chain, or the one Richard suggests, would not be the right choice for your organization — there is an overhead to structured content that has to be paid back in automation, or it provides no value. But a real structured authoring approach does not start with a tool: it starts with a data model, and then looks for a tool that can handle that data model.

    2. Richard Hamilton

      Mark,

      I like your distinction between system semantics and subject semantics. A few thoughts:

      1) Regarding DITA, you indirectly put your finger on something that has concerned me about DITA, which is its bias towards a particular style of topic-oriented writing. If that style falls out of fashion, it may be hard to move content to the “next thing.” As you say, any tool has its biases, but DITA’s are more “in-your-face” than most.

      2) I’m not sure why subject semantics couldn’t be handled in an industry-standard way, for example using a standard like RDF. You still have to model your content and do the hard work, but you should be able to characterize the result in a standard way.

      Finally, I found your article, The Design Implications of Tool Choices, very interesting, and I recommend it to everyone reading this thread.

      1. Mark Baker

        Richard, I agree entirely about DITA. It is very much founded in a particular view of information design — much more so than Frame or DocBook, for example. I do get the impression that many of the people who are implementing it are doing so to achieve a higher degree of reuse without any real idea of, or commitment to, the information design ideas on which the reuse mechanism itself is founded. I’m really not sure how manageable that is going to be in the long run.

        Your comment on subject semantics provokes me into a sneak preview of an upcoming post.

        RDF is a metadata representation standard, not a structured writing standard (such as DocBook or S1000D). If you want to apply a metadata label to a piece of content, then a standard for metadata representation, like RDF, would be a great choice for crafting a label specific to your content.

        Unlike, say, Dublin Core, which is just a standard set of metadata fields, RDF is a standard for creating your own metadata labels. As such it is an entirely reasonable choice for adding a subject-semantic label to your content objects.
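
        For instance, a subject-semantic label built with RDF might look like this (a made-up sketch; the rdflib library is an assumption, just one common way to handle RDF in Python):

            # Made-up sketch: attach a subject-specific metadata label
            # (a "semantic shell") to a content object, using RDF.
            from rdflib import Graph, Literal, Namespace, URIRef

            DOCS = Namespace("http://example.com/docs/")
            g = Graph()
            topic = URIRef("http://example.com/topics/pump-maintenance")
            g.add((topic, DOCS.subject, Literal("pump maintenance")))
            g.add((topic, DOCS.audience, Literal("field technicians")))
            print(g.serialize(format="turtle"))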

        However, there is more to creating application/system independence than creating a subject-semantic label. There are two ways you can add semantics to content (this is the preview part). You can give your content a semantic shell (that is, attach a metadata label to it the way Campbell’s attaches a label to a can of soup), or you can give it semantic bones (that is, create a detailed structure within the content so that you can apply metadata labels down to the level of an individual word).

        If you want to avoid application/system semantics in your content, you have to apply subject metadata down to the word level, since a system or application would insert its own semantics into the content at that level (a link or cross reference, for instance).

        Thus achieving true system independence requires giving your content semantic bones, and, to be effective, those bones are going to need to be more specific to your particular subject matter than a standard format will typically capture.
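
        To make the shell-versus-bones contrast concrete, here is a made-up Python sketch: the first topic carries only an outer label, while the second is structured down to the word, so a downstream system can act on individual terms without baking its own semantics into the source.

            # Semantic shell: one metadata label around opaque content,
            # like the label on a can of soup.
            shell = {
                "subject": "pump maintenance",
                "body": "Close the intake valve before removing the housing.",
            }

            # Semantic bones: structure inside the content, down to the word
            # level, carrying subject (not system) semantics on each piece.
            bones = {
                "subject": "pump maintenance",
                "body": [
                    ("text", "Close the "),
                    ("part", "intake valve"),
                    ("text", " before removing the "),
                    ("part", "housing"),
                    ("text", "."),
                ],
            }

            # A system can now decide for itself how to treat each part name:
            # link it, index it, or leave it alone.
            parts = [value for kind, value in bones["body"] if kind == "part"]
            print(parts)  # -> ['intake valve', 'housing']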
