What Have Been My Most Successful Experiences in Connecting Help Material with Users?
In an earlier post (Do We Need a New Approach to Help?), I surfaced concerns about the approach to help material in general and asserted that despite 50 years of innovation, most users still have the same reaction to help: they dislike it and find it a chore.
Lots of commenters agreed that we need to create more engaging user experiences. Laura Palmer added that converging instructional design with information design can be a useful approach. Mark Baker noted that perhaps by its very nature, help material will be painful to users because it requires realigning mental models, and sense-making is often achieved through trial and error.
How do I follow up on a post like my previous one? It's always easier to point out problems than to provide solutions. But rather than beginning deductively, such as by embracing a certain learning theory and applying it to help material, I thought I'd try a different, more inductive approach. I'll start with the following question: What have been my most successful experiences in connecting help material with users? Then I'll try to move toward a conclusion based on trends in those experiences.
Direct Observations of Users
Whenever I've observed users interacting with my help material, the experience has been eye-opening. At my previous organization, we had a usability lab, and I got to see users interacting with help firsthand. I wrote about the experience here: Help Material Gets Skipped, Videos Get Watched. What I learned is that beginning users tend to skip reading and instead watch videos. For advanced users, it's pretty much the opposite: videos get skipped, help material gets searched and skimmed.
Of course, my sample size was pretty small -- maybe half a dozen users. So it's hard to generalize a trend from the sample.
Another time I got to observe a user in a more traditional setting. I printed out a 20-page guide for her to follow and then simply watched her work through it. During the observation, she was constantly interrupted. She often put her finger on a step to remember where she was, but then someone would call or stop by her desk (she was an admin assistant), and she would lose her place. As she skimmed through the material, she used screenshots as signposts, glancing at them to make sure she was on the right track.
My takeaway there: Screenshots in help material both guide and reassure users that they're on the right track. But again, my sample size was really small: just one user. (See Best Tech Writing Tip Ever: Watch a User Try to Follow Your Instructions.)
One time I provided in-class training for a product I documented. In the training, I would give users a series of tasks to do and then observe their progress. They always struggled with the tasks, and the sessions became so revealing that we actually made the developers and interaction designers attend them so that they too could watch users blunder through their seemingly simple interfaces.
I learned that users often don't do things the way we plan. They find alternative methods that we didn't account for. They also "misinterpret" terms, "misread" labels, and struggle with things that otherwise tend to be "obvious" to the project team.
My takeaway: When users are forced to act rather than just absorb help material, we see a wide discrepancy in what we thought they understood.
Another observation of a user interacting with help occurred in my own home. I came home one day to find my wife irate at the computer. She'd just ordered $40 worth of Amazon books and shipped them to an old address in another state. She tried to find out how to update the address and cancel the order, and she nearly threw her mouse at the screen in frustration.
In this situation, did she consult the help? No. She searched the web repeatedly, looked for support numbers, and kept throwing her hands up in the air and swearing. (See Emotional States of Computer Users in Times of Frustration.)
My takeaway: User anger is real, and help should address the situations that users actually face, rather than the vanilla use-case scenarios the software was designed around. For these real scenarios, users trust Google more than the product help.
Indirect Observations of Users
Admittedly, in thinking about how I've interacted with users in the past, I realize that I haven't interacted with users enough to draw upon a lot of experiences.
When people design websites, they often rely heavily on user observation to determine the designs and workflows that work best. I've heard many times the same idea applied to tech comm -- the need to observe and interact with users (see this presentation from Joe Sokohl, for example).
But as a technical writer, you have to go out of your way to interact directly with users. Unless you have a real desire to see help from your users' point of view, face-to-face interactions with them don't happen regularly.
However, tech writers do interact with users indirectly all the time. These interactions happen not face-to-face but through support center tickets, forum posts, e-mails, and other feedback. I originally didn't consider these interactions to be in the same category as a usability person observing someone using help, but I think the outcome might be just as useful, if not more so, because these interactions tell us the kind of information users are looking for, not just the way they look for it. (See 10 Ways to Gather Feedback from Users.)
In my previous job, I interacted heavily with users in forums and over email. The amount of feedback that poured in was more than any single person could handle. But drinking from that firehose taught me a great deal about real user questions and pain points. Interacting in the forum fueled the questions I addressed in my help material. It allowed me to address real user questions, not just imagined ones.
Probably 75% of the questions involved bugs and other problems beyond the scope of help. For these issues, I'm not sure users resorted to help at all, because almost no help explains things like, "When the system database goes down, you see error messages all over the place."
Another time, I was actually the support person for a product I was documenting. There were only about 200 users, but the application was complicated enough that the department had a dedicated support person (me).
Again, the feedback I received from support made its way into the help. Many times I responded to questions by pointing people to new help articles I'd written specifically to address their question. I was also able to relay that feedback to the project team in helpful ways. (See my essay titled From Overlooked to Center Stage.)
Whenever I've interacted with users in these settings (forums, support, email, etc.), it has amazed me to see how many questions I could never have anticipated before the product's release. The stream of incoming information is endless, scattered, sometimes small and sometimes big. The questions touch on nearly every category of feedback -- the functionality, the workflow, the interface, the data, the lack of features, the need to handle a specific scenario, bugs, wish lists, corrupt data, errors, and more.
My takeaway: Documentation is not a closed process. It's an open process that never ends. (See A Reverse Approach to Help Authoring for more on this topic.)
My Own Experiences
In my own experience as a user, I remember vividly connecting with some tutorials on Visio. The tutorials briefly explained a principle and then included some instructions for me to follow. Through the activities, I really felt like I was learning. (See A Perfect Model for Online Tutorials.)
But in those situations, I was trying to learn a product from scratch. Other times, when I'm more familiar with the product, such as with WordPress, I make my way by searching Google for specific answers here and there: how do you change the header on this theme, what does RES mean in stylesheets, how do you create a sprite, and so on.
I'm convinced that there are at least two modes of learning: beginning and advanced. When I want to learn a product or system from scratch, I may not have a specific question; I just want to learn how it works. In these scenarios, I prefer more sequential learning. I like activities to try out. I like to watch videos.
But when I'm an advanced user of a product, I usually only consult help when I have a specific question. In these situations, I quickly skim for the answer, and I prefer text. I won't read much at all. I'll jump down the page looking for code samples or other immediately visible answers.
What Does It All Mean?
I hope that my meandering through these various experiences has been without agenda. I'd like to find some massively insightful conclusion, but instead I see lots of little insights here and there. I'm hoping readers will add their own answers to the question: What have you done that connects positively with users?
Despite the lack of a clear trend, I'm leaning toward the following conclusion: the most important principle in connecting with users is to answer the actual questions users have. You only find out what those questions are by interacting with users.
Even if you have direct access to observe users, observation won't reveal the kinds of information users are looking for in the same way that forums, support tickets, and incoming e-mail will.
By addressing their questions, you can help users resolve their problems and have better help experiences.
The problem is that users often have such a variety of questions that it's nearly impossible to answer them all. Doing so would convert help material into a knowledge base whose only sane entrance is search (which might not be a bad thing).
In the next post, I'll address the question of scope with more depth. How do you answer the million different questions users have so they can find their answers in your help material?