Follow-up to 4 Technical Writing Tests Post

by Tom Johnson on Nov 12, 2019
categories: technical-writing

It's been a while since I've written a post that received as many negative comments as 4 Technical Writing Tests to Screen Candidates. Although many people liked the tests and found them interesting or fun, quite a few reacted negatively. As a result, I'm scrapping the idea of multiple-choice tests as a way to filter candidates.

Sample comments

Here are comments from people who were turned off by the tests:

… Some questions seem writing environment-specific or opinionated… I did terrible on the tests; will probably not be applying to Amazon anytime soon.

… if I was given a timed grammar test as part of an application, I would immediately rescind my application. It is the one thing I deeply dislike about being a Technical Writer. I realize that this is a controversial opinion. I have rescinded my application in the past when during interviews I got into debates about the usage of terms in documentation. That is not the kind of job I want, and I think it takes away from what we are trying to accomplish. If the documentation suits the user’s needs, does it matter if it’s consistent? In some ways, yes. But what is more important?

… I’ve never been a fan of ‘tests’ for hiring purposes, especially in technical writing. I feel the same about developer hiring. They give you the wrong data, if they give you data at all. Personally, whenever I’m asked to perform a ‘test’ as part of applying for a job, I stop the process right there and decline, 100% of the time.

… Personally, any company that asks for “tests” is a red flag for me. Come on…you know in the first few seconds of meeting a candidate if they are a good fit! You don’t need a test to tell you this…if you’ve truly got the expertise you think you have then you should be able to size them up in short order…no test will tell you this.

… And what about those of us who have certification? I’ve proven over and over again I’m an excellent writer…17 years worth of writing… you’ll lose out on top quality candidates if you blindly require tests.

… Thanks for the heads up. You’ve helped me to cross Amazon off my list in future job postings.

And regarding the language section, some wrote:

Some questions were really cool, though a few were irrelevant, I would say, or at least specific to the project

I am, admittedly, a horrible grammarian. That said, I think some of the grammar questions are actually specific to the style guide you use.

Not everyone disliked the questions; these are just the comments from people who explicitly said they didn’t like this kind of test. Even so, they were enough to make me think twice about multiple-choice tests.

Follow-up thoughts

I wanted to make a few general comments as a follow-up. First, these tests were an experiment, not something that is used at Amazon for hiring. I am trying to explore better ways of filtering candidates beyond the writing exercises and portfolio submissions already in place. The goal was to have a quick filter between the initial stack of resumes and the subsequent writing exercises, phone screens, on-sites, and other evaluations.

I initially made these test questions open-ended, but to reduce the time required to analyze responses, I converted them into multiple choice, which is actually quite challenging. Writing plausible but incorrect options almost by definition creates controversy, because the distractors have to be close enough to the right answer to be tempting. In some cases their plausibility is genuinely defensible, and people get angry when they’re marked wrong on an answer that might be right under a different scenario, style guide, or company. On the flip side, if you make all the questions obvious and straightforward, it isn’t much of a test.

I think many more people would welcome open-ended questions, in part because they don’t suggest that there is a right or wrong answer. That mode of thought aligns better with the humanities, and I would gladly accept and endorse any well-reasoned argument even if I disagreed with it. In fact, I love exploring contrarian points of view.

But based on the negative reactions, I will likely scrap the idea of any multiple-choice test at all. In my experience, for every person who takes the time to leave a comment, there are a hundred more who have the same thought but don’t vocalize it. I often use my blog as a way of testing the waters with ideas, and anything that evokes such a visceral reaction is probably not a good idea. Imagining so many disgruntled people thinking of me as a test-monger, or assuming that Amazon’s hiring process involves such tests, didn’t sit well with me over the weekend.

However, I do want to note a few things. Just because someone doesn’t give you an official test during the hiring process doesn’t mean the interviewer isn’t constantly testing you. I think most people know this, and it’s one of the reasons interviewing is so frustrating. Interviewers make dozens of (often unconscious) evaluations and assessments with every question and response.

Even questions as innocent as “Tell me about yourself,” “Why are you interested in working here?” or “Why are you leaving your current job?” are loaded with expectations about right and wrong answers. Ramble on too long, come across as entitled, or mention some true but negative detail, and your answer could easily be interpreted as “wrong.” The interviewer might then conclude that you’re “not a good fit” (without much elaboration as to why).

In other words, there are dozens of little tests taking place during each exchange. How do you, as an interviewer, articulate the key questions and answers in a more formal, explicit, and unbiased way? That’s what I was trying to do with the multiple-choice test.

At least a written test exposes some semi-objective assessment of a candidate’s knowledge. Any test, however, will be hopelessly biased toward the industry, domain, and experience of the test creator. (I’m actually curious to see the STC Tech Comm Certification test now.) The risk in basing decisions on a test is that you miss out on candidates with experience in other areas (such as hardware or biosciences), not to mention the risk of demoralizing or alienating candidates with the test experience.

I dislike multiple-choice tests myself, especially in the school system. Critics frequently challenge the validity, fairness, and value of the SAT and ACT, which can be gamed by elites who can pay for the right preparation, and there’s little correlation between these scores and college performance. Still, test scores continue to be a huge factor in college admissions.

Tests on technical subjects with clear right/wrong answers are more commonly administered. For example, if you interview at Google, the first thing the interviewer might ask is to have you describe the output of a function. These kinds of tests place all the value on your technical knowledge, which can be assessed more objectively, though they seem misleading to me when the core task of the job is writing rather than coding.
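
To make that concrete, here’s a hypothetical sketch of the “describe the output” style of question (invented for illustration; I don’t know what any particular company actually asks):

```python
# A hypothetical "describe the output" interview question
# (invented for illustration, not an actual interview question).
# The candidate is asked what this script prints, and why.

def summarize(scores):
    """Return the count, total, and average of a list of scores."""
    total = sum(scores)
    count = len(scores)
    return count, total, total / count

count, total, average = summarize([80, 90, 100])
print(count, total, average)  # Output: 3 270 90.0
```

Answering it correctly proves you can trace code, but it says little about whether you can explain that function clearly to a reader, which is the actual job.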

The most informative tasks for evaluating candidates’ skills aren’t practical to assign: for example, asking a candidate to condense 20 pages into a two-page summary, observing how proactive the candidate is in interacting with engineers, or, on the flip side, having interviewers walk through all the steps in a candidate’s submitted writing sample.

So in a way, I’m back to the drawing board on my initial goal, which was to come up with a set of questions I could ask tech writing candidates during phone screens to gauge their awareness and qualifications. Stay tuned for a follow-up post on an adjusted approach.

Overall test results

I used Easy LMS to create and administer these tests. In part, I was also exploring how to create tests that I could then integrate into my API documentation course because so many people have asked me to include a certificate option for the course. (And if there’s a certificate, surely some kind of test must be involved, right?)

In case you’re interested in seeing the overall scoring results from Easy LMS, I printed out some PDFs below. It shows how many people answered questions correctly as a whole (there is no identifying participant information). However, note that I updated some of the confusing questions mid-way through this test, so the pass/fail rate can’t be assessed with any scientific measure. When I realized that some questions were too confusing or nuanced, I made them more straightforward. Most people failed every test, though.

Also, I originally planned to correlate pass/fail rates with each participant’s years of experience, but since I’m scrapping the tests, I’ll pass on that correlation too. Still, I’m making the info in these PDFs available in case anyone wants to use the questions as a starting point for some other test.
