Guest post: Generative AI, technical writing, and evolving thoughts on future horizons, by Jeremy Rosselot-Merritt

by Jeremy Rosselot-Merritt on Nov 6, 2025
categories: academics-and-practitioners ai technical-writing

In this thoughtful guest post, Jeremy Rosselot-Merritt, an assistant professor at James Madison University, wrestles with generative AI and its impact on the technical writing profession. Jeremy examines risks such as decisions being made by leaders who don't understand the variety and complexity of the tech writer role, or the perceived slow pace of output from human writers compared to the scale of output from LLMs. Overall, Jeremy argues that Gen AI is another point on a long timeline of tech writers adapting to evolving tools and strategies (possibly now context engineering), and he's confident tech writers will adapt once again and continue on as a profession.

I’ve been meaning to write something about what’s happening with generative artificial intelligence (GAI) in relation to technical writing for a few months, but for various reasons I didn’t, until now. Writing about a topic like this carries with it some intrinsic risk—mostly around the loadedness of the topic itself, but also in the fact that generative AI is such a new and evolving concept that it’s difficult to assert, let alone predict, many things surrounding its continued evolution. All of that is true with respect to technical writing and adjacent fields like usability and user experience, information design, and marketing communications, among others. As I unpack some of the present and speculate about the future, I’m doing so with the understanding that what I say is a work in progress.

Generative artificial intelligence is a relatively new technology that can “think” through questions and problems that we give it and often produce convincing (or at least convincing-sounding) answers based upon that process. The programs doing that thinking produce answers, known in AI vernacular as “inferences,” through a complex series of probability-based calculations shaped by human-supervised training on massive amounts of text and data. The systems that result are known as “language models,” or “large language models” (LLMs) when built at scale.
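To make that probability-based process concrete, here is a toy sketch (a deliberate simplification with invented numbers, not how production LLMs are implemented) of how a model might turn raw scores for candidate next words into a probability distribution and pick the most likely one:

```python
import math

def softmax(logits):
    """Convert raw model scores ("logits") into a probability distribution."""
    exps = [math.exp(x - max(logits)) for x in logits]  # subtract max for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores a model might assign to candidate next words
# after a prompt like "The quick brown fox" (illustrative numbers only).
candidates = ["jumps", "runs", "sleeps"]
logits = [3.2, 1.1, 0.3]

probs = softmax(logits)                 # probabilities summing to 1.0
best = candidates[probs.index(max(probs))]
print(best)  # "jumps" has the highest probability
```

Real models repeat a step like this over tens of thousands of vocabulary tokens, one token at a time, and often sample from the distribution rather than always taking the top choice.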

As it relates to technical writing, I’m going to give some perspective on these questions: What is the relationship between GAI and technical writing today? How can and do these technologies—particularly large language models (LLMs) like ChatGPT and Claude—affect technical writing as a field, and what might the future look like?

Check out this NotebookLM video if you’d like an interactive summary of this post.

Where is this conversation now?

The conversation now really depends on how you look at it. For a technical writer, there’s an undeniable existential question: How is ChatGPT [or another LLM] going to affect my job?

For someone in middle management or executive leadership, the question is often about quality—How can we use these tools to improve our documentation and communications?—and efficiency—Can these tools make us leaner (implying, of course, cost savings) and nimbler?

Then there’s the perspective of academics, who will periodically investigate the questions of practitioners and managers yet also have to consider their own career needs around research and curriculum development. Those latter considerations can lead to “zoomed in” questions such as, How can I study GAI as a potential “stand-in” for human agents in usability testing? or, even more granularly, What are particular linguistic qualities of LLM-generated feedback on emails at the word, phrase, and sentence level? These questions, to be fair, may not address system-level issues of workplace application; but they are interesting and important in other ways.

Since I’m a technical writer by training, and since the existential questions about GAI affect practitioners most directly, I’m going to answer the first question I presented above: How is GAI going to affect my job?

First, we—those of us in the field of technical writing—don’t really know the full answer to this question. Not yet. And it could take years for the “full answer” to become apparent. At the same time, it is the question that I hear from students most often, and it’s one of the most common questions that I hear when I talk to people in the industry.

I’m going to flesh out some ideas relevant to this question in the sections to follow, but if I’m giving a tl;dr version of my current response, here’s what it would be:

Most of the evidence today, anecdotally and in practice, suggests that technical writing as a field will remain viable. At the same time, it is clear that GAI will directly impact the work of technical writers. It is also clear that trying to ignore or downplay the presence of GAI (even with the best of intentions) will not change that fact. The best, most evidence-informed suggestion I can make for technical writers is to learn more about GAI, how you can incorporate it into a work process that is still distinctly human, how you can effectively advocate for those adjustments with influencers and decision makers, and how you can center yourself as someone who has a unique knowledge-based skillset and a highly evolved understanding of context.

Let’s unpack this a bit more in terms of the field’s vulnerabilities, strengths, and future possibilities—and as we do that unpacking, let’s give that tl;dr version of my answer some nuance that will be helpful as we look toward the future of our field.

What are technical writing’s vulnerabilities in this context?

Our field has many strengths (which I’ll discuss shortly), but it also has some vulnerabilities. Those vulnerabilities create some baseline challenges that we’ve reckoned with for years, long before GAI came onto the scene. It’s important to name them.

Technical writing is a “text-heavy” field

One of the most obvious vulnerabilities is that we are working in a field that involves a lot of text. Regardless of the deliverables you work with, you’re probably writing, editing, formatting, or posting them. And as it turns out, language models like ChatGPT are designed to generate text—lots of it, sometimes—from a single prompt. In many ways, it’s humanly impossible to keep up simply on the level of text production. Text, however, is only part of the story of technical writing—something I’ll return to when I discuss technical writing’s strengths.

Technical writing is not always well understood outside the field

This is a problem that predates GAI, and it’s been a challenge for decades. A lot of people outside technical writing—even the engineers, programmers, accountants, QA specialists, and managers we work with—don’t fully understand what it is that we do. That’s true of the deliverables (“Don’t you all mainly just write manuals?”), the tools we use (“I don’t think I’ve heard of [InDesign, MadCap Flare, Doxygen, etc.], and even if I had, I couldn’t tell you what they really do”), and the processes (“You actually interview subject matter experts to generate content to refine? And then you test that content with actual users?”). All of this compounds the challenge that GAI creates.

Models of technical writing vary significantly among companies and organizations

Here we have a real “blessing and a curse” scenario that ties into another aspect of technical communication I’ll address later in this column: technical communication’s presence across many different economic categories—for-profit, non-profit, public sector, and cross-sector subcontractor/agency work. This multi-categorical nature of technical writing means that one person in our field may write documentation for cloud software developers, while another person may interview community members as part of a report on citizen use cases for a public park. Just as technical writing spans multiple sectors, the models of technical writing (how it is implemented in different organizations) vary widely, too, in terms of:

  • Departmental placements. Technical writers may work in a number of differently named (and, just as important, differently focused) departments. For example, in my own career, I’ve worked in departments ranging from Technical Publications in a pneumatic tools manufacturer to Research and Development in a metallurgical processing software and equipment supplier, from Communications in a health benefits company to Corporate Marketing, Project Services, and the Regulated Industries business group at a boutique engineering services firm. At the lattermost company, I had no fewer than six supervisors and shifted among three different departments over a four-year tenure.
  • Reporting structures. Starting out, early career technical writers often think they’ll work alongside other technical writers and be supervised by people with skill sets related to technical writing. As it turns out, both scenarios are possible, but neither is “standardized.” Thinking back again on my own career, I started working for a technical publications manager; she had a background in professional writing. At my next job, I worked for the compliance manager of a health benefits firm; at the next one, I worked for the business development manager, then one of the company’s owners (it was a small firm at the time), the project services manager, and finally the manager of regulated industries. In my last industry job before going back for my PhD, I reported to the vice president of research and development. These folks had degrees in fields ranging from English to electrical engineering and paralegal studies. A few had no college degree at all, had moved through a number of different roles themselves, and were excellent supervisors by and large. But the reporting structures in each company were vastly different in many cases, and the types of roles I reported to were quite variable. This is not unusual, but the implications for a technical writer’s experience are real.
  • Distribution of technical communication labor. How technical writers work together—if they work together at all—really varies. Some technical writers work together, side by side, in the same department for the same company. That’s common, and that may be how people relatively new to the field may envision things sometimes. It’s also common, however, that technical writers work apart in the same organization. For example, much like other knowledge workers (like data analysts) and creatives (like graphic designers), a technical writer is often dedicated to a specific workgroup or department—the B2B software team, for example. Another technical writer in the same company might work with a different workgroup—the B2C software team, perhaps. In practice, there are benefits and drawbacks to both centralized and decentralized models; but for the technical writer, the workplace dynamics can be quite different.
  • The “lone technical writer” challenge. This brings up another significant point, particularly in smaller companies or companies where documentation is a newer or niche offering. There are plenty of organizations where the technical writer on staff is the only technical writer that organization has. Such scenarios often compound the challenges I discussed in this section. For instance, in companies where technical writing is not well understood or valued, employing more than one writer at least allows for shared, mutual understanding, even if the writers are in different workgroups. When such a company employs only a single technical writer, that mutuality goes away.

What are technical writing’s strengths?

At the same time, even with those vulnerabilities, we still have significant strengths as a field. In my view, those of us in technical writing don’t always fully account for at least one of them, which makes it all the more important to reflect upon.

Technical writing is not “just” about text

This is probably one of the most important ironies of technical writing: what we do in this field is far more diversified than simply “wordsmithing” and “cleaning up paragraphs.” Though it flies in the face of some pop culture portrayals (think Tina the Tech Writer from Dilbert), this is something I point out to a variety of stakeholder groups: industry professionals outside our field, students in courses I teach in the major, and academic colleagues who teach and research in the field. This is one assertion where academic research and workplace realities actually intersect, as technical writers frequently:

  • Create rich, often elaborate graphics for a variety of different purposes
  • Design complete training courses for HR and functional departments like sales, engineering, R&D, and so on
  • Manage projects
  • Learn more about particular nuances of a product or service they’re documenting than some of the subject matter experts know
  • Know as much about a company’s customers or external stakeholders (such as vendors) as anyone else within the company
  • Apply (knowingly or not) principles of organizational culture across different departments and stakeholder groups, serving as a knowledge linchpin within an organization
  • Possess a unique set of skills across domains that is notably difficult to duplicate, particularly for the amount of money that technical writers are often paid (decently, but not always at the rate of engineers, programmers, and some other roles).

Simply put, very few fields can claim this kind of breadth and depth.

Technical writing is for humans

Many academics make this point, and they’re right about that: technical writing is human-focused in terms of both its products (web content, reports, instructions, social media posts) and in terms of many of the processes that technical writers use to create those products (user-testing, subject matter expert interviews, customer satisfaction surveys, and so on). Although GAI does have the ability to “guess” at human responses to particular events and stimuli, the technology cannot substitute for input from actual humans. (Some academic research is exploring whether GAI can provide “analog input” similar to what a human could, but this is still exploratory and not yet conclusive.)

Technical writing spans multiple economic sectors

This point is similar in principle to the skill set diversity discussed earlier. Technical writers work across a vast array of economic sectors—what I call focus areas in courses I teach. What’s particularly interesting about this point is that (a) the full variety of economic sectors where technical writers can add value is still not fully tapped, in my view, and (b) the variety that does exist is at times surprisingly underappreciated, even by practicing tech writers themselves.

In the private sector, the contributions of technical writers are fairly well known. For example, many employees of software firms are aware of the fact that technical writers are often hired to help document code and create end user instructions. In the public sector and in non-profits, that awareness is quite variable. For example, who would have guessed that Harris County, Texas (where Houston is located) would have advertised an opening for a technical writer in 2024? My own dissertation research several years ago demonstrated significant variation in not only what people thought technical writers “do,” but also people’s opinions on where (what kinds of organizations) technical writers could work.

The result is this double-sided irony: Yes, the fact that technical writing spans multiple economic sectors is somewhat underappreciated. However, the fact that this is the case is a significant boon to the field’s economic potential.

Technical writing has weathered other storms before

If we accept the argument that technical writing as a field began somewhere around the 1950s (a much bigger argument, but let’s go with that for the sake of illustration), we can say that the field has been through a lot over the course of 70+ years. One of the great historians of technical communication, Dr. Katherine Durack, documented much of the field’s history in her 2003 article “From the moon to the microchip: 50 years of Technical Communication.” Technical communication survived the advent of word processors with spelling and grammar checkers in the late 1970s; computer programs that analyzed text for readability and suggested revisions in the early 80s; the advent of the World Wide Web in 1989 and the need to learn markup languages in the 90s. Those are just some of the early technology shifts that Durack documented.

Entering the 21st Century, we dealt with tech changes, too—the start of DITA and the rise of early social media platforms like Myspace, Facebook, and LinkedIn in the 2000s. We rode the dubious waves of significant economic upheaval: the dot-com bust of the early 2000s, the ripples of the Great Recession in 2008, and more recently the uncertainties of the COVID-19 pandemic. All of these changes (among others) required us to adapt, to be strategic. Word processors didn’t erase the need for people to typeset or think about layout. Spellcheck wasn’t an editorial panacea. The move from print to digital and the growth of new documentation tools and formats led to more demand for tech writers, not less. While GAI is a different animal than many of those technologies, the principle of adaptation in technical writing is not new.

Tools have changed

Just as technologies have changed, the tools that we use to do our jobs have changed and will continue to. For example, technical writers do create documentation using Microsoft Word, but the exclusive or primary use of a word processor or desktop publishing software for that purpose is more limited than was the case two decades ago. Instead, technical writers have had to learn new tools for structured authoring, single- and multi-sourcing, collaboration, and publication or posting of content. In many cases—as with structured authoring tools—technical writers have had to “pick up” an entirely new skill set, such as authoring in an XML-based standard like DITA for the purpose of creating multimodal documentation.
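For readers who haven’t worked in structured authoring, here is a minimal sketch of what a DITA task topic looks like (the element names come from the DITA standard; the content itself is invented for illustration):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE task PUBLIC "-//OASIS//DTD DITA Task//EN" "task.dtd">
<task id="replace_filter">
  <title>Replacing the air filter</title>
  <taskbody>
    <prereq>Power off the unit before servicing.</prereq>
    <steps>
      <step><cmd>Open the access panel.</cmd></step>
      <step><cmd>Slide the old filter out and insert the new one.</cmd></step>
    </steps>
  </taskbody>
</task>
```

Because content like this is stored as structured XML rather than formatted pages, the same topic can be single-sourced into HTML, PDF, and other outputs—the multimodal payoff that motivates the learning curve.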

Both then and now, we’ve seen a litany of challenges that have stressed or significantly changed the field’s broader application. For example, primarily localized documentation maintained “on site” at a particular company has given way to cloud-based documentation on various internet platforms. This requires a significant change in document control philosophy as well as how we look at documentation.

Crowdsourced documentation has become more common as well, particularly over the last 20 or 25 years. User communities can contribute real-time inputs on products and how to use and fix them, just as developers and end users now have the ability to document software features and bugs almost in real time.

Still, even with these and other developments, technical writers are being hired. What is the common denominator here? It’s adaptation and willingness to learn—key areas where technical writers, by virtue of career necessity, have excelled for decades. Again, while we’re clearly in the dawn of the GAI era, I strongly suspect that a similar pattern will bear out in the future.

What do we know about the future?

We obviously don’t know “everything” about the future, and we have to be cautious about speculating—but we can make some assertions with pretty good confidence. Among them:

We know that GAI will continue to impact technical writing as a field

This is something that both industry professionals and academics can generally agree upon. I’m not aware of any circle of technical communication where this realization is lost. What that impact may look like remains ambiguous (it’s questionable whether we’ve reached “peak AI” yet), but this is a pretty standard point of agreement.

We know that technical writing as a field of practice has always required adaptation to disruption

Of the points I’ve covered so far, this is easily one of the most salient, and it’s timely. Dr. Nupoor Ranade, a Carnegie Mellon researcher who studies the relationship between AI and technical communication, reminded me of this fact when she recently spoke in a class I am teaching. Technical communication, she pointed out, has survived major shifts in and since the 20th Century: the move toward software automation (think word processing layout tools that revolutionized publishing and, in many ways, content generation) of the 1980s, the explosion of broad information availability with the internet in the 1990s, corporate outsourcing of jobs (including technical writing) in the 2000s, and the increased functionality of natural language processing tools like Google Translate in the 2010s. Amid all these shifts, technical communicators found ways not only to adapt, but also to assert and even expand their value.

Technical communication’s value spans multiple economic sectors

This is a big one, and it’s a point that even those of us in the field sometimes take for granted or don’t fully appreciate. Experience and research both support the applicability of technical communication across economic sectors—not one, or two, or even five, but easily a dozen or more. How so? Technical writers thrive in software companies—and not just one type: companies offering well-established desktop commercial software; open source, community-driven software projects; and established and startup companies offering all kinds of services in the cloud (think SaaS). A variety of communication needs drive industry demand for technical writers, from user manuals for heavy manufacturing companies to software testing protocols in boutique engineering services firms.

We have technical writers working in the public sector, too, as communications specialists for local, state, and national government. Though to a somewhat lesser degree than in the for-profit space, we also see them working in non-profits, sometimes with different responsibilities and titles (development coordinator with significant stakeholder communications, for example). In many of these roles, technical writers take on work that makes use of highly diverse skill sets in user experience, information design, project management, and an exceptional ability to bridge gaps among disciplines and departments.

What can’t we predict (currently)?

There’s a lot we can’t easily predict right now, especially as LLMs like ChatGPT and Claude are still relatively new and rapidly changing. However, I think that being aware of what we can’t yet say with certainty will, in a very real sense, help us prepare for changes as they happen over time.

We don’t know how GAI as a technology will develop

When ChatGPT was released to the public in November 2022, it introduced what to many was a new way of interacting with technology: how do we wrap our heads around the idea of a chatbot that can (with patience and awareness of evolving imperfection) write both poetry and Python scripts, that’s trained in principles of human psychology, that can (with effective prompting and specific context) help you install a clothes dryer in real time? Almost as soon as it was released, industry professionals in seemingly every field started talking about it and posting about it on LinkedIn. Academics, for their part, went wild with research and speculation alike, often concerned about implications for their own work. While humans continue advancing plausible ideas about its future, the truth is we can’t predict every nuance of how this technology will develop, nor of how it will necessarily impact human society long term. That is a challenge we have to acknowledge for our own good.

What’s more, we don’t know what may exist beyond GAI. See, for example, artificial general intelligence (AGI), a still-theoretical form of AI that could match or surpass human capability across a broad range of domains. It could become a reality in the future (likely the distant future, though that’s a much larger discussion).

We don’t know how social and economic circumstances may change

It’s hard to know how social and economic conditions will change in the next few years, and even harder over the next few decades. Recessions happen, leaders change, and some fields that are viable at certain times may become less viable (economically speaking) at others. In some ways, this unpredictability supersedes GAI as a point of concern, simply because questions about which jobs GAI may change or replace naturally come after questions about whether those jobs are economically viable at all. This leads us to two questions that need to be answered about a particular job:

  1. How viable is this type of work through different social and economic cycles? And then,
  2. To what extent could GAI affect it?

Let’s take two kinds of jobs as examples for a minute: health care workers (those who have direct patient contact) and brand managers (people who manage branding and marketing for a company). These are obviously two very different positions, and their viability in different economic environments is often very different, too. When economic downturns happen, healthcare is still needed and sought after simply because people get sick regardless of “how the economy is doing,” so doctors and nurses are still in demand. Marketing and branding budgets, on the other hand, often get cut because companies simply can’t afford to maintain them as they do during periods of economic prosperity; therefore, brand managers may get laid off or find their positions consolidated. The “GAI question,” while still important, is somewhat less relevant in that particular case. GAI can only really change or “threaten” human employment in a particular job when that job is economically viable.

Where does technical writing exist in this spectrum? Based on my experience and on economic data, here’s how I would answer the two questions above. On question 1: Economic downturns in general do affect the viability of technical writing as a field across industries. In other words, if a recession were to start tomorrow, it’s very likely that technical writers would be cut at a number of companies. However, the effect is more muted than it would be for more cyclically prone fields, such as marketing and branding or hospitality and tourism. Technical writing would still have viability, and documentation would still be needed for software, medical devices, financial products, and so on. On question 2: When it comes to the GAI question, again, we’re still figuring out what that looks like over the longer term—but the likelihood of complete substitution of human writers with GAI seems low because the need for human involvement in managing and nuancing complex information is very much still present.

What can technical writers do to make themselves future-compatible?

This is a question that we all are asking, especially recently, and I think there are at least three things we can do to help with that, broadly speaking.

Learn about what GAI is and how to use it (and not just language models)

I find broad agreement on this point, particularly regarding language models like ChatGPT, among colleagues in education and industry. A student in one of my recent classes mentioned an internship where they had been directed to use a language model to help ensure that questions their team was writing were in a style that a professional audience (in this case, physicians) could understand. It seems clear that language models will not only continue to be a part of the technical writer’s toolkit, but will continue to expand in that role, in many cases at the employer’s behest. As a result, I think it’s crucial for tech writers to learn about prompt engineering (one of the biggest areas), language model terminology (enough to understand how the technology “works,” anyway), which major players are in the market and what the strengths of each are, and how the tech can be used effectively and ethically. In an ideal situation, tech writers can become “ambassadors” of the technology by developing organizational guidelines for its use, educating others through workshops and training materials, and so on.
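As one concrete illustration of the kind of prompt engineering mentioned above, consider the difference between a one-line request and a prompt with an explicit role, audience, constraints, and output format. The sketch below uses a hypothetical helper function (my own invention, not any particular tool’s API) just to make that structure visible:

```python
def build_prompt(role, audience, task, constraints, output_format):
    """Assemble a structured prompt from labeled parts, following a common
    prompt-engineering pattern: role + audience + task + constraints + format."""
    sections = [
        f"Role: {role}",
        f"Audience: {audience}",
        f"Task: {task}",
        "Constraints:\n" + "\n".join(f"- {c}" for c in constraints),
        f"Output format: {output_format}",
    ]
    return "\n\n".join(sections)

# Example modeled loosely on the internship scenario described above
# (all details invented for illustration).
prompt = build_prompt(
    role="You are a senior technical editor.",
    audience="Physicians reviewing patient-intake questions.",
    task="Rewrite each question below in plain professional language.",
    constraints=[
        "Keep each question under 20 words.",
        "Do not change the clinical meaning.",
    ],
    output_format="A numbered list, one rewritten question per line.",
)
print(prompt)
```

The assembled string would then be pasted into (or sent to) whatever model the organization uses; the value is in the labeled structure, which makes prompts easier to review, critique, and reuse than ad hoc one-liners.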

A second, easy-to-miss point is that language models are not the only “kind” of GAI that exists. Diffusion models like Stable Diffusion can be used to generate visual content on a local computer or in the cloud. AI-assisted tools trained to create and modify audio in specific ways can be used for a variety of tasks, such as removing background noise from an audio file (something that tools like Ultimate Vocal Remover are quite good at). As another variation in the audio space, the open source tool OpenAI Whisper can transcribe recorded speech quite effectively, an important capability in the tech writer’s repertoire when, for example, analyzing interview audio from user testing.

Understand GAI’s evolving limitations

GAI, while a potent tool, is not without limitations. Some are relatively well known, and some less so. One of the best-known limitations involves GAI errors (often called “hallucinations” in common discussion, a term I avoid given its real implications for human experience). These errors can involve fabricated or exaggerated details, and they extend to the document outputs that language models like ChatGPT create: letters, emails, and yes, documentation of the kinds that technical writers produce.

Imagine asking ChatGPT to write a consumer complaint letter and the output letter includes the wrong account number or identifies the issue as something with your wireless phone bill when it’s actually an issue with your natural gas billing. That’s annoying, and it could affect the clarity or credibility of your complaint. It could be even more embarrassing when you’re writing a recommendation for someone and you ask GAI for help (something for which I would advise caution in practice), only to find it included details that had nothing to do with that person. In documentation, the stakes can be even higher ethically and legally. For example, in a manual on welding torch use, a GAI output error can lead to unsafe operation or serious injury. In documentation that includes information on setting up financial transactions or providing sensitive personal information, using GAI to generate that content—especially without very close human oversight—is dubious at best and legally and ethically fraught at worst.

Another challenge is overconfidence: making an assertion that’s clearly wrong, or at the very least not completely correct, and asserting that incorrect statement as if it were fact-checked and unquestionable. This happens periodically, such as when the user prompts the AI chatbot with a question that calls for confidence: “All right, I need some no-nonsense straight talk here. Tell me how the tone of this email is likely to be received by my engineering colleague” or “Give me six well reasoned factors that led to [a certain company] choosing [a certain city] as its corporate headquarters.”

Language models such as ChatGPT, Microsoft Copilot, and Google Gemini have added image-generation capabilities over the last couple of years. This is key because standalone image-generation models, such as Stable Diffusion or Midjourney, often require additional technical expertise or different access pathways, making language models like ChatGPT a standard entry point for AI image generation for many users. While this is great for people who want to generate graphical content, graphics models are notorious for adding “extras” to the images they create. For example, if a user asked ChatGPT, “Generate a lifelike image of a Toronto Blue Jays fan sitting in front of a television, on the edge of their seat, excited about last night’s win over the Dodgers,” the result could be reasonably good—but it could also be disappointing. Toronto might be misspelled, the team logo may be off, the fan may be sitting at a bar instead of at home in front of the TV—any number of things could happen. This is partly why precise prompting for graphics models is especially important.

One really common challenge of GAI chatbots is their arguably well-intentioned tendency to rhetorically and emotionally accommodate the user to the point of sycophancy. You can find a number of memes and videos parodying that effect (like this one). The challenge here isn’t necessarily a complete lack of honesty—the chatbot may still signal to you that an idea you have isn’t “the best one”—but, rather, a lack of nuance. For example, just to test it, I once asked ChatGPT if Bryan Cranston (Walter White from the Breaking Bad series) had limited acting range. Even with the subjective nature of the question, almost anyone with knowledge of prestige television history would recognize that’s beyond a “spicy take”—ridiculous, even. Yet, to protect my dignity, ChatGPT still included a section about why someone “could” think of Cranston as having limited acting range.

Do these limitations make GAI a “no go” for technical writers? No, not necessarily. But they remind us that (a) language models, graphic models, and other forms of generative artificial intelligence are far from perfect and that (b) technical writers need to account for those imperfections and maintain a level of due diligence when using them for specific applications.

Make adaptations in the workplace—and not just those involving GAI

This one is really critical. My own research and professional experience—and consequently my teaching—emphasize the adaptations that technical communicators can and should make on a regular basis. This harkens back to my earlier point about how technical writers have successfully navigated economic and technological changes over time, essentially because we had to. To some, the types of adaptations I advocate may seem like “nice-to-haves” rather than “must-haves,” and in certain ways they can be. At the same time, I’ve found that key adaptations in areas like these help solidify our value in organizations:

  • Understanding workplace dynamics as early and often as we can—we need to learn about organizational culture, organizational norms (including how “AI-forward” an org is), personalities, etc., as much as we can, as early as we can
  • Building relationships and alliances across disciplines/departments—engineering, R&D, customer service, HR, even less “obvious” ones like Finance and Accounting
  • Embracing a variety of organizational contributions where needed—it’s common for tech writers to wear a number of different hats in an organization (project management, customer relationship management, and others), and wherever we can, we should embrace the value that people in our field can uniquely provide
  • Communicating professional identity to colleagues and stakeholders—doing so helps expand knowledge of what we as technical communicators do
  • Supporting professionalization efforts in the field—this is more of a field-wide principle, but it helps promote consistency in our professional identity and in how technical writers are trained and perceived by stakeholders with whom we work (this is a much larger issue and could be the topic of a much longer column, actually)

In sum, what does GAI have to do with the viability of technical communication?

The best answer to this question lies in two ideas that, together, reflect an important reality of technical communication today. One comes largely from academic theory; the other comes from an article by Fabrizio Ferri Benedetti.

First is the academic theory—one which says technical writers are “knowledge workers” in a post-industrial society who gather, interpret, curate, and make accessible the knowledge that users of various products and technologies need. Many academics have written about this idea, but the concept itself traces back to the “symbolic-analytic framework,” initially developed by economist Robert Reich and later formally applied to technical communication by Johndan Johnson-Eilola in his 1996 article “Relocating the value of work: Technical communication in a post-industrial age.” “Symbolic-analytic workers,” Dr. Johnson-Eilola wrote, “rely on skills in abstraction, experimentation, collaboration, and system thinking to work with information across a variety of disciplines and markets.” In essence, academics argued that technical writers are not mere “creators of text,” but rather highly skilled custodians of abstract knowledge.

The second idea is particularly appropriate given the widespread growth of generative AI language models, particularly ChatGPT, since 2022. In his recent article “AI must RTFM: Why technical writers are becoming context curators,” Ferri Benedetti makes an argument for the technical writer as context curator, which he defines as “a technical writer who is able to orchestrate and execute a content strategy around both human and AI needs, or even focused on AI alone.” Ferri Benedetti goes on to say, “Context is so much better than content (a much abused word that means little) because it’s tied to meaning. Context is situational, relevant, necessarily limited. AI needs context to shape its thoughts.” Here we have a framework that takes the concept of “knowledge worker” a step further. For decades, we’ve known that technical writers are excellent at curating knowledge and making it accessible. Now that GAI, particularly large language models, provides access to specific prompt-directed knowledge, the role of technical writers is shifting to providing the context around which that knowledge is situated—something GAI cannot do by itself—and therein lies the modern value proposition of technical communication.
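To make the context-curator idea concrete, here is a minimal sketch of what curation might look like in practice. Everything in it—the topic keys, the snippets, the prompt wording—is hypothetical and invented for illustration; the point is simply that a writer maintains small, human-verified context snippets and assembles only the relevant ones into what the model sees, rather than letting the model draw on unvetted content.

```python
# Hypothetical sketch: a technical writer curates short, human-verified
# context snippets, and only the relevant ones are assembled into the
# prompt an LLM receives. Topic keys and snippets are invented examples.

CURATED_CONTEXT = {
    "billing": "Invoices are issued on the 1st; disputes go to the billing team.",
    "torch_setup": "The torch regulator must read 5 psi before ignition.",
}

def build_prompt(question: str, topics: list[str]) -> str:
    """Assemble a prompt from writer-curated context relevant to the question."""
    selected = [CURATED_CONTEXT[t] for t in topics if t in CURATED_CONTEXT]
    context_block = "\n".join(f"- {snippet}" for snippet in selected)
    return (
        "Answer using ONLY the verified context below.\n"
        f"Context:\n{context_block}\n\n"
        f"Question: {question}\n"
    )

# Only the billing snippet is curated in; the welding note never reaches the model.
prompt = build_prompt("When are invoices issued?", ["billing"])
```

The curation happens in the selection step: the writer decides which verified knowledge is situationally relevant, which is exactly the judgment a model cannot supply for itself.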

In the section about technical writing’s strengths, I pointed out that technical writing is “not just about text,” followed by a list of critical tasks that technical writers perform on a regular basis that go well beyond writing and editing. In the 360-degree view of technical communication as a field, many of those tasks—such as learning the nuances of a product or service, understanding organizational culture, and learning about a company’s customers and external stakeholders—are an integral part of the knowledge economy in which technical writers thrive and which they’ve navigated in their day-to-day work for decades. The fact that technical writers excel in that knowledge economy is a testament to their mastery of workplace context. In my view, that’s true to a degree that very few professionals in other fields can claim, and certainly it’s true in ways that are often underrecognized outside our field.

As I mentioned earlier, one of the challenges that our field faces is that people who aren’t technical writers don’t always understand it. This is an ongoing challenge that practitioners and academics alike have recognized for well over 30 years. Where many folks outside our field think of technical writing as a “text-only” or “text-first” field, GAI compounds that perception by offering language models that can seemingly do much of the text generation themselves. I believe we can counter this perception by embracing our roles as context curators in a rapidly evolving knowledge economy, by advocating for our roles and educating others in industry on “what we do,” and (this part is a little less discussed in professional circles, but still needed) by working to bridge the gap between academic research and industry practice. In doing so, I don’t see us embarking on the next “Brave New World” of technical communication. What I do see is our field not only weathering the storm, but adapting to it and reinventing ourselves in ways that add significant value to organizations, just as we’ve done continually for some 70 years now.

About Jeremy Rosselot-Merritt

Jeremy Rosselot-Merritt is an assistant professor in the School of Writing, Rhetoric, and Technical Communication at James Madison University, where he teaches courses on technical communication in business, industry, and organizational contexts. In his research, he studies workplace communication, workplace dynamics (such as organizational climate), the application of technical communication across industries, and how professionals in different fields perceive technical communication. Jeremy was a practicing technical writer for 15 years in industries ranging from biopharmaceuticals to software-as-a-service (SaaS) and pneumatic tools manufacturing. He holds a PhD in Rhetoric and Scientific & Technical Communication (RSTC) from the University of Minnesota and a Master of Technical and Scientific Communication (MTSC) degree from Miami University (Ohio) and can be reached at [email protected].