Will tech writers survive AI? Perspectives from two professors, Nupoor Ranade and Jeremy Merritt

by Tom Johnson on Mar 21, 2026
categories: academics-and-practitioners ai jobs podcasts

In this podcast, I chat with two professors — Nupoor Ranade (Carnegie Mellon) and Jeremy Merritt (James Madison University) — about how AI is reshaping the technical writing profession from the academic side. We discuss dropping enrollments, misconceptions about what tech writers do, historical parallels to past disruptions, agentic AI and organizational restructuring, the cyborg model of human-machine collaboration, and how academics and practitioners can bridge the divide to solve real problems together.


Note: Most of these shownotes are AI-generated.

Topics covered in this podcast

  • Dropping enrollments in tech writing programs — Nupoor reports that some graduate programs have seen enrollment drop by more than half, as students question whether the profession will survive AI. PhD programs, however, are actually growing as students seek deeper research-oriented learning.

  • Misconceptions about what tech writers do — Jeremy notes that across 31 interviews, he received wildly different definitions of technical writing. The field has done a poor job of documenting and marketing what tech writers actually bring to the table, making it easy for decision-makers to assume AI can replace them.

  • Writing is only 20% of the job — Tom argues that most people think tech writers just write all day, but writing is a fraction of the role. The other 80% — gathering information, navigating organizational politics, understanding product ecosystems — is far harder to automate.

  • Historical parallels to past disruptions — Nupoor’s book project documents how the field has adapted to previous shifts: desktop publishing, the internet, wikis, Docs as Code, globalization. Each time, roles changed but the profession survived. AI may be different in degree but the pattern of adaptation holds.

  • Agentic AI and organizational restructuring — Nupoor envisions a future where every employee has AI agents working under them, and the organizational structure flips — with writers and customer-facing roles at the top, steering information flow that feeds into agents working under engineers.

  • The cyborg model of human-AI collaboration — Tom pushes back on the idea that agents will manage agents without humans. He argues the current reality is a cyborg model: iterative, back-and-forth human-AI interaction, much like how driverless cars still need human oversight for unpredictable real-world conditions.

  • The academic-practitioner divide — Jeremy and Nupoor discuss the gap between academic research (slow, theory-driven, paywalled) and practitioner needs (immediate, application-specific). With STC shutting down and Intercom gone, finding shared venues for collaboration has become harder.

  • Preparing students for an AI-centric workplace — Jeremy gives students the option to use AI in assignments, respecting their agency while asking them to document how and where they used it. Nupoor balances AI-required assignments, AI-free assignments, and critical discussions about AI ethics and privacy.

  • Writing With, Without, For, and Against AI — Jeremy highlights a colleague’s course at JMU with this title, noting how passionately students engage with the cultural, ethical, and rhetorical dimensions of AI — not just the functional “how to use it” aspect.

  • How practitioners and academics can collaborate — Both guests call for more meaningful partnerships: academics doing immersive workplace research, practitioners sharing real problems, and cross-disciplinary cohorts (writers, engineers, HCI researchers) tackling shared challenges with data-driven rigor rather than hot takes.

Narrative essay version of the conversation

If the podcast were an article, this is what it would read like.

The Profession That Keeps Almost Dying

Technical writing programs are losing students. Not all of them, and not everywhere, but enough that the pattern is hard to ignore. Some graduate programs have seen enrollment drop by more than half. Students look at the headlines — Amazon laying off writers, Snowflake cutting documentation teams, AI generating passable prose in seconds — and draw the obvious conclusion: why invest years and tuition in a profession that might not exist when they graduate?

This is the fear that runs underneath everything right now, and it’s not confined to students. Practitioners feel it every time a new AI capability drops. Academics feel it when they struggle to fill a class. And yet the historical record tells a surprisingly different story — one of a profession that has adapted, repeatedly, to disruptions that each seemed existential at the time.

Desktop publishing was supposed to eliminate the need for dedicated writers. The internet was going to make documentation obsolete. Wikis were going to crowdsource the whole enterprise — give every engineer a login and let the docs write themselves. Docs as Code promised a distributed model where contributors committed directly to the repository. Each time, the prediction was the same: we don’t need tech writers anymore. Each time, the profession evolved and survived.

But here’s where the historical analogy gets uncomfortable. Previous disruptions mostly replaced individual tasks — formatting, publishing, versioning. AI is different in kind. It doesn’t just automate a single step in the workflow; it can draft documentation, organize information, stress-test scenarios, and even generate code. It operates across the entire surface area of what a tech writer does. So the question isn’t whether this disruption is real — it clearly is — but whether the pattern of adaptation still holds.

There’s a strong argument that it does, and it hinges on a misconception that has plagued the field for decades: the belief that technical writers primarily write. In reality, writing might account for 20% of the job. The rest is information gathering — interviewing engineers, navigating organizational politics, understanding product ecosystems, making judgment calls about what to emphasize and what to bury. It’s knowing which partner team is frustrated, which API is about to be deprecated, which executive cares about which metric. None of that is automatable, because none of it is writing.

This misconception isn’t just an academic point. It has practical consequences. When decision-makers don’t understand what tech writers actually do, it’s easy to look at an AI that produces clean prose and conclude the role is redundant. The field has done a terrible job of documenting itself — ironic for a profession built on documentation. Ask 31 people to define technical writing and you’ll get 31 different answers.

Meanwhile, the technology is moving in a direction that may actually elevate the tech writer’s role rather than eliminate it. Agentic AI — systems where AI agents operate semi-autonomously within organizations — is reshaping how work gets structured. In this emerging model, every employee might have agents working under them, and the organizational hierarchy could flip. Writers and customer-facing roles end up near the top, because they’re closest to the people whose problems the organization exists to solve. They steer the information that feeds into engineering agents. They oversee the documentation that agents produce. The tech writer becomes less of a writer and more of an orchestrator — a curator of the knowledge systems that everything else depends on.

But this vision coexists with a more grounded reality. Right now, in practice, AI hasn’t replaced tech writers so much as augmented them. The current model is closer to a cyborg: human and machine working in continuous, iterative collaboration. You prompt, review, adjust, re-prompt. You catch the errors the model can’t see because you have context it doesn’t. You deal with the monkey wrenches — the build that didn’t work, the feature that got cherry-picked at the last minute, the engineer who mistagged a release. At least 70% of the work consists of one-off tasks that can’t be scripted into automated workflows. They’re unique, messy, and deeply contextual. The fantasy of fully autonomous documentation pipelines is, for now, exactly that — a fantasy that’s been “two years away” for the last decade, much like driverless cars.

Where does this leave the profession? Probably in the same place it’s always been: adapting. The specific shape of the adaptation is still forming. It likely involves tech writers who understand AI well enough to direct it, who can build and curate skills files and agent workflows, and who maintain ownership of the knowledge layer that AI amplifies. The humans who understand both the technology and the messy, political, contextual reality of how organizations actually work will be the ones who survive.

The enrollment numbers will probably recover. They usually do. But the students who come back won’t be training for the same job. They’ll be training for the one that emerges on the other side of this — whatever it turns out to be.


Transcript

Tom (00:03)
Welcome to another podcast. My name is Tom Johnson from I’d Rather Be Writing. And today we have a special focus. We’ve got two people from the academic world. Jeremy Rosselot-Merritt and Nupoor Ranade. They are joining me in this exploration of a lot of really pressing topics. We’re going to dig into generative AI, the academic context, all these questions

the students and faculty are asking as well as professionals in the field, working in the field, how all this intersects. So this is going to be a great sort of exploration from the academic side here to try to understand this topic. Let me just introduce the two guests today. Okay, so we’ll start with Dr. Jeremy Rosselot-Merritt.

You are an assistant professor at James Madison University in Virginia. You were previously at Carnegie Mellon and you may have read a couple of posts by Jeremy on my site. He recently wrote one called Generative AI, Technical Writing and Evolving Thoughts on Future Horizons. And then a few years earlier, he wrote another guest post titled, Why are technical writers often treated as such an unimportant part of a company? And that one

got a lot of attention and focus. So, you’ve probably seen his name and I really appreciate those guest posts. Now, Dr. Nupoor Ranade, she is an assistant professor of English at Carnegie Mellon University in Pennsylvania. Previously you were at George Mason, North Carolina State. And we actually met back in STC India in 2022, I believe.

Nupoor (01:55)
2015 actually.

Tom (01:56)
2015, gosh, 2022 was, let’s see, at some point we did a podcast. I interviewed you for a podcast. Okay.

Nupoor (02:02)
You made it. That was 2020-2021 around that time. Yeah.

Tom (02:07)
Yeah, and that was focused on users as producers of knowledge, how tech writer roles are changing. Now, Nupoor, you’re also working on a book, one about the evolution of the field, exploring the idea that this technological disruption has parallels with previous ones and how tech writers have sort of evolved and survived in the past. So.

Welcome both of you to this podcast. And I don’t know if you want to say hello or did I miss anything about kind of your backgrounds here?

Jeremy (02:43)
I feel like we’ve kind of come full circle, Tom. First of all, thank you for having us. Interestingly enough, and Nupoor knows this, I started really following her work closely when she did that original podcast with you a few years ago. And she and I hadn’t even met, unlike the two of you. And it wasn’t until our time overlapped

for a year or two at Carnegie Mellon that we actually did meet and got to have a lot of great discussions, and just as you and I have, Tom, and you and Nupoor, and that’s why I say I feel like this conversation is kind of coming full circle with the three of us and I’m really appreciative and excited that we get to spend this time together doing this.

Nupoor (03:29)
Thanks Tom for having us and Jeremy for inviting me to this. I understand how this full circle is working now, but I’m just really thrilled to be here.

Tom (03:38)
Yeah. I remember meeting you at that, that gosh, that STC summit. I’m or yeah, the STC summit in India. And I’m embarrassed that I didn’t get the year right. Cause it’s like an old man thing. I’m like, yeah, everything’s a blur in the past, but I remember you were so enthusiastic. Like you had just this eagerness about you that I was like, wow, she’s really into this. I couldn’t quite tell. And then to see you just kind of go into that academic world and really excel, it was pretty…

Nupoor (03:48)
No,

Tom (04:08)
Pretty impressive. I like that history. Now, I often think when we’re talking about AI and so on, that it’s sort of a practitioner problem. Like you hear about these layoffs, we hear about Amazon doing layoffs and Block doing layoffs and Snowflake doing layoffs, and it fills practitioners with a lot of fear.

Nupoor (04:13)
So.

Jeremy (04:31)
Yeah, I saw that.

Tom (04:36)
Right. People are like, man, am I next? But the academic world is also, you don’t hear about professors getting laid off due to AI. At least I don’t. Maybe, maybe that’s the thing. I don’t even know. I’m excited to hear what the perspective is from your view, from your situation and so on.

But it’s like we’re connected in a pretty close way too, because you’re, training and preparing people to enter the professional world. And so of course you’re connected with how, how that professional world is evolving. But let’s, let’s jump into it now. Um, let’s start off with this, just general question. How do you feel that generative AI is changing technical writing and you can, uh, I don’t know.

Each give us your who wants to go first. Let’s start us off here. How is generative AI changing this profession?

Nupoor (05:41)
I think Jeremy can go first.

Jeremy (05:43)
Sure.

I know we both have talked about this a lot, Nupoor and I have. I would say that generative AI is really changing and augmenting the roles that technical writers play in organizations, or the roles that tech writers have in organizations. And something I’ve been thinking a lot about GAI in the context of tech writing is workflows.

when and how generative AI should be included at different stages of the technical writing process. Well, here’s one example. Let’s say I’m a tech writer who just finished interviewing a developer, a software developer, about a new API endpoint. Tom, this is kind of getting into your realm a bit more than some of my history with the API stuff. But

I have notes that I’ve taken from that interview with the subject matter expert and I’m struggling to organize those notes. So I paste those notes into a tool like Claude Sonnet and I prompt it to help me organize those notes into a working document structure.

At that point, I would need to choose how to use Claude with the document that I actually write, like the document that I actually intend to provide to the end user or to the end customer. Do I use the tool as an idea generation tool? Do I use it as an editing tool? Do I use it? And it’s interesting, I found AI, generative AI to be helpful in certain cases for stress testing different

documentation scenarios. So I could also choose to kind of use Claude as a stress tester for worst case scenarios in real world use. And as an organization, okay, I would also want to try to develop standards and guidance around those use cases so that they could become a

you know, meaningful part of a technical writer’s workflow. So that’s one way I’ve really been thinking about it is in terms of tech writing workflows. And that’s been something fairly recent the last few months that I’ve been really thinking about a lot.
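The note-organizing step Jeremy describes (pasting SME interview notes into a model like Claude and prompting it to propose a working document structure) can be sketched in a few lines of Python with the Anthropic SDK. Everything here is an illustrative assumption rather than anything specified in the episode: the prompt wording, the function names, and the model id are all hypothetical, and you would substitute your own.

```python
def build_organize_prompt(notes: str) -> str:
    """Wrap raw SME interview notes in an instruction asking the model to
    propose a working document structure, not finished prose. The wording
    here is a hypothetical example, not a recommended house prompt."""
    return (
        "You are helping a technical writer. Below are raw notes from an "
        "interview with a developer about a new API endpoint. Propose a "
        "working document structure (headings plus a one-line summary of "
        "what goes under each). Do not invent details that are not in the "
        "notes.\n\n"
        f"--- NOTES ---\n{notes}"
    )

def organize_notes(notes: str) -> str:
    """Send the prompt to the Anthropic Messages API.
    Requires `pip install anthropic` and an ANTHROPIC_API_KEY env var."""
    import anthropic
    client = anthropic.Anthropic()
    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # assumed model id; use a current one
        max_tokens=1024,
        messages=[{"role": "user", "content": build_organize_prompt(notes)}],
    )
    return message.content[0].text
```

The point of splitting the prompt builder from the API call is the standards-and-guidance piece Jeremy mentions: a team could version-control the prompt template itself, so the "organize my notes" step becomes a reviewable part of the workflow rather than an ad hoc chat.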

Tom (07:58)
Yeah. Yeah. I mean, I can see that workflow. for sure. Like there’s so many different ways to leverage AI and different sort of APIs and different scenarios. And, I think that’s a good example. Thanks for, thanks for making it concrete, right? Like you’ve got a new API to document. You’ve met with an engineer, you’ve got all these notes, you’ve got all this info. How do you shape and organize it, make it fit, figure out that

context that you need to really make it seamlessly integrated into the rest of your doc. Nupoor, do you have any thoughts? How is generative AI changing things?

Jeremy (08:28)
Mm-hmm.

Nupoor (08:40)
Yeah, so it’s changing things on different levels and in different spaces. For example, in academia, a lot of graduate students, we’ve started seeing lower enrollments because they think that their roles are going to be replaced anyway. What’s the point of getting a degree in tech writing anymore because generative AI is doing everything. But I think the real crux is that augmentation piece. And for me, I was looking at generative AI based on how it is used and what it is automating.

sort of, but then thanks Tom for mentioning my book and the work that I’ve been doing recently, the research on the historical perspectives. So that has made me think more about what generative AI really is. So it is a tool for automation. And it’s not the first time that we have a tool for automation. We’ve had similar tools in the past, which have helped automating tasks like desktop publishing or testing for standards. Like we have Acrolinx, we have word processors, which we didn’t have before. So these were

automation tools which sort of replace some of the tasks that technical writers did. But specifically with AI, because it can do so much more than just automate a single task or a single process, we are also seeing changes to not just the tasks or the skills of a technical writer, but also to the profession as a whole. And I know you mentioned earlier about all the layoffs that are happening. So it is changing the profession in a lot of ways where we are having to think about where are

human technical writers really important? What are they doing? We are also seeing changes in the community of technical writers and, you know, where who we are collaborating with, who we are working with, what language do we speak now? I mean, when I say language, I don’t mean localization. I mean, you know, how are we communicating with developers? How are we talking to our users? And are we actually talking to algorithms or are we talking to other agents to get our information? So I think I think that community of human

that we envisioned before is also changing. So I think there are changes at the task and the role level, but there’s also changes to the profession and to the communities that we’ve been part of. I can talk more about agent-based AI and how that affects specific individual tasks, but I think that’s too nitty-gritty so we can get to it once we get to it.

Tom (10:59)
That’s, yeah, this is great. I just wanted to introduce kind of the topics and get this ball rolling. You mentioned something that really jumped out at me and that’s lower enrollments. Students are kind of getting fearful that the profession might not be around or that it might be replaced and so on. What kind of pressure does that

put on you? Like are you seeing your enrollments in your classes cut in half or a third? Like what’s happening there?

Nupoor (11:35)
So there is a lot of reasons for that, not just AI, but AI is definitely contributing to it. So one of the things that you mentioned earlier was a lot of tech writers, you know, at Snowflake and these other places losing their jobs, and we’re not seeing that in the academic market yet where academics are losing their job, but that’s not really true. It’s just that folks in, you know, let’s say the tech comm program are not losing jobs. It’s folks in the, let’s say, religious studies or, you know, some of these other areas which are being cut

in order to hire more in engineering spaces or in more tech related areas. So we are seeing those kinds of layoffs. Similarly, in writing fields, students are really worried that, first of all, that AI could do all this writing. So what does it mean? Because until they come to the program, they really don’t know what tech writing really is. As we all know, most of us have accidentally come into, you know, stumbled across technical writing.

We never, like I at least never, and I know a lot of folks like me who never really planned from high school or you know like it was not their major when they did undergrad like they wanted to be a technical writer. They either stumbled across it or just eventually became that when they joined the company and their role sort of got converted to a technical writer’s role. So until we introduce them to what the real task is they don’t know and then they think that it’s just writing and then Generative AI is doing the writing.

Tom (13:04)
Yeah.

Nupoor (13:04)
The other reason is that initially these entry level positions, there’s a lot of those. And in order to get a graduate degree, if you already had a job, then we have a lot of part time programs in the country for technical writing majors or graduate programs. And so a lot of professionals used it as a way to transition into these careers from another space. So when they did it part time, they

paid full tuition, attended evening classes. So a lot of programs were structured to accommodate such working professionals. The fees for graduate school is still high, whether it’s in state or out of state, it’s still pretty high. So if you’re not getting the return on investment because now you’re worried whether you’ll get a job, so then they’re again thinking of investing that money, which they could then, you know,

give elsewhere, like do some courses on LinkedIn or Coursera or these other platforms. And they’re trying to debate whether that’s the same thing.

Tom (14:07)
Yeah.

Nupoor (14:08)
Our programs, the number of students has gone down and I would say in some programs it is lower than half. It’s, you know, we’re just getting a few students sometimes or sometimes some of the programs are almost also shutting down. But there are other programs which are doing really well that I know of, especially the PhD programs are doing really well. We’re getting a lot of interest because I think folks who are interested in grad school just to get a job

are the ones who are sort of dipping out; they’re reconsidering, thinking about where things are going and looking at the future, but are skeptical about it. But then there are others who actually know about the human role and they’re now interested in more research-related learning. And we’re seeing them enroll in the PhD programs instead, so that they can then slide into tech comm either through alt-ac careers, which is what we call roles alternate to academia, or look for

industry positions directly where they can augment themselves with AI and technologies as such.

Tom (15:12)
Well, I can, I can see how like, if, if your enrollment of students, is, is a lot less that if I were a teacher or professor, I would feel a little bit, fearful, right? It’s like, gosh, if I don’t have students, I don’t have a job, right? But, Hey, you, you said something I wanted to jump into and, and this will also bring Jeremy’s perspective here. You said that a lot of people don’t really understand.

what tech writing is. They assume it’s just people writing all day. And since machines can now do that writing, then why do we need tech writers? Jeremy, in your recent blog post, you were talking about how a lot of people don’t really understand the profession. They don’t understand what a tech writer does. So it seems very clear that like, hey, we don’t need a writer anymore. We’ve got AI to do the writing. Do you want to talk more about that misconception and how that sort of threatens the stability of

things or I don’t know what you have to kind of say about it.

Jeremy (16:15)
Yeah, I do want to comment on your previous question about enrollments. I think Nupoor made a really good point about sort of the, and I know we haven’t delved into this completely, sort of the difference between…

Tom (16:22)
Sure. Yeah.

Jeremy (16:32)
what many of us call academic technical communication and then what others, you know, many of us call industry technical communication or practitioner technical communication and kind of the connections between the two and how we can make those connections better. Our students who are going into our programs in technical and professional writing and

related degree programs at the undergraduate and graduate levels. I’m teaching an undergraduate course in technical communication right now, and it’s basically a full course. I think that what, and part of the reason I wrote the most recent blog on your site, Tom, is that I wanted, it helped me to reflect on what’s happening in the field.

But it also kind of gave me, and I hope that it gave others, it wasn’t just about me, it hopefully gave others a reference point about where we are in the field and what has happened in the history of the field. And this is something Nupoor knows a lot about. In fact, her book, if I can say this, is going to speak to some of these historical movements or these historical things that have taken place in the field.

to your point about people misunderstanding or kind of having a misconception, that’s a challenge. So one challenge we’ve had in the field, and I said this in my blog post, Tom, is the technology keeps changing. And every time there’s a technological shift or a change in tools, and I know the AI moment is really big right now. It’s revolutionary in many ways, not just in tech com, but in…

other fields as well. But I think one of the things that’s important to keep in mind is that this is a field that has adapted reasonably well over time to a lot of different changes that have happened in the 20, I don’t know, 25 years or so that I’ve been in the field, whether as a practitioner or as an academic. As far as misconceptions about the field,

I would say probably two things really come to mind. One is that as technical writers in industry or in practitioner roles, I found it in my career very helpful to help colleagues understand more about what I brought to the table, not just what was in my job description, but skills that I had or potential that I had in that role that was not being tapped.

And I think that a lot of that really, really went back to, again, some of the misconceptions. A lot of people who are the supervisors of technical writers and who work with technical writers don’t have any formal training in technical writing themselves. So if you were to ask me, what does a lawyer do? If you were to ask me, what does an accountant do? I’m probably going to be able to tell you, OK, an accountant does something

with money. An accountant does something with taxes. In some of my own research, particularly when I was in graduate school, one of the questions I asked the people I interviewed as a part of that study, how would you define technical writing or how would you define what a technical writer does? And over I think it was 31 different responses, I got a really, really wide range of responses to that question, which I actually found surprising. That is not something that I thought

you know, of a priori, like before I did the study. So I think we have a problem of kind of marketing or advertising or conceptualizing what our field is, specifically with people who are outside our field. And then the second thing I would say in response to that question quickly is we have an opportunity, when I say we, I mean those of us in teaching and research spaces in academia,

Nupoor (20:27)
you

Tom (20:39)
Yeah.

Jeremy (20:52)
to help frame that concept of technical writing, both for people in the field, like people like, you know, Nupoor and I were just talking about people in our undergraduate and our master’s programs who want to go into technical writing as a field. And also people who are not planning, students, I should say, who are not necessarily planning to become technical writers. So the engineers, the programmers, the lawyers, the accountants of the world who

will hopefully go into their own professions with a better working knowledge of what we actually bring to the table.

Tom (21:27)
I sit in my current job, I sit with a bunch of product managers and I’m sort of realizing that I don’t even really understand what product managers do. Like I realize now that they have to write these two pagers and they have to figure out like they have to mediate between partners and their requests versus what’s best for the program and then try to come up with like resolutions. I didn’t know all this. And I’m sure.

Jeremy (21:39)
Mm-mm.

Tom (21:56)
They don’t know exactly what I do either. Like they probably are like, what is Tom doing all day? He’s not working on my doc requests. What is he doing? But yeah, I mean, this is symptomatic of so many things. We don’t really understand what other professions totally do. And so if you’re high up and you’re like, well, we don’t need a writer. We’ve got great writing tools. Well, writing is only like 20% of the job. So what are you doing with that other 80%?

Jeremy (22:21)
Yeah, that’s exactly it.

Tom (22:26)
Yeah. well, okay. So now I’m really intrigued by this historical argument. Okay. Because this is, this ties in. sorry. You’re going to add something, Jeremy. Go ahead. go ahead. I didn’t see the hand raise. I actually don’t see that icon. So just do it physically next time.

Jeremy (22:36)
I think Nupoor was going to something.

Nupoor (22:36)
I was raising my hand.

Okay.

So, no, I just had something to add about the misconception. So I think in our field, we have done a terrible job at documenting what technical writing is. I don’t think we really have a good definition anywhere, if somebody Googles it today and Gemini pulls something or if ChatGPT pulls something, I don’t think it is the real view of the field.

it’s not going to capture everything. And it’s not something that we can really define well because I think the definition that we have today just comes from the history of technical communication as a profession. It came from the World Wars and where writers were first used to document the tools and the different technology that was used for the World War.

And those writers came to be called technical writers. They were civilians just doing more technical tasks. And I think that is what forms the definition today. But Tom, I think you brought up a very good point about what we are missing out on: the role of the technical writer with a perspective of where they fit in an organization. So that organizational placement of that job, which again is something our students just

will not understand from an external point, because if they Google or if they ask ChatGPT, they’re not going to get that whole scenario. So we’re always working with engineers. Engineers will build the product, and then we document the product, and that documentation will be used by the users. So there’s this whole circle, but there’s so much that goes on in each of the steps in the workflows, and it’s really too complex for a tool to explain.

And so I think that is what the misconception is right now is that technical writers only write user guides. I think that’s how the field is looked at and that therefore they think that AI could do that job.

Tom (24:44)
Yeah. Let’s, let’s, okay. Thank you for your answers on that. And, let’s move into this historical part because I, I’m really curious about this. I know this is the book project you’re working on and it does seem very easy to be caught in the moment and think, my gosh, this is a moment like none other and everything is changing. There’s no parallels to the past.

But I've been in this field at least 20 years and have seen the evolution of quite a few things. Not all of it, thankfully; I'm not that old. But you mentioned the shift to desktop publishing, when people could suddenly shape their own outputs and formats and basically become their own book publishers. We saw the rise of the internet. We saw social media. We saw

the wiki craze, or whatever you want to call it, where the idea was: now we don't need tech writers, because everybody writes their own little piece and it's crowdsourced and you're good. And then Docs as Code came along with a similar sort of model: people are just committing docs in the same repository, it's this distributed model. And each time, the profession has adapted and survived. But certainly, as a counterargument,

Jeremy (25:53)
Yeah.

Tom (26:10)
this moment, where you have AI doing so much of the writing, the logic seems quite a bit different. Comparing the AI changes to wikis would be a difficult sell; things really do seem different. But what is your sense? Do you think that in three years' time we'll look back and think, gosh, why did we get

so worked up over this? It was just another phase of things. I don’t know.

Nupoor (26:43)
I absolutely don't think that. I don't think we're doing enough to prepare ourselves for the wave that's coming, because the previous technologies you mentioned, which changed the profession, mostly did so by replacing some of the repetitive tasks that folks did. So maybe the roles changed, but the roles still existed.

Here we are seeing a 180-degree shift, not just in the roles and the tasks that we do, but also in the way organizations are structured. I know you mentioned the PM role at your organization and how that works. But similarly, there was the NVIDIA conference yesterday, and something Jensen Huang mentioned, which I've also been thinking about a lot recently, is just…

Jeremy (27:24)
Mm-hmm, yeah.

Nupoor (27:39)
I think agentic AI is the biggest innovation in a long time. When we say AI, we're just thinking of generative AI in the writing space, but agentic AI is what will change the way a lot of things are done. So the previous structure of an organization looked like, again, engineers, technical writers, and then the users. But what's happening right now is that instead of

an organization structured entirely around human employees, we are moving toward agents. And what will agents do? Again, Jensen is saying he wants every human engineer, or writer, or whoever at an organization, to have 250,000 tokens to use,

which means engineers will have agents, like employees or assistants working for them. Similarly, writers will have those, and other people in the organization will have those. So we're looking at a future where these agents will do the talking to each other. I think the workflow and the organizational structure will almost be reversed, because technical writers, or maybe marketing folks, are the ones who are most

closely affiliated with customers or end users. So I think we'll see those people at the very top, because ultimately we're solving user problems; whether it's a product or information, we're solving users' problems. And so we're seeing a reality where the writers and marketing and all these other folks who interface with customers bring in queries and questions.

That information then gets translated to agents who are documenting it, with writers overseeing that documentation of questions, queries, and information, which is fed to other agents working under the engineers. Because if those agents do not consume accurate information, then what the engineers create is not going to be good enough and is not really going to solve the customers' problems. And once that happens, it goes into production, and then that

product, whether it's information, software, or hardware, is then going to be tested and so on, which again will be handled by more agents. So you're definitely looking at a crunch of human resources, but more agents, so there are more entities working on it, just not humans, and humans are overseeing all of that work.

Tom (30:15)
Jeremy, what are your thoughts on that?

Jeremy (30:17)
I really appreciate Nupoor's take on the history of the field. Interestingly, that's something we talk a lot about. In my graduate program, before you write your dissertation and ultimately defend your PhD,

most programs have you take a series of written exams called qualifying exams. They're a pretty big deal: passing them is what qualifies you to write your dissertation and move forward. And one of the specific questions on one of my three exams was

basically to summarize the history of the field, and part of that summarization was: how have academic scholars written about technical communication in terms of a definition? As it turns out, Tom, and I think you made a point about the definition, in academia we've been working on defining the field. We've had numerous definitions of the field over the course of the last,

goodness, probably dating back to the mid-'60s into the '70s. And then you started seeing even more definitions in the 1980s, '90s, and into the present. So I'm very interested in that. And I think, Tom, the other part of your question was about the future. We have this history that shows a lot of adaptation. You brought up

DTP, the desktop publishing example. You brought up Docs as Code. You brought up certain aspects of Web 2.0. One thing I was thinking about in terms of preparing for the future: I try to do a couple of things, and I'll speak to this from a teacher's point of view, but it also applies to practitioners, because nearly all of my students are going to be future practitioners.

I was talking about this idea of workflows before; I'll make this relatively succinct. I'm teaching an undergraduate course now where I've asked students to create a data visualization. They have the option, not a requirement, of using a generative AI tool like Microsoft Copilot to assist with generating that visual,

and as part of that process they have to specify, and this is really important, where and how that tool was used in their workflow. The idea is to have students reflect upon, and be intentional about, how they're using a specific tool in that process, which is something they will absolutely need to do in their careers. I would also strongly recommend that tech writers

in different phases of their careers, and this is true for students and for seasoned technical writers alike, learn more about the underlying technology. By that I'm not saying we have to learn every single piece of terminology, but I think it really is helpful to have a working knowledge of how

models actually work under the hood, so to speak. I think it's really helpful to experiment with smaller local models like Llama, Qwen, and Mistral, models you can run yourself that aren't ChatGPT or the cloud frontier models everyone's familiar with. I really think that helps people, technical writers specifically, get a much better sense of what generative AI looks like under the hood

Tom (33:59)
Mm-hmm.

Jeremy (34:25)
and why it behaves the way it does. And that kind of goes back to Nupoor; you were asking about the next three years and what we may say three years from now. We should probably do this in three years: have this retrospective conversation with Tom and just see what the field looks like. My feeling, and I think I got at this in the guest column on your site, Tom, is that the field will continue.

Tom (34:28)
Yeah.

Jeremy (34:55)
The impact of these emerging technologies, particularly, in this context, generative AI, is going to be felt. And the role of the technical writer, and I think this is well represented in a lot of the academic research, is changing. It's going to continue to change and evolve. And part of our job as teachers,

and I think also as researchers, is to help identify what those changes are and how students and current practitioners can adapt to them and make generative AI a reasonable and meaningful part of their workflow without removing

ethics and without removing the human element, the rhetorical element, from that process. That's a balancing act. I'm not saying it's an easy thing to do, but I think it's something we have to do.

Tom (36:02)
Trying to figure out how the profession is changing and how it will look three or four years from now is tough. Just for fun, I was going back to this podcast I did with Nupoor back in 2020, titled "Users as Producers of Knowledge: Conversation with Nupoor Ranade," about how tech writer roles are changing. We were talking about the same thing:

looking at how Docs as Code workflows were changing roles, with users producing content rather than being just passive consumers. We've been having these conversations for a while, and things keep changing, and it's like, well,

Nupoor (36:28)
Great.

Tom (36:47)
It's hard. Having the historical context is great, because we've talked a lot about how things are changing, and there's always this scare that tech writers are going away. Now it seems scarier than ever. And you've got these different visions of tech writers as information steerers, managers, guiding agents, and so on. When I look at how things are currently playing out,

I really see a heavy integration between human and AI. It's conversational, iterative, back and forth. I don't really see fully automated workflows taking off. I don't think you could find a single tech writer at any big tech company who has seamlessly automated a complex workflow and just stepped out of the picture.

It's sort of the same fate as driverless cars. They may work for smooth stretches, roads that have been mapped in detail, heavily QA'd, tested thousands of times, so now the car can do these predictable routes in the city where there's no snow, no adverse conditions, no construction. That's not the real world. At least 70% of what I do is one-off docs

tasks that can't be scripted into an automated workflow. They're unique things: hey, there was a bug here; we had a change to this process; we're tweaking this, so now you've got to alter something. It's not a repeatable thing. There are repeatable things, like release notes, and I have been scripting that, but every time there's a monkey wrench in it. Every time there's something like: the build didn't

Jeremy (38:28)
Yeah.

Tom (38:36)
the build doesn't work, so things don't generate. Or we're going to cherry-pick something into this latest release. Or the engineers mistagged that, and it's not actually a release-stage element. Or, last minute, we're going to add in this feature. And so any kind of pipeline that I've constructed breaks. So you do need a human constantly in there. This idea that

agents are going to manage agents and human writers are just no longer needed seems like a fantasy. That's too futuristic. It's the same fantasy as, yeah, cars, you don't need drivers anymore, you just get in and tell it wherever you want to go. Well, that fantasy has always been two years off for the last decade. Anyway, my personal take on this is that the cyborg model of human-machine interactivity

Jeremy (39:18)
Mm.

Tom (39:29)
is what's currently the reality. Will it get more integrated? Probably. But the idea that you just remove the human is a hard sell for me. It's just not how things are currently working out. And I would love to see that role where, Nupoor, you mentioned this sort of tech writer at the top who's managing the flow of information, kind of steering the direction of the agents. Did you have a title for that? What would you call that role? Certainly not technical writer, right?

Jeremy (39:41)
Mm-hmm. Yeah.

Nupoor (39:56)
Totally.

Tom (40:01)
Okay, okay.

Nupoor (40:03)
Something like that. I definitely don't think the human role could be replaced, because we're seeing a lot of use cases, not just repetitive tasks. It's the way technical writers have been gathering information before they document it; that process itself, I don't think, can be automated at all. It requires a lot of personal connections and talking and networking, not just with engineers. And it's not

a boilerplate model of how you do it; the places where we gather information are constantly adapting and changing. So I do think AI could help increase some of the efficiency, and that's what it's doing for engineers at the moment: yes, the time they took to do some tasks is being cut in half, or sometimes to a quarter. But unless you have a good engineer who is looking at the code and understands

what is going on, an automated code-generation tool is not really helpful, because it's about managing an infrastructure rather than managing that one task. I think that's what we are talking about.

Tom (41:17)
Yeah, yeah. On the personal element: I actually find that I'm able to pull all the info I need, at least for release notes, from file diffs between releases. I've sort of gravitated to the idea that I don't even really want engineers to try to tell me what's in the release. I'm going to look at the diff between releases and say, okay, so you added this method, you changed this. It's a lot more factual.

The diff doesn't care; it just shows what changed. It sort of fits my product. Maybe if you have a different kind of product it wouldn't work, but you can get a lot of information just by looking at how the code has changed. But I want to take this in a different direction now. Okay. So Nupoor, you're writing a book, and, oops, got a little cat fight in the background here. You're writing a book.

And this brings up the question: how do academics provide value to practitioners about directions? I know there's this divide between academics and practitioners, and now we don't have the STC Intercom or other journals. And with this book, I'm thinking, first of all, writing a book at the level that academic standards

require may be a multi-year effort with a lot of review; review cycles can be lengthy, plus research and all of this. By the time you're ready to publish, won't the landscape have moved so much that it just becomes outdated? If a person were to write about model behavior, say the middle-skipping problem and other pitfalls, that might be true today. And then,

when the next model comes out, it's like, yeah, we're not doing that whole little temperature knob anymore; we're not doing this anymore. How do you wrangle that moving landscape against academic timelines and publishing and the rigorous slowness of it all?

Nupoor (43:32)
We try to work with good publishers and good editors and hope they will help us make more relevant contributions. That's the short answer. The long answer is, I started with the idea that I wanted to write about how AI is affecting tech comm, because I think we are still only talking and thinking and wondering about generative AI and how it's helping with tech writing tasks. We are not yet talking about

how we can write about AI: things like opening black boxes, making AI more transparent. That's something tech writers could do. So there are other things about AI and around AI that we are not yet thinking or talking about. And so I had this idea: writing with AI; writing for AI, which meant writing for algorithms; and then writing about AI, which is opening these black boxes.

And I spoke to the SUNY editor, and he said the same thing you did: the book has to be on the shelf for at least 10 years and still be relevant, so this is not a helpful idea. It could be the last chapter, perhaps, but not the whole book. Which is why I shifted a little toward this idea of documenting the evolution of the field. Again, we don't have a book in the field right now that has covered the history and documented each of the decades. So that's the kind of research I'm doing.

And through that research, I'm trying to find patterns: what is common, what has shifted, what has not. We talked a little earlier about technologies, desktop publishing, social media, the internet, and so on. But there were also a lot of geopolitical shifts. Globalization changed a lot of things: jobs being outsourced to countries like India, Malaysia, and Singapore, where labor was much cheaper, and what that did to the US job market

and to labor. And similarly with globalization: which languages were now being used in product documentation, and how that changed localization and translation processes. So there's technology, but there are also these other things going on in the world that drive these changes. Similarly, with AI we have different concerns, like ethics.

I think the reason our academic research is so slow and takes so long is that we want it to last longer. We want it to be grounded in some theory that will stay relevant for a longer period of time. So we're trying to acquire, establish, and publish knowledge that will stay relevant. And the divide exists because practitioners don't want that. Practitioners want an application-specific problem to be solved, and academics want to create a solution that can be

generically applied to a lot of problems in the same area. And so that disconnect exists. I think it's about finding a common theme, and that's something I'm working on. My theme is the idea of fears: instead of just imagining things that are happening, I want to support it with data and research, with what has changed in the past and what that means for the future.

How can we still hold on to the human role in technical writing? Something we teach in our classes is not just what technical writers do at their jobs and what skills you're supposed to have, but also how you justify your role as a human writer. What are you doing that is different? For example, I had an assignment in an editing class where I got students to

use AI to edit a piece of content, a paragraph or so. And because each organization has its own standards and style guides, it differs, but I usually like to follow the Microsoft or Google style guide, and that's what I talk about in class. The Microsoft style guide, the MSTP, says

that a 10-year-old should be able to understand the content of the document. That's the standard they fall back on. I translated that into a readability score, using the Flesch-Kincaid grade level and similar methods that word-processor tools like Microsoft Word can calculate. What we did in that class was generate a piece of content and then use AI to edit that content to make it readable for a 10-year-old,

and the AI sort of failed, because when we calculated the Flesch-Kincaid score on it again, it did not really meet the grade-level standard. And so I said: see, here's where you come in, here's where humans are actually important, to change the content in ways you understand better. You understand what a 10-year-old could be; you could test it constantly and see what comes up. And it requires knowing your humans, knowing what

10-year-olds, or seventh graders, are like and the knowledge they're consuming, meeting with them and having them read and test and all of those things. Again, it's a moving target, but at least that's something humans are capable of doing in better and more meaningful ways than the technology. So pointing that out, and helping students and writers justify their roles in an

AI-augmented or technology-augmented space, is something we also do. And that will remain common: there are things that will stay the same regardless of what technological breakthroughs happen in the future. So that's what I'm focusing on now.
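The readability check Nupoor describes can be approximated in a few lines of Python. This is a rough sketch assuming a naive vowel-group syllable counter; word processors use more careful heuristics, so scores will differ slightly, but the Flesch-Kincaid grade-level formula itself is standard.

```python
import re

def count_syllables(word: str) -> int:
    """Approximate syllables as runs of consecutive vowels (naive heuristic)."""
    w = word.lower()
    count = len(re.findall(r"[aeiouy]+", w))
    # A trailing silent 'e' usually doesn't add a syllable.
    if w.endswith("e") and not w.endswith(("le", "ee")) and count > 1:
        count -= 1
    return max(count, 1)

def flesch_kincaid_grade(text: str) -> float:
    """Grade level = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / len(sentences) + 11.8 * syllables / len(words) - 15.59
```

A score at or below roughly 5 corresponds to what a US fifth grader, about a 10-year-old, could be expected to read, which is the bar the students' AI-edited drafts were checked against.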

Tom (49:28)
There's a recent thing, the Agent Skills spec, that has come out and been a big buzz among practitioners lately. Engineers got really enamored of the idea that you should convert your documentation into this agent skills format, agentskills.io.

It has a specific format geared toward machine consumption, like you've made your tasks capable of being run by machines. I spent last week converting some custom instructions into this format, testing it out, and trying to get familiar with it. And yeah, it's interesting. I don't know, I'm just trying to gauge where the demand for the human element is.
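For context on the format Tom mentions: the Agent Skills spec organizes a skill as a folder whose SKILL.md file carries YAML frontmatter plus markdown instructions for the agent. The sketch below is from memory and the field values are illustrative, so check agentskills.io for the actual spec.

```markdown
---
name: release-notes-helper
description: Drafts release notes from the diff between two release tags. Use when asked to summarize changes for a release.
---

# Release notes helper

1. Run `git diff <previous-tag>..<new-tag>` to collect the changes.
2. Group changes by component; note added, changed, and removed APIs.
3. Draft release notes in the team's house style and flag anything ambiguous for human review.
```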

Where does the human fit in? Is this what we're doing now, making skills files for our docs so that machines can read them? That question is constantly on my mind: where's my value? What am I doing? I don't want to be doing something that can be replaced tomorrow because it's so meaningless in terms of human ingenuity or value. Jeremy, do you have any thoughts on this divide between academics and practitioners? I mean, you're kind of,

I sent you these articles before this podcast about how you're really trapped in a world where, to excel as an academic professor, you need to publish in rigorous journals, and they need to be peer-reviewed and so on. You're trapped in this world where all this content is paywalled off and moves at a snail's pace, and practitioners will

look at an article and be like, well, that model, that's so 2024. Does it even apply? We've moved beyond that; we're no longer trapped by that problem; it's irrelevant. So how are you wrangling that?

Jeremy (51:32)
You mean the difference between the two worlds and what's expected in each, or how to unify the two? This is something I was thinking about before we had this conversation; actually, Nupoor and I have talked about it many times in the past. And to me, this kind of divide, this gap that sometimes exists,

Tom (51:37)
Sure. Yeah, yeah.

Jeremy (52:01)
it's a gap between theory and research on one hand and the work practitioners are doing on the other, and really how to create meaningful connections between those spaces. And I want to be clear, when I say that, that there absolutely are connections between the two, important ones.

And I think part of the challenge is trying to optimize how the two work together. That has been a work in progress for, I think it's fair to say, many decades. One thing you mentioned, Tom, is the need to publish and

the kind of ecosystem of academia. There's a real difference in context there, and what you said was very accurate, Tom. Academia is a pretty unique ecosystem when you compare it with business and industry, and even other spaces like nonprofits.

A lot of faculty jobs do require publication in journals; this is not usually a requirement in most industry jobs. Another good example: tenure has been an important part of many faculty positions in academia, and in industry that system simply doesn't exist. So you end up with situations like practitioners solving problems that academics haven't studied yet, and academics publishing findings that practitioners

don't see, or if they do see them, only many months or years down the road. That's a challenge we wrangle with. I think there are ways of addressing it, and I think Nupoor has some thoughts about that; I wasn't trying to open up another line of discussion necessarily. But I think there are a lot of people like you, Tom, who are doing a good job trying to bridge that gap.

Nupoor (53:57)
I think what we're trying to do is build partnerships with industry practitioners so that we stay current with the problems that are going on, and publish in journals that are faster moving and can do a lot more. Intercom was a great space to publish for academics and practitioners alike; we could collaborate with practitioners and publish there, because STC's work was recognized in both spaces. Similarly with the journal Technical Communication, which was very popular across both

practitioners and academics; it was pretty fast moving, with a quarterly release, and it counted as a high-impact journal in our field. It is getting challenging with STC shutting down, but there are now venues and conferences coming up. CIDM, which JoAnn Hackos started a long time ago, detected this problem in the field early, because it has existed for a while and the divide is only growing. The CIDM conference

was started to address this issue. So there are researchers like us working in the space. There's Rebecca Anderson, Saul Carliner, and so many other folks whose names I can't come up with right now, who have done these kinds of partnership projects before. I think the challenge is just keeping in touch with practitioners. Our student body is the best resource

that we have, but most of them start at entry level. So it's talking with them, creating and maintaining alumni networks, staying in touch with our friends from grad school who are in industry, attending conferences, trying to be more present, and bringing those challenges into the classroom, not just to publish or to solve a problem, but to prepare the next generation of technical writers for the things that

are coming up, which is a shorter-term goal for us. We keep revising our syllabus every semester that we teach, just to keep the students in touch with what's going on in the industry. So I think that's the level we're trying to play at. The research projects, although they are longer term, we deal with differently; they're a different beast than teaching. And I think it's that mix.

Tom (56:28)
Now, there's another element to these academic challenges that I want you to comment on as well. Most corporations are very enthusiastic about employees embracing AI. The more you use AI, the better: oh, you're pushing out more content faster, great. They mandate that people use AI; they want trainings and demos and lots of rah-rah.

But in the university context, Jeremy, you shared the CCCC resolution, from the 4Cs, about how students should have the right to not have their voice distorted by AI, by tools that consume their voice and insert themselves in the middle. There's a lot more hostility toward AI there; people have more objections and are more sensitive about so many things.

Jeremy (57:07)
4Cs, yeah.

Nupoor (57:07)
forcing.

Tom (57:28)
How do you help students prepare for an AI-centric workplace in a context where people are a lot less receptive to AI usage? At my kids' high school, for example, AI is sort of forbidden; it's considered cheating, and it's rarely used outside the context of cheating. I'm sure you can't write a journal article with AI and submit it, right?

So do you have a bunch of cognitive dissonance between "I'm preparing students to use AI" and "I can't use AI" or "everybody hates AI"? How do you deal with this?

Jeremy (58:12)
I appreciate very much what you're saying, Tom. Nupoor is probably going to have a differently nuanced take on how I frame academic conversations, but that's part of the point I'm about to make.

In the academic spaces I've been a part of, or the ones I've seen in the larger academic conversation, if you will, there are a large number of conversations taking place about

when we should adopt AI, how we should use it in our classes, how we should perhaps not use it or limit its use, and how we should give students the ability, the option, or even the requirement to use it. So there are a lot of different points of view; that's what I'm getting at. What I would say in my own

teaching and my own work with students: I mentioned earlier the data visualization assignment that students in my current undergrad technical writing course are working on. They have the option of using a generative AI tool like Microsoft Copilot to help them generate that data visualization, that graphic. And the key word, again, is option. I'm not

requiring them to do that, because I want them to have that agency, that opt-in or opt-out choice, if you will. And I do that for two reasons. One is that I do want them to have that agency, because, as it turns out, and this is probably not surprising, Tom, you mentioned your kids' high school,

Jeremy (1:00:17)
your kids probably have opinions on AI that Nupoor and I may not know about. As it turns out, maybe not surprisingly, our students have opinions on AI too, and some of them have very strong feelings, for or against and everything in between. So by trying to give them that agency, I'm

respecting their ability to make that choice. Whether they choose to use it or not in that specific project, I invite them to think of it as part of a workflow that a technical writer could use in industry. And I ask them to specifically mention, as part of their methodology when they turn in that assignment,

Nupoor (1:00:53)
Yeah.

Jeremy (1:01:16)
their intent was if, for example, they chose to use it, and how specifically they used it. So I hope that starts to answer your question, Tom. There's definitely a continuum of thought here, and I think it's a valuable conversation.

Tom (1:01:31)
Yeah.

Nupoor (1:01:36)
It is sort of a balancing act. We've got to have some assignments that use AI so we can prepare students for the workplace.

There should also be some assignments that absolutely do not use AI, so that we get to see their own take; the assignment just has to be designed in a way that makes it impossible for an AI to do it for the students, so we get a good sense of what their thoughts are and where their creativity is flowing. And then we also need to have some assignments, or some discussions in class, about the critical takes on AI, so that folks who absolutely want to refuse, who are not on board with sharing their ideas or using AI

at all in classrooms, get to chat with us and raise their concerns and opinions around AI, like the issue of agency Jeremy mentioned. They can talk about it, and we can have some good discussions that they can take with them to the workplace. Once they have a seat at the table, they can bring those up and add some valuable insights to the conversation, not just say, I'm refusing this, I don't like this, and this is my

Jeremy (1:02:32)
Absolutely.

Yeah.

Nupoor (1:02:45)
concern about surveillance and privacy and who is being impacted. Instead of using those big buzzwords, we want them thinking about AI in more critical ways: how it's not addressing certain populations, here's what it's doing, this is why I don't agree, and we should still augment it with some human skills, or with some humans. That's what we want them to say, rather than, I don't want to use it at all.

So that’s what we’re trying to do and it is Jeremy Cohen.

Jeremy (1:03:20)
I didn’t mean to interrupt. I wanted to, Tom and Nupoor both, I wanted to add something, and Nupoor is aware of this, and we’ve actually talked with this colleague I have here at James Madison, and I’ll mention his class because I think it maps really neatly, really nicely on what a lot of what Nupoor was talking about and a bit about what I was just talking about.

I work with another faculty member here at JMU, Rodolfo Rudy Barrett, who teaches a special topics course for undergrads. It's literally called Writing With, Without, For, and Against AI. I'll be very brief, but I'll say two things. One,

I had the opportunity to observe that course about three weeks ago. It was an extremely well-designed course, and I was amazed at how strongly the students felt about the topics, culturally,

ethically, rhetorically, all of these different aspects of AI and how it affects their lives, industry, pop culture, and a variety of other things. And the second thing, and I've told colleagues this too, and Nupoor knows this, is that I keep thinking about that kind of...

Nupoor used the term balance, or balancing act, I think, in one of her previous statements. Correct me if I'm wrong, Nupoor, but I think part of that balance is looking at generative AI on an ethical level, and on

what many academics would call an epistemic level: how we think about knowledge, what knowledge actually is, how we interpret it. And then there's the functional aspect, which I think Nupoor spoke to already. How does generative AI work? What are models? What are weights?

What are embeddings? We don't have to go into that level of detail, but there's that functional aspect: how do I use AI in my workflows, what does it do under the hood, any of that. But before you get to the functional, and I don't want to sound pedantic here, I think it's important to frame it

in this kind of ethical, humanistic way too. What are our end goals here, and what are the limits of this technology, ethically and, you know, logistically?

Tom (1:06:23)
Well, I can appreciate how difficult it must be to balance all these things, because in my head I'm thinking so many different thoughts. If I were a teacher and a student said, nah, I'm not going to use AI, this is terrible, I'd just be like, you know, good luck. I don't really know.

One of my kids was pretty anti-AI, and they're starting to look for internships. They're in college, a math major. They came back and were talking to my wife, and they said, Mom, all the internships are related to AI and math. And they came to the conclusion that even if they hate AI, it's sort of a reality

of many jobs and expectations. Students may have the luxury to ethically object to using AI, but that doesn't necessarily mean they can just get a job. I don't even know if there are jobs, which is another problem. With or without AI, the jobs are so scarce, it seems like a nightmare. But anyway, so many interesting avenues to pursue there.

Again, it must be hard. Last question, and then let's wrap this up, because I know we've been chatting for a long time. I want to end on how practitioners can work with academics in a productive way, because we practitioners need a lot of information. We're in the dark about a million things right now. Just to give an example, a few months back, some people came out and said, hey,

all your docs should now have this llms.txt kind of file, where you have little high-level summaries of every page in your docs. You feed those high-level summaries to the model, and now people can find so much more. And other people said, well, that doesn't really do anything; the model can ingest all your content just fine without that file, so it's a waste of time. That's one of probably 20 different

debates I've heard about AI. Now I'm exploring the whole skills thing. Does a skill.md file for my docs actually help? Is that actually beneficial? How do you create automated engineering workflows? What are the best practices around this? How do you structure your content in a way that machines can read it? How do I keep my job? How do I improve my visibility? What should I be doing now

so that three years down the line, I'm valuable? Should I be that information strategist? So I have a gazillion questions, and you as academics are stewards of knowledge. You look for rigorous evidence; you actually do studies, so that we're not just blown about by the latest hot take somebody has, right? Everybody's got a hot take, and nobody knows what to believe. How can we get the
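For context on the format Tom mentions: per the llms.txt proposal, it's a plain markdown file served from a site's root that gives models a curated index of the docs, with a title, a short blockquote summary, and sections of annotated links. A minimal sketch, with invented site and page names, might look like:

```markdown
# Acme Widgets docs

> Reference and how-to documentation for the Acme Widgets REST API.

## Docs

- [Getting started](https://example.com/docs/getting-started.md): Install, authenticate, and make a first request
- [API reference](https://example.com/docs/api.md): Endpoints, parameters, and error codes

## Optional

- [Changelog](https://example.com/docs/changelog.md): Release history
```

Whether a file like this actually improves retrieval is exactly the open debate Tom describes.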

Jeremy (1:09:40)
Yeah.

Tom (1:09:43)
good knowledge from you about all these difficult topics we're currently facing, so that we're not just being persuaded by somebody's charisma and rhetoric, but actually making decisions based on real data?

Nupoor (1:10:03)
I can get us started. So I think, like you said, Tom, there is a disconnect, and we would want to hear more from practitioners and solve problems that are real, that could actually be meaningful and help practitioners. The disconnect is because we want solutions that are more generic, that can be applied more broadly and aren't specific to one organization or one problem. I think of DITA whenever I'm

thinking of conceptual-plus-practical, application-specific solutions, because DITA did such a good job of standardizing something that was grounded in theory as well as practical implementation. We just haven't seen something like that since; everything has become more decentralized. Every organization has its own way of doing things, its own standards and practices. And therefore

it's gotten more challenging for practitioners to collaborate with academics to research and solve the problems they might have. The solution here, I think, goes back to what I said earlier: more collaborations, and more meaningful ones, not just with academics from one area. If we were to work with

other folks, like engineers and writers from industry working with academics, or technical communicators working with HCI folks, the human-computer interaction researchers, such groups or cohorts of people who could run projects together could really be meaningful and could provide insights that are,

like you said, research-driven and data-driven, and could create sustainable solutions, not just a hot take or a patch for a current problem.

Tom (1:12:06)
Yeah. Jeremy, any thoughts on this topic of how practitioners can collaborate with academics for real solutions?

Jeremy (1:12:19)
I really appreciate the question, Tom, and Nupoor, I very much appreciate what you're saying about this. I would just add that it is a two-way street. Industry professionals and academics need to have incentives to reach out to one another and interact, or at the very least, they need to be able to see the benefits of doing so.

On the academic side, we need to do our best to encourage research involving workplace professionals, with researchers immersively engaging with workplace settings and studying what's taking place there, whether in person, remote, or hybrid, if you will. There are strong precedents for that historically

and in the present, precedents that, because of time, we don't need to get into right now. Nupoor is a very good example of someone who has done and is doing that kind of work. I would just say two other quick things, and I did already state this in so many words before:

All of us in tech writing, both industry and academia, need to take every opportunity we can to help those who are not technical writers, the engineers and product managers you mentioned earlier, Tom, accountants, customer service reps, really anyone not working in a tech writing role, better understand what we do and what we're capable of.

And the last thing, and I think this is a big one, is still a work in progress for me, but it's for those of us teaching technical communication, which includes a good number, if not most, of us in the academic space.

Alongside some of the well-established skills like editing and visual design, we should do what we can to give students a better sense of how workplaces actually function: the politics, the culture, who holds influence, how decisions get made, including, by the way, decisions about generative AI. That kind of awareness can make a huge difference in how effectively a technical writer who is just getting started in the field navigates a new organization.

And lastly, I'm not saying those dynamics aren't getting stage time in our classes; they absolutely are. I'm just saying it's part of making sure students have a repertoire of skills that will help them navigate those situations, very much including this evolving situation with generative AI that we've been talking about.

Tom (1:15:32)
I would love… sorry, go ahead, Nupoor.

Nupoor (1:15:33)
I was saying I wish there were more venues for both groups to come together, venues recognized in academia as well as in industry, something folks could go to. I am currently involved in the Write the Docs chapter in Pittsburgh; fortunately, we do have one. But I wish there were more venues where we could connect with folks and talk about the day-to-day. That would be a great way to bridge some of that divide. But I know it's challenging,

Jeremy (1:15:40)
Yeah, that’s true.

Nupoor (1:16:05)
and post-pandemic and everything, the world just looks different.

Tom (1:16:09)
Yeah, I can see how it's difficult to get the data. From a practitioner's point of view, I would love to just open up my company and say, hey, we just did this survey of nearly 200 tech writers and asked them 25 questions all about AI, and here are all the answers. We actually did just run that survey, and it's pretty cool, but I can't share it. This is a problem that isn't new, right? Corporations don't really want to share data.

Jeremy (1:16:26)
Yeah.

Tom (1:16:38)
And it's hard. I mean, I did an interview with Rebecca Anderson about a project she's doing, and she wanted me to sign a form, just a standard research form, and I had to get lawyers to approve it and everything. I was like, wow, that's a lot of work. So you can see how this flow of information is difficult, but hopefully we'll figure it out, and hopefully we can have more of these conversations. I definitely want to…

Jeremy (1:16:40)
Yeah, that’s true.

Mm-hmm.

Tom (1:17:08)
just have better communication between practitioners and academics, because we have a lot to gain from each other. Thank you so much for this conversation. You put a lot of thought into this, and I really appreciate your honesty and transparency, and getting a read on all the issues surfacing in your book projects, your other projects,

Jeremy (1:17:14)
Mm-hmm.

Absolutely. Thanks, Tom.

Tom (1:17:36)
your classroom assignments, and so on. It's interesting to hear what you're doing. So thanks again. If readers or listeners want to find out more, I will add links to Nupoor Ranade's and Jeremy Merritt's profiles on LinkedIn, and to the posts, podcasts, and book projects we've been discussing. I appreciate your time.

Nupoor (1:18:07)
Thank you, Tom.

Jeremy (1:18:07)
Thank you.


About Tom Johnson


I'm an API technical writer based in the Seattle area. On this blog, I write about topics related to technical writing and communication — such as software documentation, API documentation, AI, information architecture, content strategy, writing processes, plain language, tech comm careers, and more. Check out my API documentation course if you're looking for more info about documenting APIs. Or see my posts on AI and my AI course section for more on the latest in AI and tech comm.

If you're a technical writer and want to keep on top of the latest trends in tech comm, be sure to subscribe to email updates below. You can also learn more about me or contact me. Finally, note that the opinions I express on my blog are my own, not those of my employer.