Recording of AI Book Club discussion about Kai-Fu Lee's AI Superpowers
- Book club meeting recording
- Audio only
- Questions and prompts for the book club
- Main discussion points
- Transcript
Book club meeting recording
Here’s a recording of the AI Book Club: A Human in the Loop meeting, held August 17, 2025:
Audio only
If you just want the audio, here it is:
Questions and prompts for the book club
Here are the questions and prompts I prepared for our book club discussion. We didn’t discuss all of these points, but they give a sense of my own interests and thoughts on the book. These questions appear within the Notes and discussion doc.
Is the US locked in an escalating AI race with China? Who will “win” and what would it mean to win?
- Kai-Fu Lee asserts that AI dominance is split between the US and China, forming a duopoly. Where does this leave Europe? Is Europe losing because excessive regulation prevents the rise of companies big enough to compete with these mega companies?
- Just as the divide between rich and poor increases, and more wealth becomes concentrated in fewer companies, is the same happening across the world, with greater concentrations of wealth collecting in the US and China while the rest of the world languishes?
- Lee describes a virtuous cycle where more data leads to better algorithms, which leads to more users, which leads to more data, and so on (see the toy simulation after this list). The big companies are so far ahead that there’s no way for startups to really catch up. Are we entering an era where the only good jobs will exist at 10 mega companies across the world?
- How long will tariffs protect local companies? It seems like the only reason US car companies still exist is that the import fees on Chinese EVs are so high, but this strategy can’t last long. Will the auto market be the first to fall to China, providing a signal of what’s to come? If so, soon not just autos but many other products will be Chinese. Is this what it means to win? One country’s products and services saturate another’s?
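To make that virtuous cycle concrete, here’s a minimal toy simulation (my own sketch for this post, not a model from the book) in which product quality grows with accumulated data, users chase quality, and users in turn generate more data. Every number and function here is invented for illustration:

```python
def simulate(initial_data: float, years: int) -> float:
    """Run the data flywheel for a number of rounds; return accumulated data."""
    data = initial_data
    users = 1.0
    for _ in range(years):
        quality = data ** 0.5        # better algorithms from more data (with diminishing returns)
        users *= 1 + 0.1 * quality   # a better product attracts more users
        data += users                # more users generate more data
    return data

incumbent = simulate(initial_data=100.0, years=10)
startup = simulate(initial_data=1.0, years=10)
print(f"incumbent data: {incumbent:,.0f}")
print(f"startup data:   {startup:,.0f}")
print(f"gap: {incumbent / startup:,.1f}x")
```

The exact outputs don’t matter; what matters is that under identical dynamics, the incumbent’s head start in data compounds into a gap that widens every round rather than eroding, which is Lee’s argument for why latecomers struggle to catch up.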
Is this what it looks like for China to be winning?
- China’s biotech is faster and cheaper
- The American car industry can’t go on like this
- How China Went From Clean Energy Copycat to Global Innovator
- There’s a Race to Power the Future. China Is Pulling Away.
- AI experts return from China stunned: The U.S. grid is so weak, the race may already be over
What are the main reasons why Lee thinks China will overtake the US in AI dominance?
- China has an abundance of data, especially from WeChat mobile payments used for everything. Data is what drives advancements in AI.
- China’s gladiator entrepreneurs go heavy, building out every aspect of their businesses from the ground up. In contrast, Silicon Valley prefers a light, hands-off model: a single digital product that other companies implement locally.
- China is all about focusing on what turns a profit, whereas Silicon Valley is more mission-driven.
- China is full of implementers (builders, integrators), and we’re in an age of implementation, not discovery. (Note that Lee is making this argument in 2018! Surely he underestimated the transformative discoveries still to come in the years since his book was published.)
- China has a massive and growing talent pool of skilled AI engineers.
- China benefits from a strong, top-down, and techno-utilitarian government that has made AI supremacy a national priority, mobilizing immense resources and crafting supportive industrial policies.
In a recent Slack discussion, Dan posed a question about how the US-brokered deal allowing NVIDIA to sell chips to China again (reportedly in exchange for a 15% cut of those sales) factors into Kai-Fu Lee’s framework from AI Superpowers. NotebookLM offers this interpretation:
- The deal highlights the distinction between the “Age of Discovery” and the “Age of Implementation” in AI. The US, by withholding its most advanced chips, aims to maintain its lead in the “Discovery” race, which involves pioneering new, sophisticated AI algorithms and foundational breakthroughs.
- Conversely, by allowing some chip sales, the deal permits China to continue competing robustly in the “Implementation” race. This aligns with Lee’s argument that China’s strength lies in integrating existing AI technologies into numerous products and domains, rapidly scaling their application even without the absolute cutting-edge hardware.
- Furthermore, the ongoing uncertainty created by tariffs and restrictions has inadvertently positioned the US as an unreliable supplier. This has, in turn, strongly incentivized China to accelerate its efforts to develop its own domestic chip manufacturing capabilities, driving it towards greater self-sufficiency in this critical technology.
- In summary, from Lee’s perspective, the US move is a nuanced strategic calculation. It attempts to slow China in one area (discovery) while implicitly ceding ground in another (implementation), but it also carries the significant risk of making China a stronger, more independent AI superpower in the long term. It’s not necessarily “handing over the tools to China to win,” but it might be accelerating China’s path to self-sufficiency, which is a significant factor in a long-term AI race.
Why aren’t US-based technical writers being outcompeted by scrappy Chinese tech writers who have the entrepreneurial, gladiator mentality Lee describes? AI would seem to help Chinese writers overcome language barriers.
- Offshoring used to be a significant concern in tech comm maybe 10-15 years ago, but the fears never materialized. Are things different now? I don’t understand why AI tools haven’t led to a resurgence of offshoring.
- Will China dominate in services industries too, or are they mainly competitive with physical products (and soon, robotics)?
Lee argues that humans won’t be able to compete long term in areas of intelligence, logic, coding, etc. Humans excel only at emotional work: care, love, empathy, and connection. He sees our jobs evolving toward this domain, where AI and tech can’t compete. Do you believe this is the job destiny for humans? What if we’re not good at those soft emotional skills?
- How can education systems and cultural norms adapt to prepare future generations for a job market where emotional intelligence and compassion are paramount?
- Will tech writer roles exist in 10 years?
AI Superpowers was published in 2018. How much of Lee’s thoughts and predictions are still on target? Was he right about China? It does seem like his book was prescient. China is frequently in the news as a top concern for the US.
- Was he right about China’s rise? Lee was largely prescient regarding China’s rapid ascent in AI. The country has indeed become a formidable AI power, particularly in the “Age of Implementation” and “Perception AI,” driven by its data advantage, entrepreneurial intensity, and government support.
- The duopoly still holds: The idea of a US-China AI duopoly remains highly relevant. While other nations are investing in AI, none have yet emerged as a true third pole capable of challenging the scale and depth of AI development in these two countries.
- Job displacement is a growing concern: Lee’s warnings about massive job displacement and the need for a redefinition of human purpose are increasingly discussed as AI tools like large language models become more capable and widespread.
- The “Sputnik Moment” paid off for China: The 2016-2017 AlphaGo matches are still widely cited as a pivotal moment that galvanized China’s national AI strategy, proving Lee’s analysis of its impact correct.
- The “Age of Implementation” is here, but discovery continues: While the emphasis on implementation is valid, the pace of AI discovery (e.g., in generative AI, large language models) has continued at a rapid rate since 2018, perhaps exceeding what Lee fully anticipated at the time of writing. This suggests a continuous interplay between discovery and implementation rather than a strict shift.
- His focus on human-centric AI and social solutions remains critical: Lee’s philosophical pivot towards valuing love, compassion, and care work as humanity’s unique strengths in an AI future is highly relevant as societies grapple with the societal implications of advanced AI.
Lee argues that the AI race won’t be a race for techno-economic world superiority, or one where the first to achieve AGI takes all. Instead he sees the greatest challenge as one of massive social displacement, unrest, and dystopian transformation due to AI. He predicts job losses of 40-50%, creating massive disruption and social pushback. How will China and the US handle a society in which half the people are unemployed?
- Even if a government implements universal basic income (UBI), a $20k/year income seems impossibly low. In fact, $50k in Seattle or $100k in San Francisco is considered poor. Won’t this lead to massive social revolt?
- Already, Trump was elected by dissatisfied populations that felt left behind by globalization and technological change. According to Lee, AI-driven job displacement and inequality could escalate this sentiment, threatening widespread social revolt unless proactively addressed. So aren’t our current politics a preview of what’s to come?
Main discussion points
These are the main points we discussed (generated by AI):
- An eye-opener on China’s AI prowess: The book served as an eye-opener, shifting the group’s perspective from a US-centric view to recognizing China’s extreme competitiveness, dominance, and rapid advancement in the global AI landscape.
- The global AI “duopoly”: A recurring theme was the idea that the world is now in an AI duopoly, with the United States and China as the two clear superpowers leading the race, leaving other countries like those in Europe significantly behind.
- Political stakes of the AI race: The discussion explored the significant political and cultural implications of an AI race “winner.” Concerns were raised about what a world dominated by China’s political and social values might mean for democratic institutions globally.
- The “social investment stipend” debate: Kai-Fu Lee’s proposed solution to mass unemployment was met with mixed feelings. While some saw the appeal in incentivizing human-centric and community-focused work, others worried it could be naively idealistic and potentially lead to a new form of government-controlled dystopia.
- Skepticism on mass unemployment: While Lee predicts 40-50% unemployment, the group was skeptical that entire jobs would be eliminated. The consensus was that AI would more likely automate specific tasks within professions, creating a messy, drawn-out period of disruption rather than a sudden wave of mass unemployment.
- Evolving the role of the technical writer: The group envisioned the technical writer’s role shifting from content creation to becoming a “manager of AI.” Future work would involve steering AI agents, reviewing and editing AI-generated content, and identifying knowledge gaps to prevent AI “hallucinations.”
- The enduring value of human “taste”: A key insight was that as AI generates more content, the human ability to judge quality—what was termed professional “taste” and expertise—will become more critical than ever for editing and directing AI outputs effectively.
- Interpreting US policy on AI chips: The members tried to make sense of current US strategy, like the Nvidia chip deal, through the book’s framework. They saw it as an attempt by the US to maintain dominance in the “age of discovery” (creating new AI breakthroughs) while conceding that China may lead in the “age of implementation” (applying existing AI).
- China’s competitive mentality: A lingering question was why China’s relentless “gladiator” business ethic hasn’t yet disrupted service industries like technical writing through aggressive outsourcing and automation, concluding it’s likely not profitable enough yet.
- Change management as a bottleneck: The group concluded that corporate and societal change management is a major bottleneck to rapid AI adoption. The inherent difficulty and slowness of transitioning for most established organizations will likely prevent the kind of sudden, sweeping disruption that many predict.
Transcript
This is a transcript of the book club discussion, with the grammar and readability cleaned up by AI.
Tom: Hi, my name is Tom Johnson, and this is a recording of an AI book club held on August 17, 2025. We’re discussing Kai-Fu Lee’s book, titled AI Superpowers: China, Silicon Valley, and the New World Order. It was originally published in 2018 but is surprisingly relevant and seems to align a lot with today’s events, especially around China, which is why I selected it.
If you’re new to the AI book club or are thinking about joining, you can find more details at idratherbewriting.com/ai-book-club. The site has the upcoming schedule. We typically read about one book a month, generally about AI, trying to align with whatever topics are current and interesting. The book club gives us an excuse to dive into some of these books and have a good discussion with other people who have read them.
Participants don’t necessarily need to be technical writers; you can be from any project, role, or company. You just need to have an interest in AI. Typically, we do have a lot of technical writers because that’s my audience, so you’ll find at least half or two-thirds of the people have tech comm or tech backgrounds. We often bring that lens to try to understand AI in the context of that role and how it affects us.
Again, this is a discussion, so topics are all over the place. There are three people plus me in this book club discussion, and it goes on for about an hour. Thanks for listening.
Tom: I’m excited that both of you came to this book club. Last month was kind of a downer because nobody showed up, but then again, it’s summer and many people are traveling. A lot of people really liked this book this time. Sorry, I’m trying to hold it up… I’m sure it’s backwards, right? That’s why I didn’t even hold it up. Maybe some others will join, maybe not. But hey, three people is enough for a discussion, and I’m hoping to have one here.
So, let’s kick things off. Florence and Molly, what did you think of this book? Did you have any general reactions? Did you like it, dislike it, or think it was dated or still relevant?
Molly: Feel free.
Okay, sure. Overall, I liked it. I learned a lot about things I hadn’t considered. I haven’t really been thinking about what’s going on with China and AI at all, so it was great for that. I got so much out of hearing how Chinese businesses operate—their tactics when they compete with other companies. Some of the stories he told, like the Qihoo 360 vs. Tencent QQ faceoff, were fascinating. Businesses operate a little differently there, and I thought it was really interesting to learn about that.
I did have questions about the timing since it came out in 2018. I don’t know enough about AI to know if what I’m thinking is right, but I don’t know if things are playing out exactly like he said they would. There’s that side of it. I thought his idea about the Social Investment Stipend—I think that’s what it was called—was interesting, but I feel like there are probably holes in that idea that could make it unworkable. I don’t know. But I liked that he gave a lot of attention to the labor question because I feel like a lot of the books we’ve read have kind of given a little shout-out to it and then moved on.
Tom: Those are all great points. I jotted down a few notes to discuss. But before we do that, Florence, what was your high-level reaction to the book?
Florence: I liked it. It caught my attention because, like Molly said, I guess I’m very US-centric and hadn’t considered China’s impact and how AI was developing there. The whole concept of a “duopoly”—this race for AI, which I think I read in your summary—was something new to me that I wanted to learn about. Okay, full disclosure: I’ve gotten through the first half and then just read your summary for the second half, but I did read the last chapter, so I knew where it was going. Right before this call, I asked ChatGPT, “How do the forecasts in this book stand up in 2025?” And it said that they’ve held up pretty well. So that was what interested me.
Tom: Welcome, Remy. I just want to pause and say hi. Thanks for joining. We’re just talking about high-level reactions to the book, and Molly and Florence were going over points that jumped out at them. I thought we should jump off on some of these because they provide excellent starting points, and we’ll ask you in a minute what your high-level reactions are too.
The first one is just the sense of being an eye-opener about China and their competition, dominance, and excellence in this field. I think we are very US-centric. Typically, people… Actually, Remy, where are you? Are you in the US or outside?
Remy: I’m in New York.
Tom: Okay, another East Coaster. We’ve got Boston and New York as well. I think we tend to be pretty myopic about our worldviews in general, and this was an eye-opener for me about China’s extreme competitive dominance in this space. I don’t know if “scary” is the right word, but it’s sobering to think about what it would look like if China wins the AI race. Maybe some people think they’ve already won in many respects.
I started gathering different articles; I just started seeing them everywhere now and I’ve listed a few… let me see if I can share my screen. As I’m keeping my eyes open and reading different news sites, I keep seeing articles like this. “China’s biotech is cheaper and faster”—this was from today—talking about how they’re able to produce so many mainstream pharmaceutical drugs cheaper and faster, which is what so many people want and need. I work in an auto-related domain, so I’ve been following the car industry news for a long time. Basically, the only reason there’s still an American car industry is because we’ve tariffed China’s EVs so much. Otherwise, they would probably decimate the car industry here. The energy sector is also being dominated. We’ve got another article on energy. These are all pretty recent articles talking about how China is really knocking it out of the park. “AI experts returned from China stunned: The US grid is so weak the race may already be over.” It’s talking about how energy, which is needed to power so many data centers and all the AI infrastructure, is just abundantly available there.
So, it’s eye-opening to read this book from 2018 talking about how China has a real chance of not just competing but overtaking the US. And it looks like a lot of this is happening. Are you noticing more articles as you read the news about China’s dominance in different fields and AI?
That was probably the main takeaway for me from the book: to recognize that the world is really probably a duopoly. I was hoping some Europeans would join today; I know we have some people in France who have participated in the past, and I wonder what their opinion is. It’s not like France isn’t in the AI game—they’ve got Mistral and other things going on. But to see the world as a duopoly between China and the US leading the pack, in the same way that corporate dominance is centered on a handful of big, trillion-dollar companies, is kind of unsettling as well.
Anyway, Remy, tell us your high-level thoughts on the book. What did you think of it?
Remy: Nice to meet you all. This is my first AI book club with you, so I’m happy to be here, and thanks for creating the space, Tom. I found it fascinating. I had some of the same reactions in terms of not previously having a deep understanding of where China fit into this puzzle. I really liked the way the author framed a lot of it. I thought it had a nice, warm ending that I didn’t expect, but I agree with a lot of the takeaways about moving to a more humanistic way of life, with human service jobs and constructing a society around supporting each other.
What was most eye-opening for me was understanding how, in previous waves of general-purpose technologies, China wasn’t in a place to meet the moment in the way the US was—but they really are now. What I find most striking is the manufacturing difference. He talks about how you could automate parts of that, and maybe the US will eventually go that direction, but right now it’s so China-dominant that I can totally see the third and fourth waves of AI going that way. This could create a bottleneck or demand a turning over of US society and how we manufacture here.
The other thought I had was about the point of who should “win.” He talks about not framing it as a race with a winner, and I agree with that framing, although I have some pessimism given the state of the international political sphere. There are really significant political and cultural implications to whether China or the US becomes the dominant force in global AI. One thing I was surprised by is that he tried not to go deep on moral judgments of the political systems, but I think you have to in order to consider this critically. There’s a very significant difference between the political systems. My understanding is that part of the reason there’s so much fear and concern about China winning the AI race is because of the implications for global democratic institutions and culture. So, what stuck with me in the end is this really big question mark about what happens if China does overtake the US in those third and fourth waves.
Tom: Wow, you’ve got great insights here.
Remy: Just questions, mostly.
Tom: But that last point about the political impact of China winning and dominating is an interesting thing to think about. It reminded me of that TV show that was popular a few years ago, The Man in the High Castle. I’m not sure if you’ve seen it, but it’s one of those alternate history series. It reimagines how history would have played out if the Nazis had won World War II. We’ve seen other reimaginings like this, too. I think there was one on Apple TV about what if Russia was the first to put a person on the moon.
How would things play out politically if China was dominating in so many aspects? Would we see a decline in democratic institutions? Would people decide that democracies are weak and that we need more authoritarian shaping and government subsidizing and forcing of different economies and markets? It’s interesting to consider how that would play out. And it’s unknowable.
Florence: I totally lost my thought, but first, I think your point about how it impacts our political systems is a good answer to the “so what?” question. When I’m reading “Oh, China is going to become dominant,” I think, “Who cares? So what?” But it’s a good point about it impacting politics and our cultural values. Anyway, I blanked on what I was going to say next, so go on.
Tom: My fear of China “winning” is this: Let’s say they become the dominant automaker because they have much more capable AI-driven cars, or at least cars with superior perception rendering. If that puts many automakers out of business, then my job—documenting map-based APIs for cars—might go away because there’s no audience for our product. It would all be based in China or something. So it would have a real economic impact in a lot of sectors that would affect us.
As Remy pointed out, the author doesn’t want to characterize it as a race where somebody wins. He pivots to a conclusion with his solution of the Social Investment Stipend and UBI, arguing we can’t just compete economically but must move toward this humanitarian, socialist-looking society. I just didn’t really buy that. What did you think about that conclusion, which he comes to after a health scare with cancer? Did you find it compelling, or did it seem naive and idealistic?
Molly: It’s hard to imagine companies all agreeing to transform the way they create value. I liked the idea—it sounded great—but it’s hard for me to imagine us all agreeing to collaborate and cooperate with each other. It’s also easy to imagine something going wrong, where the government decides certain values are important and forces people to prioritize fringe beliefs it’s pushing, creating a new kind of dystopia. I feel like it could definitely go wrong. I don’t have any better ideas, but while I liked it, I’m also not sure.
Tom: I really can’t see myself doing some kind of care profession where I go and help the elderly. It’s just not what I do; I can’t imagine it. But maybe there is some role we could play. And I can see your point about how this could easily become its own dystopia, with the government telling people what to do, people revolting, forming splinter societies, and everything fragmenting.
Remy, what did you think more about the author’s Social Investment Stipend solution?
Remy: Personally, I really like the idea. I think there are lots of questions and implications about the execution, but I actually like the idea more than UBI. As someone who works in nonprofits, I’ve always been very social-impact-minded, so it was a very appealing idea to me: incentivizing supporting each other and community engagement. Our current system often incentivizes the opposite—turning a profit at any expense, unfortunately, often at the expense of other people. So, I don’t think it’s a bad idea, even regardless of the timelines of AI and automation. I think it’s actually a good idea to entertain.
I do think there is a really big question, to Molly’s point about how this could go dystopian. Obviously, anything could, but the question of who decides what is covered by a Social Investment Stipend has really big implications. What qualifies, so to speak? And who decides within and between organizations what kinds of work really need a human in the loop? Because there are always going to be entrepreneurs saying, “Well, we could automate that, we could AI-ify that.” You could say maybe this is where some kind of regulation comes in, whether it’s social regulation or government regulation. But I do think at a certain point it becomes hard to control. And that’s only within a country, not even talking between countries. Then you bring in international competition. If the US theoretically got to some kumbaya, harmonious state where we were all helping each other and getting Social Investment Stipends, and China is not doing that, what does that mean for the state of the global economy and competition?
I really like the ethos of it, and I think it’s a good thing to push for even in our current society—to walk towards something that incentivizes cooperation and social participation. To your point, Tom, there are a lot of people who probably can’t imagine themselves not doing the kind of work they’ve chosen. I see that as a product of our culture; for many generations, we’ve learned to associate our identity so tightly with the work we do. If we shifted to prioritizing the kinds of work he’s talking about, covered by the Social Investment Stipend, maybe two or three generations from now there wouldn’t be a misalignment. People could find purpose and productivity in that kind of work and meaningful connection. But I get how the transition would be rough because a lot of people aren’t there yet—I’m not there yet. It would be a big sea change and would have to happen over time. It would need to be an intentional policy choice that would bear out over time, but it would leave some people behind in the process.
Tom: It’s cool that you are involved in nonprofits because you definitely bring a good perspective to this discussion. I totally understand that so many of our identities are wrapped up in our current work and it’s hard to see ourselves doing any other path. As you say, it doesn’t have to be that way; it may just take time to reset.
Let’s jump more into this idea—not so much the social investment, but the reason why Lee even talks about it and UBI. He predicts that 40% to 50% of people might lose their jobs, leading to massive social unrest. He thinks this is the main threat, more than anything else. He’s not really aligned with the idea that AI is going to subjugate humans and annihilate us. He thinks it will be the social impact of mass unemployment and disruption to the labor force that will create different, almost working-caste classes, and just a lot of people not sure what to do. We already saw in the last election that a lot of people who feel left behind—many of the people who voted more conservatively in places where the economy is languishing—wanted to bring back manufacturing or do something. This created a lot of political division. So this idea that massive changes to our workforce would create a lot of social unrest is pretty compelling.
Do you agree? This book was written in 2018, before ChatGPT even came out, yet it still feels surprisingly relevant. It’s hard to believe. But do you agree with the idea that by 2030 or 2035, we’ll see 40% to 50% unemployment? Or is that a premise you’re unsure about? Perhaps this whole AI thing is overblown and people are overestimating what it’s going to do.
I’m kind of split. I’m not entirely persuaded. I try to use AI as much as possible in my job, but right now I feel it only gives me a 30% or 40% lift and still needs a lot of human steering and direction. The idea that entire jobs would just be automated seems very dreamy and optimistic to me. Any thoughts on massive unemployment to come?
Florence: I gave you full disclosure; I skimmed the second half of the book.
Tom: It’s okay.
Florence: I was just thinking about that. Tom, you yourself said that you can’t imagine working with the elderly. I’m a technical writer, like you all probably are, but I’m also a caregiver to my mother and I have to manage her caregivers. So, I see firsthand people working with the elderly. My daughter just finished an associate’s degree and is looking into healthcare. The book talks about the Social Investment Stipend—the social incentive to do social jobs. I’m thinking the market itself will drive this. The new generation of workers is looking at their future and seeing which jobs are at low risk for AI automation, and I think they will just veer toward those jobs as a market-driven phenomenon, rather than a government-incentivized one.
Tom: Do you feel like your daughter was thinking about AI when she chose healthcare? Was it on her mind?
Florence: No, but because of my experience in technology and AI, I made her aware of that, and it just helped push her more towards a caring, healthcare position. So those are my thoughts.
Remy: I appreciated his nuanced discussion about unemployment. He argued that you can’t necessarily talk about entire jobs being automated, but tasks within jobs. The projection he ends up with is around 38%, but then there’s an extra 10% if you add in some of the kinds of work not considered in other studies. So maybe 40% to 50% of jobs have parts that could be automated. But it’s true that all jobs are a mix of tasks that might be automatable and tasks that can’t, like the human connection piece.
He also talks about the downward pressure on industries. To Florence’s point, if people flood into industries that seem less vulnerable to AI, there will be a lot of pressure on those industries and their salaries. There will be disruption no matter what, basically. I do think that social and economic pressures won’t cause it to happen all at once and may even draw it out on a much longer scale. There are a lot of people in industries that just will not adapt on the timeline that entrepreneurs want to see. We also haven’t answered some of these social, philosophical questions about what we do with that displacement.
So I think all of that will lead to some degree of disruption in most fields and also a lot of very drawn-out negotiation about what actually happens to those jobs. It’ll more likely be the task-centered approach where pieces of jobs get automated. The question is whether there are enough automatable pieces of your job that we could wipe it out altogether. JP Morgan Chase laid off 10% of their workforce in favor of automation, and Microsoft laid off 8% of their global operations workforce. Those jobs are definitely at risk, at least for big tech companies. His upshot at the end is like, there are jobs at the top and the bottom that are safe, but the messy middle—the middle-class bedrock—is actually most at risk because a lot of those pieces might be automatable. So I don’t know what’s going to happen, but I think it’s true that it’s going to be really messy, it’s probably not going to happen all at once, and it’s probably going to cause a lot of upheaval.
Tom: Remy, can I ask what your profession is? Are you a tech writer, a product manager, or something totally different?
Remy: Something totally different. I work at a nonprofit called Compass Pro Bono in Washington D.C. We do pro bono consulting and board matching for other local nonprofits in all kinds of issue areas: homelessness, healthcare, environment, sustainability. I’ve been there four years. I was in a strategy role before; now I’m Director of AI and Thought Leadership. A lot of my day-to-day work is consulting for nonprofits on responsible AI integration. I led that in my own organization over the last two years, so I’ve created a framework and I teach other nonprofits about it and speak at conferences. The thought leadership piece is turning our 25 years of institutional knowledge into education for other organizations. So that’s my day-to-day. I’m fascinated by AI on a personal level and I play with it all the time, but I’ve also thought a lot about it on an organizational and institutional level. I have pretty lofty dreams about how it could change our sector and our culture, but I know there’s a lot in the way of that.
Tom: Cool. Well, we’re glad to have you here. Just to give you a little more background, I’m a technical writer at a big tech company in Seattle. The book club is open to anybody in any role, it’s just my sphere of influence online is mostly tech writers. So that’s usually who gets the word. But I’m excited you’re here because you do have a lot of knowledge, and it’s apparent you’ve thought deeply about this topic. That’s great.
This idea that you and others were talking about hits any role: we have to figure out what profession will be safe from AI. And those professions are basically what AI is weak at: social, emotional connection. In tech writing, I don’t think that exists. Maybe we have empathy for the user, but it’s nothing compared to what other roles might have. You advise different companies on AI things. What would you say to technical writers who are trying to evolve their roles in a way that will be safe from AI—to land in one of those human sweet spots that AI can’t easily automate?
Remy: Well, first I’d be really interested in understanding a little more of what technical writing actually looks like day-to-day because I don’t actually know.
Tom: It’s like user guides for technology products, APIs for example that developers use for integration. I mentioned I do documentation for APIs in cars. You get in your new car, it’s got Google Maps in there—well, how’s it getting that map data? APIs are pulling it in. Somebody has to figure out how to integrate it. That kind of thing. I’m sure everybody here has similar roles. Florence and Molly, you guys are in that space too.
Remy: Okay, super interesting. I’m really glad to be in this conversation because I mostly talk with nonprofit folks, so it’s great to hear other perspectives. I sort of think the philosophical question he invokes a couple of times is really important: “In a world where AI can do much of what humans can do, what does it mean to be human?” I think it’s important to meditate on that on a very personal and organization-specific level. What are the things we need to thrive and do our work and connect in meaningful ways?
For my work and the work my colleagues do, and a lot of the work that nonprofits in our sphere do, I feel it’s pretty buffered because there’s a lot of human-to-human connection. There are a lot of ways they can use AI that they’re not yet, which is why I consult for organizations that are thinking about it. But I kind of see this as an opportunity to redefine our relationship with our work. Maybe that’s easy for me to say because I don’t have a job that feels directly very automatable, so there’s a bias there.
But what I’d be interested in understanding is: when you guys think about your work and the sphere of work around you, if you feel like 60-plus percent of your work could theoretically be automated in the next few years, is there a chunk of your work that still gives you excitement and meaning that you feel is in the other category, the connection category? It sounds like, Tom, you’re saying maybe not, because it’s just empathy for the end user. But for me, the opportunity is maybe there’s a role you play, even if it’s not the social-emotional side. Maybe it’s managing the AI and the algorithms that power that work. In the same way that spreadsheets and calculators fundamentally changed how we did math and data analysis, we didn’t all suddenly become useless. We just ended up managing the systems that managed the data. Not that there was no displacement, but…
Tom: I think that’s a more reasonable forecast for the tech comm role: becoming a manager of the systems and the data. Here’s how I envision my job working within a year: I’ll set up jobs overnight to tackle different documentation tasks. I’ll show up in the morning and review the content that AI has written. I might make changes, perhaps not manually, but as a director saying, “Add more detail here,” “You forgot this part,” or “Make sure you single-source that over there.” I feel like I’m going to have a list of things to review, and my job won’t really involve writing, but more steering what the agents tackle and their priorities. It will be more like a manager of AI outputs, not a manager of people.
But also, there’s a tremendous need—and this is the nut nobody can seem to crack—for these AI systems to be smarter and more intelligent. When people type in something, do they get the right answer? For example, in my scenario, let’s say a developer wants to build a mapping application. In their coding environment, they just type out in natural English, “Hey, I want to embed a map here that has 3D zooms over the Eiffel Tower.” They don’t want to code any of it. What are the chances the AI agent can successfully perform all of that? How do we make sure these AI agents are capable? They have to get the right information. We have to identify the gaps. We have to make sure they’re successful. Being able to plug that gap and make sure they don’t hallucinate or have errors seems like a huge role in the future for tech writers.
I still don’t see the social-emotional part of our job. It could just be me. Molly, Florence, what do you think? Do you think your job will look like what I just described, or will it look totally different?
Florence: I agree with what you described, of managing the AI outputs and plugging the gaps. Being less of a writer and more like an editor. I completely agree with that. I was mentioning how I met you at the Write the Docs conference in Portland, and I spoke with other people there who also echoed what you just said about preventing the AI from hallucinating and plugging the gaps in the AI’s knowledge. That’s what I see my job being in the future.
Molly: I agree. I already find myself managing it. We have it on our knowledge base and in the app now, so it uses the docs I wrote to answer every customer’s question. When I find it saying something that’s absolutely wrong, I have to go find whatever article it’s using or try to figure out how to tweak our docs so that it gives the right answer. There’s a lot of that I’m doing now that I hadn’t been doing before. I feel like that’ll become a bigger part of my role. So, definitely managing AI will become important.
But I have another part of my writing where I still want to do a lot of the initial writing myself. I feel like that’s not going to work well when people expect you to adopt AI more. If I want to connect to a topic and understand it, I need to write about it myself before I get AI involved. I often get AI feedback on my drafts, but I feel like my unwillingness to let go of writing the initial drafts myself is going to work against me in the future.
Remy: I almost disagree, Molly. I do think there will be pressure, but I also think that “taste”—the ability to develop expertise—happens over time and will be a real skill set. When you become an editor or modifier of AI outputs, you need to have that deep understanding to know if an output is good or bad. I’m worried about people just entering the workforce who may never get the chance to build that. For people who are currently in the workforce, whatever your system is for developing taste and an ability to look at an output and say “this is good or bad” comes with lots of experience and whatever process that takes you. So I agree there’ll be pressure, but I almost think in certain ways it’s safer to be someone who doesn’t use AI at every part of the process, but still has the taste to critically judge the outputs of the AI and steer them in the right direction. This is versus someone who hasn’t even entered the workforce and had the ability to develop the taste that’s needed to assess. If you’ve only ever written essays with ChatGPT, you probably don’t know what non-ChatGPT writing feels like. It feels a little bit off because you just haven’t seen and created that much writing that’s been critiqued. Now, I go on LinkedIn, I see a bunch of ChatGPT captions, and I can recognize the patterns because I’ve used LLMs, but also because I developed a taste for good writing. I would imagine the same thing will apply in many professions.
Tom: I definitely agree that without expertise, using AI tools is going to be very difficult. You’ve got to be able to know when it’s off and be able to steer it. If you’re just drawing a blank, it doesn’t work. I found this to be especially true with code samples. I’m not an engineer, so if I use AI to write a code sample, I can kind of see what’s going on, but I don’t have the engineering expertise to thoroughly know if it’s good or bad or even works without sinking a lot of time. So I just don’t even try that; I tell engineers to produce the code. But yeah, expertise is critical. And if the only way you get expertise is by going through some of the writing more manually, that seems valid.
I did want to push back a little bit on an assertion I was making earlier that there was no social-emotional component to the tech writing role, which might be overstated. I do think that writing itself, perhaps more personal, creative writing, does have a tremendous social-emotional component. If I write a blog post on my site, for example, and I just have AI write it, there’s very little meaning behind it and I don’t learn from it. I really use writing as a way to think through something as well as to respond to what other people are writing. It’s very much a social, interactive space, and it requires me to process ideas as well—the emotional impact of ideas. So writing itself is this weird paradox where, on the creative side, it’s very social-emotional. I don’t see it being easily automated; nobody likes AI slop. But in the workspace, with tech docs that are meant to be more Wikipedia-like and factual, with no real persona or voice or creativity, it doesn’t seem to require that same degree of social-emotional commitment or investment, at least for the work I do.
Molly: Interesting. It’s a good point.
Tom: Let’s hit this topic that Dan proposed. Dan couldn’t be here. A couple of people emailed me and said, “Hey, I really like the book, I just couldn’t be here at this time.” One of those people is Dan, and he said he wanted to better understand current events with China, especially the Nvidia deal where the US government gets a 15% kickback on all chip sales to China. How does that fit into the framework Kai-Fu Lee is talking about? We’re reading this whole book on China; certainly that provides some kind of lens by which we can interpret current AI policies with China.
I was scratching my head on this one, so I plugged some resources into NotebookLM and asked it this question. I’m kind of curious to see what you all think. Basically, NotebookLM said the idea is that by not selling the most advanced chips to China, the US maintains its dominance in the “age of discovery” aspect of AI, while conceding that the “age of implementation” can be dominated by China or others. The author separates these two ages. His argument is that we’ve kind of passed the age of discovery and are now in implementation. Ironically, he made this argument in 2018, before ChatGPT had come out in a mainstream way, so I’m not sure if that was shortsighted. But the idea is that there’s discovery and implementation, and if you want to dominate discovery, you need the most advanced AI chips, so the US is going to hold on to those and try to compete there, while still selling other chips and letting other people implement AI. What do you think? Do you have any thoughts on how to interpret current US strategies with China around AI? Is there any kind of enlightened viewpoint we get from Kai-Fu Lee on current events, or is it just unrelated?
Molly: I sort of feel the same way. I feel as if I should have more insights on this, but I don’t really. I don’t know enough about this.
Remy: Yeah, I was really surprised by the 15% deal. I guess not surprised… Well, it was unprecedented to have a deal like this with a private company for the government. But I do think it’s a really hard question: what do you do about chips? There’s some benefit to the US economy if we sell to China, and there’s some competitive risk if we let China develop their own chips. Even though the Chinese government has been pressuring domestic companies to use Chinese chipmakers’ chips, they have all still preferred Nvidia chips so far.
I think it’s a really interesting question. Part of my interpretation of Kai-Fu Lee’s argument is that we’re rapidly moving away from the age of discovery. And part of that might be—and it was 2018 also—that if we didn’t sell Nvidia chips to China at all, it might just be a matter of time until Huawei and other chips are just as good and favored in Chinese companies. So, I see the advantage to the US economy from doing that. The 15% cut is really interesting.
I guess the lingering question for me is what is the actual long-term implication of the cultural-political war between the US and China of doing this? Because this maintains some degree of influence. I think the good thing about it, in a sense, is that if China owns manufacturing in a very significant way but the US owns chipmaking, then China doesn’t have the only upper hand in the third and fourth phases of AI implementation. We could theoretically say, “Hey, we’re not going to sell you our Nvidia chips anymore if you don’t help us manufacture the next wave of AI-native mobile tech tools or perception AI hardware.” I don’t know. That’s a drastic oversimplification, obviously, and China could very easily say, “Well, now we’ve studied your chips and we have the tech, so we don’t need it.” But that’s my high-level thought. I do think it’s good to have some leverage. I don’t know if this is enough to actually have a hand on the scale when it comes to the manufacturing side, but it’s something. So I can see some benefit to doing this, at least for the US economy and cultural dominance.
Tom: It is such an interesting question because there’s no easy answer. If you don’t sell chips to China, you encourage their own independence and effort to build them locally. And if you do sell chips to China, you probably won’t change a whole lot. They see the US as unreliable because we’re constantly changing tariffs and other policies. They already want independence. I’m not sure we’re going to just get them used to our tech stack and then they stop innovating locally. There are so many unknowns. But if there’s one thing that seems clear, it’s that China is a major component of any AI strategy. The fact that they’re so competitive means that the US is trying to get involved. And even though this weird 15% deal seems very corrupt and like the government sticking its hand into business in weird ways, maybe it will incentivize the government to get more involved in AI if it sees more returns on it. That involvement could spur more AI support and development in the same way that the Chinese government got involved and pushed AI because it saw it could return massive benefits as well. I’m not super steeped in politics, so I don’t really have an informed opinion about this, but it is a question Dan brought up and worth looking at in light of this book.
There’s one other theme I wanted to bring up that I couldn’t quite understand very well. Florence, you brought this up at the start. You said it was interesting to see the depiction of the Chinese company mentality—the “gladiator” mentality, the way they integrate from the ground up in all aspects, that they’re not beholden to some lofty mission, but are very focused on what turns a profit. What I don’t understand is why there aren’t more China-based tech writing firms that could easily take any documentation project, turn it around for a fraction of the price, use AI models to clean up any language issues, and just put all US tech writers out of business.
The whole outsourcing idea initially surfaced probably a dozen years ago. People got really scared, but then it didn’t really happen. People tried to offshore a lot of tech doc projects and it didn’t work so well, so they pulled back. But why couldn’t these companies… I feel like the way the author described the Chinese work ethic and mentality—the gladiator, do-whatever-it-takes, work-around-the-clock, full-throttle approach—why isn’t that just putting tech writers out of business here in the US?
Molly: I feel like it could happen eventually, but maybe there are other applications of AI where they can make a lot more money, and there just isn’t enough money to be made by replacing tech writers yet. In startups, I’m the first tech writer at my company, and I think a lot of companies hire them later than they should because they don’t prioritize it. Maybe there just isn’t enough money in it yet to be bothered. But when they figure out the way to do it really cheaply, then sure, they’ll just blow us all out of the water and we’ll all be out of a job.
Tom: Or one of you can create that app first, right?
Remy: I will say, part of the consulting I do has a very significant lens on change management for organizations on AI. And I try to prioritize responsible AI change management, which means human-centric and values-aligned AI adoption. I think for any prediction of mass industry or job disruption, change management is going to be a big bottleneck. This might not be as true for the next two generations of founders who are AI-native, in the same way that tech-native startup founders were just using the internet from the jump. The Gen Z, Gen Alpha, and beyond kids who go into the workforce and start their own companies—which is already happening—AI is just there. They may already be automating things, and it’s not replacing someone, it’s just starting from there. I think that’s going to be more and more common.
But for the vast majority of organizations in this country that employ and contract people, there’s a significant hill to overcome: the bias of having done something a certain way forever. I think we underestimate the extent to which that’s going to slow AI adoption. There’s going to be a lot of disruption when the next generation of founders comes along, but they’re not going to be in the C-suite of most companies for many years. We’ll have maybe the gerontocracy of C-suite leadership over the next 10 or 15 years. And a lot of those organizations and people are just not going to make the shift as quickly as I think we assume because it’s really hard.
Part of the framework I offer is to prevent backsliding. I think a lot of organizations are… like Duolingo and Klarna are two tech startups that laid off a bunch of workers and said, “Oh, we’re going to just automate their jobs,” and then had to rehire almost all of them or other people to replace them because they weren’t actually in a position to do that. Now, maybe two years from now they’ll be able to, but I think there’s going to be more lag and difficulty for a lot of reasons, including what you were talking about earlier, Tom, which is assessing the actual output of the AI. It’s not going to be perfect. So I think there’ll be more lag than people expect in a lot of ways, in part just because we’re used to not working with it. That’s why I have some skepticism around any claim that in the next few years, entire industries are going to be completely wiped out. I do think there are obviously going to be really significant consequences in some industries. We’re already seeing mass layoffs by big tech companies that have the infrastructure to do that on a mass scale and will still be prestigious and able to attract the best talent no matter what because they have so much money. But I don’t think that’s going to be the case for most businesses in this country. I think it’s going to be a much slower roll.
Tom: I can see how your role in change management is probably a pretty hot field right now. That seems like a very difficult transition for companies to make. And in many ways, I’m very glad that companies are slow to change. If they could just pivot overnight to some drastic new model that wiped out half their workforce and they didn’t feel the pain, that would be pretty scary.
Well, hey, this has been a great discussion. I appreciated your insights into this book. I’m glad we read it. I’ll plug the next book in the list. It’s going to be backwards, but it’s Empire of AI by Karen Hao. This is a recent book. It’s kind of similar to the Supremacy book we recently read by Parmy Olson, but it’s a little bit more anti-AI. I shouldn’t say anti-AI; it’s very rigorous, a little more critical. It’s quite a thick book. I listened to half of it on the audiobook so far and I was just like, “We’ve got to read this one. This is a must-read.” It just goes in-depth in so many different areas. The title, Empire of AI, suggests the colonization and conquest by AI companies over so many different facets of life. It is a good book. Even though I’m an AI optimist, this is one we definitely should read.
I will send a little note about that and hope to see you at the next one. Also, these book clubs are recorded. If for some reason you don’t want me to share the recording on my site, let me know. But hopefully, you’re okay with that. I really appreciate your participation and your forthcoming thoughts and reactions on this book. So thanks again. Any last thoughts that anybody wants to say before we close up?
Remy: Nice to meet you all. Really appreciate the conversation. Looking forward to more.
Tom: Thanks for coming.
Molly: Yeah. Bye.
Tom: Bye. Take care.