AI Book Club recording, notes, and transcript for Ethan Mollick's Co-Intelligence

by Tom Johnson on Dec 17, 2025
categories: ai ai-book-club podcasts

This is a recording of our AI Book Club discussion of Co-Intelligence: Living and Working with AI by Ethan Mollick, held Dec 14, 2025. Our discussion touches upon a variety of topics, including the educator's lens, cautious optimism, the jagged frontier, personas, pedagogy, takeaways, and more. This post also provides discussion questions, a transcript, and terms and definitions from the book.

Audio-only version

If you only want to listen to the audio, listen here:

Book review

Here’s my book review: Book review of “Co-Intelligence: Living and Working with AI” by Ethan Mollick – an educator embraces experience-based AI learning.

Discussion questions

Here are some of the discussion questions I prepared ahead of the discussion. We talked about some of these issues, but not all.

  • A frequent question I receive about AI is what tasks it’s best suited for, i.e., what is AI actually good at? What do you think of Mollick’s answer to this question? (Bring AI to the table for all tasks…) In your own work, what does the jagged frontier look like? Is there a jagged frontier for tech comm?
  • Do you find yourself acting more like a centaur or cyborg?
  • Do you agree with the technique of using personas? Does this technique risk increasing AI hallucination because the AI is putting on a false persona, or does it set needed expectations about the rhetorical situation?
  • Do you think Mollick’s recommendation for educators to assign more difficult projects to students is a good solution to the homework apocalypse? What would you tell educators about how to use or not use AI in school?
  • Do you think Mollick’s observations are a lot less radical or doom-filled because he grounds so much in practical experience?
  • Was I wrong to push back against the characterization of AI’s output as “alien minds” so much in Harari’s book, given that Mollick also characterizes AI as alien?
  • Do you think using AI as a sounding board, particularly as an alien mind that can embody and communicate different perspectives that you assign it, is especially useful? Isn’t this what some tech writers have done in evaluating docs using different personas?
  • Are we exhausting the subject here and approaching the end of ideas, or do you think there is enough unexplored landscape to justify reading many more general books on AI? I don’t want to keep reading the same themes over and over.

NotebookLM

Here’s access to the NotebookLM with some of the Co-Intelligence notes.

Transcript

Here’s a transcript of the discussion, with some grammar slightly cleaned up by AI:

0:03 - Tom: You’re listening to a recording of the AI book club. This book club session covers Co-Intelligence: Living and Working with AI by Ethan Mollick. This was a New York Times bestseller published in 2024. In this discussion, we have about seven or eight people informally talking about parts of the book we liked and parts we disliked. We cover topics such as the educator’s lens—the author is a Wharton Business School professor—the sort of cautious optimism that the book takes, the jagged frontier, and so on. A lot of good stuff. It was a lively discussion. If you’re interested in participating in the AI book club, go to idratherbewriting.com/ai-book-club.

0:58 - Tom: You can find details about the schedule, how to join, and how to see the meetings on your calendar. The next book that we’ll be reading for January 18, 2026, is God, Human, Animal, Machine: Technology, Metaphor, and the Search for Meaning by Meghan O’Gieblyn. And it’s an excellent book. I’m about a third of the way through. Much more philosophical. I hope you enjoy this book club discussion. Would love to have you join us. But also, if you just check out the resources that we post and participate that way, you’re fully welcome to do that. There’s also an online Slack where you can chime in with your thoughts and questions. My name is Tom and again, there’s probably seven to eight people here chatting about Ethan Mollick’s Co-Intelligence.

1:52 - Tom: Okay, let’s get going on this book. So, Co-Intelligence by Ethan Mollick. [Holds up book] Oh, you can’t see because I’ve got—there we go. If I hold it real still. What did you think of the book? Anybody want to get going? Did you like the book? Did you dislike the book? Where does it measure on your level of likability?

2:21 - Lois: So, I mean, I liked what he had to say. He was engaging enough to listen to, but I think what you’re saying is more or less the case. It started to feel like it was just repeating a lot of old themes, and there wasn’t really that much that was new here. I mean, the fact that the book is over a year old probably doesn’t help either.

2:46 - Tom: Yeah. I wanted to like the book more. I mean, it got really high reviews and it’s been widely read, but I’m wondering if maybe its peak was a year ago instead of now, and maybe a lot of the ideas are things we’ve already heard or read. Especially the centaur/cyborg idea, which is apparently one of the biggest contributions to the AI conversation. Ethan Mollick was one of the researchers on the paper that introduced that metaphor. It’s something I’d already come across and sort of written about. But yeah, the cyborg and the centaur.

3:24 - Tom: Centaur referring to kind of switching off between, “now I’m going to use AI, now I’m going to be human,” and then the cyborg being much more intertwined and continuous kind of integration of AI into tasks. What did you think of that metaphor? Was that something new you hadn’t heard or is that one where you’re like, “Oh yeah, heard that along with the jagged frontier.” Any thoughts on that? Are you a cyborg, a centaur, or is that too Greek of a reference to be meaningful? I assume it’s a Greek myth, the centaur, right? Or is that a Roman thing?

4:03 - Daniel: It’s Greek myth. I actually hadn’t heard that before, or the jagged frontier metaphor. But I thought they were useful, although it made me think that I haven’t analyzed my own use of AI enough to really dig down into which I am. And so it reminds me maybe I need to be more reflective on how I’m using AI. In some ways, it just feels very—I just use it when I use it, then I’ll put it away. I should maybe be more intentional about it.

4:44 - Daniel: So, maybe I’ll say a little bit more about how I felt about the book. I have two feelings. On the one hand, I definitely agreed that having read half a dozen other books about AI, almost all of this felt like a rehash. At first, when it was giving the overview of what AI is and the history of it, I was like, “Oh my god, is the whole book gonna be like this?” But then, what I liked about it was that it really focuses on practical experience and actual practice, particularly in the classroom. I really liked that part. As a former educator and college instructor who now works at an ed-tech company, I really connected to his chapters and his examples about education. I thought chapters like “AI as coach” and “AI as tutor” were very useful and a helpful way of orienting my mind.

6:02 - Daniel: I actually saw him in person—my company had a conference over the summer, and Ethan Mollick was the keynote speaker. I got very inspired reading this book, the same way I felt during the keynote, because the keynote felt very “shock and awe.” He’s like, “I’m gonna do this, I’m gonna do this,” and he’d have like four things open at once. He’d be building sites, he’d be doing all these examples. It made me really want to teach again. His ideas about flipped classrooms and stuff—which are not new with him—made me want to go back to the classroom and teach. I felt very excited about that. It wasn’t necessarily that he was coming up with a lot of super new concepts, but just how he was applying them as a teacher is what I really liked.

7:00 - Daniel: He got referenced in John Warner’s book as someone whom John Warner really respects and thinks of as a serious researcher, but who he totally disagrees with. John Warner obviously has much more negative views of AI. So it was very useful to look through a pedagogical lens, through an educator lens. I got really into it even though a lot of the stuff didn’t feel super new.

7:32 - Tom: Well, great comments. And well, we’ve got a lot of points to jump off on there. Sharon, you had your hand raised, but then you brought it down. Did you still want to comment?

7:38 - Sharon: No, I mean, I guess I was just super interested, Daniel, in what level of education you were at. I haven’t read a whole lot of AI stuff, but I felt he was a little overly optimistic, in terms of—I mean, it terrifies me that people aren’t going to be reading anymore because they don’t need to, because they can throw up an essay using AI. And researchers are learning so much more now, especially with adolescents, about how AI and social media together are really reducing the growth of parts of the brain. So for me, I came away being very concerned. So I was interested in what he said that was so exciting to you.

8:30 - Daniel: I think just the—a lot of people talk on a high level about like, “it’s going to reshape education.” I liked how he talked a lot about his Wharton class and the things he made them do, or the ideas about how you can apply it through flipped classroom learning. How it’s making lectures go away—which they really should. Lectures are not an effective way of teaching students. They take a long time to make. No one wants to hear them.

9:02 - Daniel: As someone who’s also a creative writer, a lot of creative writing classes are through workshops and talking to each other and working on each other’s work as opposed to passive learning. That should be used more in writing composition, which is what John Warner taught. It’s the baseline—freshman writing, college writing—everyone has to take it, but it’s a very broken paradigm. Thinking through how that can be transformed—it made me want to have a free class where I teach people and try things. I got very excited about that.

9:47 - Daniel: I was talking to a friend from my old PhD program on Friday, and most educators in the humanities hate it. She was surprised that I said I’m more pro-AI than most writers and teachers. I’m also more skeptical than most people in software in the tech industry. There are so many reasons why AI can be very bad for adolescents, for education, to say nothing of climate and the things that idealistic young people think about. But I really liked how Mollick put into practice the stuff he was talking about, as opposed to talking about it in this kind of “airy” way that CEOs do.

10:53 - Nathan: Can you hear me now? [Yes, we can hear you.] Those are all really interesting points you’re making, Daniel, because I used to teach as well. I wasn’t familiar with Ethan Mollick, but I was engaged by the book. Somebody in this group suggested following him on LinkedIn, which I started doing, and he posts a lot of interesting stuff multiple times a day in this AI and education space.

11:22 - Nathan: When I was teaching, people would cheat. It was pretty obvious, and it’s like, well, why are you doing that? In a lot of ways, education is about getting the credential, right? You can’t force somebody to be curious. But for Mollick’s chapters on AI as tutor and AI as coach—which I think conceptually are similar—you have to want to do it. AI can pass the essay exam all day long, but you have to be curious enough to develop your own thoughts in a way that makes AI actually useful. He makes this point somewhere in the book: it kind of amplifies our instincts, good and bad. If you want to use AI to cheat through the rote education credentialization, you can do that, or you can use it to learn. I feel both skeptical and optimistic, for both of those reasons.

12:38 - Tom: It’s a double-edged sword, I guess. Debbie?

12:44 - Debbie: Yeah. Overall, I thought this book was okay. I appreciated that this is somebody who is tech-adjacent, not really fully in tech, who is talking about AI while actively using and engaging with it. It’s tough for me to stomach very harsh critiques otherwise—how do you really know if you’re not engaging with it? Whereas last book club, I didn’t get the sense that Yuval Noah Harari was using it or had any experience with it from a user perspective. So I took his slightly more “doomer” outlook with a grain of salt.

13:40 - Debbie: So, I really appreciated that in this book. And yeah, just like Daniel and Nathan, the education chapters were the most intriguing. That was the value-add of the book for me. It got me thinking—I have an eight-year-old son. It’s not really come up in his classroom yet, but I know this is the kind of world he’s going to inherit. The concept of AI literacy really jumped out to me as something really important to enforce.

14:32 - Debbie: And also, similar to Daniel, I’m also a creative writer. But I also work in tech, so I feel very in the middle. Compared with my writer friends, I’m the overly optimistic one; everybody there is very doom-focused. Whereas some of the people in tech that I work with at my day job are a little bit more optimistic than I am. I am more interested at this point in cautious optimism and cultivating that rather than living on either of the extremes. I don’t think long term it’s a healthy thing for me. We’re not going back. You don’t want to read book after book that just slams AI while going to a job where you use AI all day.

15:50 - Tom: Definitely not. Yeah. Legit.

15:59 - Le: Yep. Trying to find the unmute button. I echo all of the sentiments. I probably haven’t read as many AI books as you all have. I have read Empire of AI, which I felt was a lot of doom. But I felt the optimism of this book was actually really refreshing. I should add, I haven’t finished the book. I just started “AI as Co-worker.” So all of the chapters you all are describing—tutor, coach—I haven’t gotten to yet. That’s exciting to hear.

16:42 - Le: One thing that particularly stuck out to me—I noticed halfway through the book that he’s using AI in his writing, and then he’s like, “Gotcha. You see what I just did there?” I thought that was really clever, where he’s highlighting the importance of AI literacy. I think he was giving the example of a lawyer who used AI in a case and then the judge found out, and then he later reveals that he’s been telling the story to us using AI, which is citing references. Anyway, that chapter was very meta for me and it stuck out as a point of AI literacy. I’m reading for multiple pages thinking that this is true because I’m reading a book from a Wharton professor, and then he later injects that he’s using AI. It really highlighted how fallible we are and how important AI literacy is.

17:48 - Le: And then the chapter on AI creativity actually kind of changed how I think about AI art. Like most people, I have this knee-jerk reaction where I see AI art and I’m like, “that’s not art.” You didn’t actually spend hours fine-tuning your craft. But he mentions how when we think about creativity as humans, part of it is just the thrill of using a new tool and getting into a state of flow. The output, though it’s using AI, is tapping into the creative sense of whoever’s using it, and that output is still a representation of their creativity. I thought that was a healthy way of shifting how we think about AI art, because whether we like it or not, these tools are here to stay.

18:52 - Le: It kind of provokes some of these philosophical questions of what it means to be human and how we use these tools. I’m sure many of you have seen some of the mergers and acquisitions happening in entertainment with Netflix and Paramount. Netflix particularly is looking to use AI tools to make synthetic media. Disney is having this big IP showdown of “what is pre-AI versus post-AI creative world.” Anyways, this is getting a little bit rambly, but I felt that framework of challenging how we see AI creations and human creativity was just really thought-provoking. So I could appreciate that.

19:51 - Tom: Well, we’ve got a lot of great themes emerging here between the education, the creativity, the optimism. Let’s hear from Jessica.

20:07 - Jessica: Hi, I’m Jessica. I’m new and I’m actually at the same point in the book as you, Le. I agree with what everyone’s saying about how broad and maybe slightly outdated the content is. For me, that’s sort of a plus because I haven’t done all the reading y’all have. If we’re using the metaphor of a cyborg versus a centaur—sorry, blanking on the word—I am like a person leading a recalcitrant donkey. I’m not even combined. I’m early in my stages of using AI and my comfort level.

21:05 - Jessica: So for me, this is a perfect introduction because it’s giving that background, even if I’m noticing things of like, “Wow, this is pretty outdated. It couldn’t even write code at the time.” But I do appreciate the positivity also because I’m getting a lot of doom and gloom from the news, but being encouraged strongly to use it more in my work. Coming from an education background—well, I don’t have one, but the way he approaches it, maybe the way he would teach a college course with examples and thought experiments, is a way that feels comfortable for me to learn.

21:57 - Jessica: I don’t have any blinding insights, but I do appreciate what you were hinting at, Le, of him pointing out that human creativity doesn’t just spring out of each individual’s brain. There’s the content of the past that we all pull from. We’re not individual genius brains coming up with new things because we’re all building off of what came before us and analyzing patterns whether we realize it or not. So yeah, I’m just curious to read more. I have to think of this as a tool and not the enemy.

22:36 - Tom: Thanks for adding your perspective. I just wanted to jump in and add a quick thought on the educator lens and this theme of optimism. It seems a lot of people appreciate the fact that this did take an optimistic approach. Daniel brought up at the start that this is an educator. I didn’t fully register that fact and how against AI most academics tend to be, which makes his perspective even a little bit more radical than normal.

23:10 - Debbie: Kind of going off of Le and Jessica’s point about the creativity chapters, it got me thinking—I remember when he brought up the calculator and the moral panic around it. It brought me back to taking the longer view of technology and innovation for human beings as a civilization. It got me thinking about photography. I studied art history in college, and we studied a lot about how people are still painting. There are still contemporary painters. Museums are still filled with classical painted works. It’s just that the mediums have shifted.

24:21 - Debbie: And photography became a whole discipline. Same thing with graphic arts. I can’t draw. I used to do a little bit of graphic design. Having access to Adobe Illustrator really democratized some of that creativity for me. But at the end of the day, taste is still taste. If it’s good, it’s good. I am willing to critique a work of AI-generated art for its value the same way that I am willing to evaluate a computer-generated band poster. So I’m trying to remain open to how this evolves over time, trusting that there will be natural checks and balances.

25:46 - Tom: Well, interesting. So your background is in art history? Is that what you said?

25:52 - Debbie: Yeah. I worked at a museum for a bit in the curatorial department after college and I was applying to PhD programs. Then I was like, “You know what? I don’t know if I can do this very narrow thing for the next seven to 10 years.” This was also 2008, so everybody was panicking. It led me on a different path into software and then eventually into technical writing.

26:32 - Daniel: My wife started out in museums after college in 2008 too. She became a museum educator. Just an aside—but to respond to what you were saying about creativity, the thing I was thinking about was more what he was saying about it being a great leveler. Using AI can make the lowest-performing people pretty good. And the people who are the best at it, maybe it helps them a little bit, but it narrows that gap. From a pedagogical standpoint, that’s huge.

27:18 - Daniel: At the same time, as someone who is an expert in writing, it bums me out. I’ve spent so much time doing this thing that I think is very meaningful and special. Now everyone can do it. I have used AI with my creative writing, not so much to make it write in a certain way, but more as a research assistant to find information. It just makes that labor less valuable economically.

28:16 - Daniel: I’m also a poet, so I never expected to make any money from that. It doesn’t bother me as much. But I have a friend who’s a pretty well-known painter who quit his teaching job so he could live off the paintings he sells. If the New York Times just starts not hiring artists and using AI, or Netflix using AI instead of hiring writers, that is not good for the artists. Art is always going to change and engage people in different times and cultures, and it always goes back to the worker thing. Mollick talks about that, and he is pretty positive about it.

29:34 - Daniel: He says expertise is still going to be needed. He’s like, “We’re going to have all this time now.” And I’m like, “has that ever happened?” Everyone’s just going to get busier with other stuff. Tommy, you were writing about this. Everyone’s just going to have more work to do. I do appreciate what Le was saying about people being able to make art and have the enjoyment of creating things. The flip side is, of course, does it make my work as an artist less valuable?

30:51 - Tom: I wanted to ask something about the educator angle. Mollick talks about how he’s giving his students really difficult tasks or telling them, for example, if they’re not a coder, they have to build an app. They really have to push the limits into a new level of impossible. That’s something I’ve been thinking about—what if you give a high schooler a really difficult task that they couldn’t do unless they were using AI? Is that going to be beneficial?

31:42 - Tom: In my own work as a technical writer, I’m also trying to push the boundaries. That mostly has just kind of prompted me into doing more “vibe coding” and generating a bunch of code that I don’t understand or that I don’t know if I want to take the time to try to understand. Is that valuable? The app I’m working on is something that will produce a PDF of any list of topics you give it. Is that a valuable output? I think we’re going to go more cross-domain and pull it together into more difficult projects.

32:37 - Tom: I’m still trying to figure this out because my kids, they hate AI. They’re really outspoken about its negative impacts. I can’t help but think they’re just getting that from the academic environment and what the teachers are communicating just because it’s such a threat to the academic model. Sharon?

33:09 - Sharon: I’m actually working on a book, but I’ve hired an illustrator. The other day she gave me an illustration that I didn’t quite like, so I uploaded it to AI and said, “well, what if you did this and that?” It’s actually still not very good at giving me what I want. But I sent it back to her and she was like, “Oh, no. I never want my illustrations uploaded to AI at all.” Even though I wasn’t going to use this in the book, I was just brainstorming. For her, that was just awful. I really felt for her. She’s just feeling so threatened by AI.

34:02 - Sharon: I’m a writer. I’ve put in all this time, my whole life, becoming a good writer and being sought out for being a writer, and all of a sudden everybody else gets to have this skill. I just feel kind of bereft, losing this skill that’s been special my whole life.

34:30 - Tom: Sharon, do you feel like you’re going to be forced to level up and become significantly better than what AI-written content can produce?

34:41 - Sharon: I’m not really—I’m kind of done with writing, in a way. So for me, that’s not so much of a concern. I’m turning to different avenues. But other people have put their lives into being an illustrator or whatever—so far AI cannot replicate what she’s doing, but in the future, that might be the case. Nathan?

35:43 - Nathan: Oh, hey, can you hear me? I was thinking about your comment, Tom, about using AI to level up. Mollick is a business professor, and we’re all in this world of tech. Leveling up in that context is like your example: can you make an app? I have an app idea, but I have been coming across the thought: “does the world need another app?” Are apps going to solve our problems?

36:15 - Nathan: There are other models of education, like service learning, where you’re getting out in the community and helping people, learning how things work in the real world outside of that “aquarium of education.” I think there’s probably a place for AI there. We’re kind of working almost in this old economic order where the genius creates the app and makes a lot of money. But as making an app becomes cheaper, that economic model might just be slipping away into another time. If we can make our own apps to do stuff, are they really that valuable?

38:05 - Le: Not to be the contrarian here, but I actually disagree with Nathan’s perspective because I am not a writer. It’s something I’ve struggled with. My background is in computer science and I’m a tech person. I actually had similar sentiments seeing all the vibe coding tools, thinking, “wow, I spent many years learning object-oriented programming and now everyone can do it.”

38:43 - Le: But not being a writer, using AI has really helped me express my ideas and given me the confidence to share thoughts with the world. To your point, Nathan, “does the world need another app?” I would challenge that to a group of writers and say, “Well, does the world need another book?” And I would say yes, because by virtue of us picking up this book, it is enabling us to reflect and grapple with concepts in a new way.

39:48 - Le: It’s less about the tool and more about the application. Is that book providing new thought-provoking value? I think there’s an overall net good of authors who can’t code being able to write apps, or people who are coding being able to write articles. Mollick touches on the importance of cross-disciplinary thinking. I think that’s really important to hold on to as we worry about our jobs being threatened. They will need that skill and they’ll need your unique voice.

41:42 - Lois: Yes, I read the study showing that AI reduces the gap between programmers: those who were not very good before are now good. But I wonder if that’s conflating two things. What makes a good employee is someone who’s diligent, persistent, and takes a project to completion. Are they really saying that an AI tool will give an employee who doesn’t have those characteristics those characteristics? I’m just a little curious about how they’re grading the quality of employees, or are they just looking at a simple programming task?

42:43 - Tom: Yeah. All these studies, when you start to look into the details—what was the task, what domain was it—it seems a lot less impressive. None of them are really focused on technical writing. But yes, good point about how there’s a holistic view that might be missed. Is that all, Lois?

43:15 - Lois: I guess what I’m finding is that with vibe coding at work, I am doing tasks that supposedly our DevRel team has known about for years that needed to be done, but somehow never got around to doing and I’ve been able to do them. So it feels like I’m not bringing something into existence that doesn’t need to exist.

43:48 - Tom: Yeah. It’s like you have a document engineer dedicated to focusing on stuff that you maybe have always wanted to do.

44:03 - Daniel: I think I’ll respond quickly to what Lois was saying. It’s important to keep in mind that holistic idea of quality workers, but there is something to be said if you feel like you’re not good at something and suddenly you have this way to be better at it. It does feel really good. Writing is really hard and can be very painful when you don’t want to write, and then suddenly you have this ability to articulate yourself. That is exciting.

45:19 - Daniel: There are plenty of people who don’t think there needs to be any more books. My take on it is—and this is actually something that we tell writing students all the time—don’t think about the product, think about the process. The thing that you get from it is this process of creating something, of working with something, and experimenting. It involves failing all the time. The fact that you’re vibe coding and it’s half-broken—what you are doing is getting into this process of working with AI to do something that you don’t think you can do. Three years down the road, you’re going to be able to do things that people who never tried it aren’t going to be able to do. It’s the process.

46:21 - Le: I just have a comment—100% yes, I agree. In college, I didn’t really write any essays because I studied computer science. Ethan Mollick calls it the “button” where you have a blank page and you don’t know where to start, so you hit the button. I blurt a bunch of words at AI, I hit the button, and then it gives me somewhere to start. About the process, I feel like I’m becoming a better writer just having something to work with.

46:59 - Le: For example, AI is extremely verbose. I’ve discovered the power of short sentences. You can say a really complex thing in a few short sentences really elegantly, but AI doesn’t do that. Through the process of refining the output, I feel like I’m becoming a better writer because it gives me somewhere to start. Maybe one day I won’t have to hit the button. The process is the most important thing, it just looks different with AI. And that is kind of liberating. I don’t need a teacher. I can start to learn by starting somewhere.

48:22 - Tom: It’s an interesting thread about jumpstarting the process. To Le’s comment about “yet another app” being like “yet another book”—at my work, there are so many apps that people have developed that there’s a palpable app fatigue. We have a bi-weekly AI group among technical writers, and every time somebody introduces a new tool, it’s just like, “oh, yet another tool to learn.”

48:56 - Tom: It’s crazy. Two months ago, people were over the moon about Gemini CLI. This person went on vacation for a couple weeks, came back, and there was a new tool—basically the internal equivalent of Antigravity—and people are just going nuts about Antigravity. You’d think the rapture happened. This person comes back and they’re like, “Oh my gosh, I’m gone two weeks and suddenly the tool that was in fashion is now outdated.” It’s just this onslaught.

49:38 - Tom: Part of the insanity is that we have this giant monorepo. All code goes into the same base and these tools now can query the codebase. They learn from it. If it doesn’t work, it tries to build it, troubleshoots the failure, implements a fix. There’s this virtuous cycle that’s happening. The more you build, the easier it gets to build.

50:11 - Tom: I wanted to add one little writing technique I have learned. I will often write a first draft of something—just my own typing away thoughts—and then I will take this draft, plug it into Gemini Deep Research, and have it kick off some research about the topic. It will bring back a bunch of sources. I’ll take and read the sources, but I also just delegate: “Hey, find an interesting quote from these sources, splice it into my original draft where you think it’s relevant.” This sort of “diglossia” really adds to an essay.

51:13 - Tom: If you’re writing about a book, I always try to find the PDF of the book, upload it into NotebookLM, and make sure that when I quote somebody, I can check it against that rendering of the PDF. I find that I’m pretty sloppy. I’m not that exact in terms of my memory. As I was telling people this at work, they were like, “You know what? You could be losing your critical reading skills if you just sort of do this.” There’s value to struggling through a text and trying to piece together the argument. But man, I have lost that patience. I know an AI can just summarize it in a simple way for me, and it’s hard to force myself to do it the hard way.

52:08 - Tom: Anyway, we’re getting close to the end of our time. Let me just quickly plug the next book: God, Human, Animal, Machine by Meghan O’Gieblyn. It’s somewhat philosophical about what it means to be human—gets into Descartes and the soul. I’m trying to piece together a reading list for 2026. If you have recommendations, let me know. People have voiced that “cautious optimism” is a theme that’s very welcome. We did hash through Empire of AI and it was great, but I don’t know how many more books I can read that are going to just make me feel bad. Le?

53:50 - Le: Yeah, I mentioned this is my first time here. Do you guys read exclusively AI-related books or general technology-related books?

54:02 - Tom: The book picks started out pretty focused on AI, but I’m broadening the topics because otherwise we’re just rehashing common ground. I thought Breakneck would be more AI-related, but it’s more about the competition with China, a geopolitical commentary. I do want AI to be a theme, but I want to broaden it. Careless People—there’s nothing about AI there, but I still think that’s a worthwhile read. Careless People is to social media what Empire of AI is to AI. It looks at the organizations that built the technology and how their culture leads to some lost idealism.

55:22 - Daniel: I do think it would be worth broadening the focus a little bit. I would count the book we’re going to read next because it seems like a more philosophical book about human identity and consciousness. I’m currently interested in that. I also put in a book a colleague suggested to me, More Everything Forever. I have an appetite for some more “feel-bad” books because we’re in a vexed situation and I like thinking through the vexed-ness.

56:19 - Tom: Cool. Well, hey, thanks for the great discussion today. So many insights and I love to see the energy here. Thanks for coming and participating. I do record these and send these out. People who are just listening and catching up also benefit. I get a lot of feedback from people saying, “Hey, can’t make the book club, but I like the notes.” Have a great rest of your day. Thanks everybody.

About Tom Johnson

I'm an API technical writer based in the Seattle area. On this blog, I write about topics related to technical writing and communication — such as software documentation, API documentation, AI, information architecture, content strategy, writing processes, plain language, tech comm careers, and more. Check out my API documentation course if you're looking for more info about documenting APIs. Or see my posts on AI and my AI course section for more on the latest in AI and tech comm.

If you're a technical writer and want to keep on top of the latest trends in tech comm, be sure to subscribe to email updates below. You can also learn more about me or contact me. Finally, note that the opinions I express on my blog are my own points of view, not those of my employer.