AI Book Club recording, notes, and transcript for Ethan Mollick's Co-Intelligence
Audio-only version
If you only want to listen to the audio, listen here:
Book review
Here’s my book review: “Co-Intelligence: Living and Working with AI” by Ethan Mollick – an educator embraces experience-based AI learning.
Discussion questions
Here are some of the discussion questions I prepared ahead of the discussion. We talked about some of these issues, but not all.
- A frequent question I receive about AI is what tasks it’s best suited for. What do you think of Mollick’s answer to this question? (Bring AI to the table for all tasks…) In your own work, what does the jagged frontier look like? Is there a jagged frontier for tech comm?
- Do you find yourself acting more like a centaur or cyborg?
- Do you agree with the technique of using personas? Does this technique risk increasing AI hallucination because the AI is putting on a false persona, or does it set needed expectations about the rhetorical situation?
- Do you think Mollick’s recommendation for educators to assign more difficult projects to students is a good solution to the homework apocalypse? What would you tell educators about how to use or not use AI in school?
- Do you think Mollick’s observations come across as less radical or doom-filled because he grounds so much of them in practical experience?
- Was I wrong to push back so hard against the characterization of AI as “alien minds” in Harari’s book, given that Mollick also characterizes AI as alien?
- Do you think using AI as a sounding board, particularly as an alien mind that can embody and communicate different perspectives that you assign it, is especially useful? Isn’t this what some tech writers have done in evaluating docs using different personas?
- Are we exhausting the subject here and approaching the end of ideas, or do you think there is enough unexplored landscape to justify reading many more general books on AI? I don’t want to keep reading the same themes over and over.
NotebookLM
Here’s access to the NotebookLM with some of the Co-Intelligence notes.
Transcript
Here’s a transcript of the discussion, with some grammar slightly cleaned up by AI:
[0:03] Tom Johnson: You’re listening to a recording of the AI Book Club. This session covers Co-Intelligence: Living and Working with AI by Ethan Mollick, a New York Times bestseller published in 2024. In this discussion, about seven or eight of us are informally talking about parts of the book we liked and disliked.
We cover topics such as the educator’s lens—the author is a Wharton Business School professor—cautious optimism, the “jagged frontier,” and more. It was a lively discussion. If you’re interested in participating, go to idratherbewriting.com/ai-book-club for the schedule and joining details.
[1:04] Tom Johnson: Our next book for January 18, 2026, is God, Human, Animal, Machine: Technology, Metaphor, and the Search for Meaning by Meghan O’Gieblyn. It’s excellent—much more philosophical. We also have an online Slack where you can chime in. My name is Tom Johnson, and we have a group here to chat about Mollick’s work. Let’s get going. What did you think of the book?
[2:21] Lois: I liked what he had to say, and he was engaging, but it started to feel like it was repeating a lot of old themes. There wasn’t much new here, and the fact that the book is over a year old probably doesn’t help.
[2:46] Tom Johnson: I wanted to like it more, too. It’s been widely read, but I wonder if its peak was a year ago. I kept coming across the “Centaur vs. Cyborg” idea, which Mollick helped popularize. The Centaur approach involves switching between human and AI tasks, while the Cyborg approach is a continuous, intertwined integration of AI. What did you think of that metaphor?
[4:03] Daniel: I actually hadn’t heard that or the “Jagged Frontier” metaphor before. They were useful, though they made me realize I haven’t analyzed my own AI use enough. I need to be more intentional.
On one hand, much of this felt like a rehash, especially the history of AI. But I loved the focus on practical experience, particularly in the classroom. As a former college instructor now working in ed-tech, I really connected to the chapters on AI as a coach or tutor.
[6:02] Daniel: I saw Ethan Mollick give a keynote this summer. It was “shock and awe”—he had four things open, building sites and doing examples in real-time. It made me want to teach again. Even though some concepts aren’t brand new, his application of them as a teacher is what I liked. He’s a serious researcher whom even critics like John Warner respect.
[7:32] Sharon: I found it a little overly optimistic. It terrifies me that people might stop reading because they can just use AI for essays. Research shows that AI and social media use in adolescents can reduce the growth of certain parts of the brain. I came away concerned. Daniel, what was so exciting to you?
[8:30] Daniel: I liked the specificity. He talks about his Wharton class and how AI can enable “flipped classrooms.” It makes lectures—which aren’t an effective way to learn anyway—go away. In creative writing, we use workshops; that active learning should be used in freshman writing, too.
Most of my friends from my PhD program hate AI. They were surprised I’m pro-AI. I’m actually more skeptical than most people in software, but I’m more optimistic than most teachers. I like how Mollick puts things into practice rather than just talking in an “airy” way like some CEOs.
[10:53] Nathan: I used to teach as well. You can’t force curiosity. If you want to use AI to cheat through a credential, you can. But if you are curious, AI as a tutor or coach is incredible. It amplifies our instincts—both good and bad.
[12:44] Debbie: I appreciated that Mollick is actually using the tools. In our last book club on Yuval Noah Harari’s Nexus, I didn’t get the sense Harari was using AI from a user perspective. Mollick’s education chapters were the core value for me. I have an eight-year-old son, and “AI literacy” is something I want to cultivate in him. I’m interested in cautious optimism rather than the extremes of “doom” or “utopia.”
[15:59] Le: I haven’t finished the book yet, but I appreciate the optimism. One thing that stuck out was when he used AI in his own writing and then did a “gotcha” to the reader, pointing it out later. It highlighted how fallible we are and why AI literacy is vital.
Also, the chapter on creativity changed my mind about AI art. We often have a knee-jerk reaction that “it’s not art,” but he focuses on the “flow state” the creator enters. The output is still a representation of their creativity.
[19:51] Jessica: As a newcomer, I’m early in my AI journey. If the metaphor is Cyborg vs. Centaur, I’m a “person leading a recalcitrant donkey.” For me, this was a perfect introduction. I also appreciate the reminder that human creativity doesn’t spring from nothing—we all build off the past.
[23:16] Debbie: The calculator moral panic he mentioned reminded me of photography. When photography emerged, people still painted, but the mediums shifted. Graphic design tools like Adobe Illustrator democratized creativity for me. At the end of the day, “taste is still taste.” I’m willing to critique AI art the same way I critique a computer-generated poster.
[26:32] Daniel: Mollick mentions AI as a “great leveler,” narrowing the gap between low and high performers. From a teaching standpoint, that’s huge. But as someone who spent years becoming an expert writer, it bums me out that anyone can now do it. It makes that labor less valuable economically.
If the New York Times or Netflix stops hiring artists and writers in favor of AI, that’s not good for the workers. Mollick is rosy about us “having more time,” but history shows we just get busier with other work.
[30:51] Tom Johnson: Mollick talks about giving students tasks that would have been impossible without AI—like making a non-coder build an app. I’ve been trying to push my own boundaries with “vibe coding”—generating code I don’t fully understand to build apps. My kids, however, hate AI. They likely pick that up from an academic environment that feels threatened by the model.
[33:09] Sharon: I hired an illustrator for a book, and when I uploaded her draft to AI to brainstorm changes, she was devastated. She felt so threatened. I feel that too; I’ve been a good writer my whole life, and now it feels like that special skill is being democratized.
[35:37] Nathan: Does the world need another app? We’re working in an old economic order where the “genius” creates an app and sells it. If a high schooler can make one, is it still valuable? Maybe education should move toward “service learning”—solving real-world problems in the community where AI is just a tool in the background.
[37:58] Le: I disagree. I’m a tech person, not a writer. AI has given me the confidence to express my ideas. Does the world need another book? Yes, because every author has a unique voice. Using AI to become a generalist is liberating.
[41:42] Lois: I wonder if the studies saying AI reduces the gap between programmers are conflating “good employees” with “fast workers.” A good employee is diligent and persistent. Does AI give you those traits?
[43:54] Daniel: We tell writing students: “Don’t think about the product, think about the process.” Even if your “vibe coding” app is half-broken, you are learning a process of experimentation that will put you ahead of people who never tried.
[46:21] Le: Exactly. Mollick calls it the “button.” I hit the “button” to get a starting point on a blank page. I’m becoming a better writer by refining AI’s verbose output. It gives me the momentum to start.
[48:22] Tom Johnson: At work, there is palpable “app fatigue.” Every two weeks there’s a new “must-use” internal tool. It’s an onslaught. For my own writing, I’ll write a draft, then use Gemini Deep Research to find sources and quotes to splice in.
I also use NotebookLM to check my memory against book PDFs. People at work warn me I might be losing my critical reading skills. It’s hard to force myself to do it the “hard way” when AI is so efficient.
[52:08] Tom Johnson: Our next book is God, Human, Animal, Machine. It gets into Descartes, the soul, and what it means to be human. I’m trying to build the 2026 list. People seem to like the “cautious optimism” theme.
[53:50] Le: Do you read exclusively AI books?
[54:02] Tom Johnson: We started focused on AI, but I’m broadening it. If we only read AI surveys, we just rehash the same ground. We want to look at technology, philosophy, and how they intersect.
[55:22] Daniel: I’m interested in the philosophical angle. Identity, consciousness—I have an appetite for those deeper topics.
[56:19] Tom Johnson: Thanks for the great discussion today. I’ll send out the recording and transcript. Have a great day, everybody!
About Tom Johnson
I'm an API technical writer based in the Seattle area. On this blog, I write about topics related to technical writing and communication — such as software documentation, API documentation, AI, information architecture, content strategy, writing processes, plain language, tech comm careers, and more. Check out my API documentation course if you're looking for more info about documenting APIs. Or see my posts on AI and AI course section for more on the latest in AI and tech comm.
If you're a technical writer and want to keep on top of the latest trends in tech comm, be sure to subscribe to email updates below. You can also learn more about me or contact me. Finally, note that the opinions I express on my blog are my own points of view, not those of my employer.