Book review of "Co-Intelligence: Living and Working with AI" by Ethan Mollick — an educator embraces experience-based AI learning
As this is the 10th book in our AI Book Club, I’ve started to notice a distinction between authors who are grounded in experiential use of AI and those who treat the topic at a more conceptual, theoretical level. Those who regularly use AI tend to be pragmatic, offering cautious optimism and grounded, practical observations. In contrast, those who aren’t using AI regularly make more far-reaching claims and wilder takes on AI.
Mollick, a professor at the Wharton School, draws on his expertise in studying how technologies are used, noting the several “sleepless nights” he spent playing with ChatGPT when it came out in 2022, nights that ignited his curiosity (xi). I liked his hands-on, pragmatic tone as he walks through a progression of LLM-generated limericks from successively more advanced models.
Mollick provides a middle path focused on agency and experimentation, rejecting the extremes of both AI doomerism and boomerism. He doesn’t spend much time on existential risks, only briefly explaining Bostrom’s hypothetical paperclip AI scenario in a chapter on alignment. Ultimately, Mollick frames AI as “a new thing in the world, a co-intelligence, with all the ambiguity that the term implies” (xix).
Although I appreciated how grounded and reasonable his assertions were, I kept hoping that Mollick would “wow” me with some new idea. If you’ve been working with AI tools for a while now, you might find many of Mollick’s observations too obvious or cursory. Published in April 2024 (and hence likely written in 2023), the book already feels somewhat dated, with its summaries of how LLMs work, alignment basics, and other survey-style topics.
Perhaps Mollick’s primary contribution, as members of our AI Book Club noted, is that he presents AI tools through an educator’s lens. He argues that we should “always invite AI to the table” for our tasks so we can better understand AI’s jagged frontier (47). What’s the jagged frontier? Mollick says AI can be bad at easy tasks (spelling, for example) and good at difficult tasks (such as coding), but the line between what’s easy or hard for an AI is jagged and inconsistent, hence the “jagged frontier” analogy (47). Only by trying AI out on tasks do you develop a feel for when AI will be helpful and when you should push back.
Mollick explains:
“The AI is great at the sonnet, but because of how it conceptualizes the world in tokens rather than words, it consistently produces poems of more or less than fifty words. Similarly, some unexpected tasks (like idea generation) are easy for AIs while other tasks that seem to be easy for machines to do (like basic math) are challenges for LLMs. To figure out the shape of the frontier, you will need to experiment.” (47-48)
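To make the token-versus-word distinction concrete, here’s a minimal sketch (my illustration, not from the book) using OpenAI’s open source tiktoken tokenizer. The point: the model’s native unit is the token, and token boundaries don’t line up with word boundaries, so “exactly fifty words” is an awkward target.

```python
# Minimal sketch: word counts and token counts diverge.
# Requires: pip install tiktoken
import tiktoken

# cl100k_base is the encoding used by several recent OpenAI models
enc = tiktoken.get_encoding("cl100k_base")

line = "Shall I compare thee to a summer's day?"
tokens = enc.encode(line)

print("Words: ", len(line.split()))        # 8 words
print("Tokens:", len(tokens))              # more than 8: punctuation and suffixes split off
print([enc.decode([t]) for t in tokens])   # token boundaries ignore word boundaries
```

An LLM asked for a fifty-word poem is really steering a stream of tokens, which is why the sonnet in Mollick’s example keeps drifting above or below the target.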
Amen to experimentation! I didn’t realize Mollick was one of the researchers behind the “Navigating the Jagged Technological Frontier” paper, which contains key concepts I’d encountered and written about before reading Mollick’s book (see my post Why attitudes and experiences differ so much with regards to AI among technical writers). In that paper, the researchers describe two models for working with AI. The first is the centaur: someone who “discern[s] which tasks are best suited for human intervention and which can be efficiently managed by AI” (16, Frontier). In other words, a centaur knows when to be the horse and when to be the human, leveraging a strategic division of labor. The second is the cyborg: someone who “intertwine[s] their efforts” with the machine, “alternating responsibilities at the subtask level, such as initiating a sentence for the AI to complete or working in tandem with the AI” (16, Frontier). Cyborgs work fluidly with AI, moving in a continuous back-and-forth between human and machine.
I find these metaphors and observations to be spot on. I’ve developed an intuition for AI by bringing it to every one of my doc tasks, and I usually follow more of a cyborg model (look out, transhumanism, here I come!). For tasks where AI doesn’t work so well (bug triage, goal planning), I’ve stopped using it, acting more like the centaur. But sometimes tasks I initially thought were out of bounds for AI turn out to be approachable when broken into multiple stages or steps.
Returning to the educator’s lens: in a pedagogical context, Mollick’s assertions are much more radical. Most people in educational settings (primary and secondary schools, colleges and universities) have a sour view of AI and treat it like a toxic plague. Most of their discussions seem to be about which AI detection tools to use to catch students using AI. In my kids’ school, AI sites are entirely blocked on the school network.
Many students, I’d guess, use AI tools for cheating, so teachers have framed the tools in that context and imbue students with a negative attitude toward AI. Just as teachers see AI as a threat, students have come to see it as a threat too. Consequently, my kids want almost nothing to do with AI. The worst is when a teacher accuses your kid of using AI and gives a failing grade. Then you look at the paper and realize that the assignment involves summarizing some Wikipedia-like topic that probably exists across thousands of pages on the internet.
As a father who regularly sees his kids frustrated with math and hears complaints about teachers who can’t teach, I’ve wanted to use Gemini to rewrite textbook chapters (or rather, show my kids how to do it) with better explanations and examples/tutorials, achieving the personalized tutoring that Mollick writes about. Mollick cites Benjamin Bloom’s “2 Sigma Problem” (159), research that found one-to-one tutoring can have a tremendous impact on learning. Mollick summarizes Bloom’s research as follows:
“…the average student tutored one-to-one performed two standard deviations better than students educated in a conventional classroom environment. This means that the average tutored student scored higher than 98 percent of the students in the control group (though not all studies of tutoring have found as large an impact). Bloom called this the two sigma problem, because he challenged researchers and teachers to find methods of group instruction that could achieve the same effect as one-to-one tutoring, which is often too costly and impractical to implement on a large scale.” (159)
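What do two standard deviations actually buy you? Assuming roughly normally distributed scores, a student two standard deviations above the mean outscores about 97.7 percent of peers, which is where the “higher than 98 percent” figure comes from. Here’s a quick sanity check (my illustration, not from the book):

```python
# Quick check of Bloom's two-sigma claim: assuming normally distributed
# scores, what fraction of students does a score 2 standard deviations
# above the mean beat?
from statistics import NormalDist

percentile = NormalDist(mu=0, sigma=1).cdf(2)  # P(score < mean + 2*sigma)
print(f"{percentile:.1%}")  # 97.7% -- roughly Bloom's "higher than 98 percent"
```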
In other words, students who have personal tutors outperform nearly all of their conventionally taught peers. My wife works at Stanford, and I remember learning one day that each member of the Stanford football team has a personal tutor assigned to them (I’m not sure if this is still true). That’s a 1:1 relationship between football players and tutors. Tutors make a difference. So why are we rejecting a technology that democratizes learning with nearly free tutor access for every student? AI tutors could provide the patient, knowledgeable, available-at-all-hours tutoring that students need. Instead, most educators block AI sites or forbid AI use.
As a result, students are moving in the opposite direction of the future: away from AI fluency. Not only will there be fewer entry-level jobs for students post-college; their anti-AI trajectory will likely steer them away from many career possibilities as well. (Consider this test: Suppose you’re the hiring manager for a tech-related role, and a candidate tells you they refuse to use AI. Do you hire them, knowing that you’ll likely need two writers to do the equivalent work?)
If more teachers shared Mollick’s willingness to experiment with AI in educational contexts, education would be heading in a better direction. Teachers need to rethink homework and move away from assignments that AI tools can trivially answer. One of Mollick’s techniques is to push his students to attempt tasks beyond their current capabilities, such as requiring a non-coder to build an app. Mollick writes that his first assignment for entrepreneurship students now reads as follows:
“Make what you are planning on doing ambitious to the point of impossible; you are going to be using AI. Can’t code? Definitely plan on making a working app. Does it involve a website? You should commit to creating a prototype working site, with all original images and text. I won’t penalize you for failing if you are too ambitious.” (167)
I love that idea. In fact, the advice reminds me of a conversation I had with some colleagues the other night at an after-work get-together. Our discussion touched on Eastern and Western medicine along with neurology, and one of the PMs lamented that these disciplines are often separated and siloed. He wanted to see more cross-domain blending of knowledge and more blurring of roles (like a psychiatrist suddenly performing some Reiki energy healing). It’s not just that he wanted to find a doctor who embraced both Eastern and Western medical practices. He envisions a more extreme blurring of roles, so that you function as both doctor and patient, both engineer and tech writer, both creative writer and illustrator, both neurologist and hypnotherapist and philosopher, all in one.
This blurring of roles seems to be what Mollick is pushing his students to embrace in the classroom. When students go beyond their comfort zones to tackle a previously impossible task, one that sits inside some unreachable domain of expertise, they learn to use AI tools for more than generating a mediocre essay or summary. A future where skill sets and expertise blur across roles and boundaries, allowing us to cross-pollinate and expand into previously siloed domains, is an appealing one. Mollick’s book helps move us toward that ideal.