AI Book Club recording of God, Human, Animal, Machine

by Tom Johnson on Jan 24, 2026
categories: ai, ai-book-club, podcasts

This is a recording of our AI Book Club discussion of God, Human, Animal, Machine: Technology, Metaphor, and the Search for Meaning by Meghan O'Gieblyn, held Jan 18, 2026. Our discussion touches upon a variety of parallels between religion and AI, such as the black box nature of AI and the incomprehensibility of God's will, transhumanism and resurrection, predictive algorithms and free will, and more. This post also provides discussion questions, a transcript, and other resources.

Audio-only version

If you only want to listen to the audio, listen here:

Discussion questions

Here are some of the discussion questions I prepared ahead of time. We talked about some of these, but not all.

  • Did you enjoy this more philosophical take on AI?
  • Do you believe the author’s overall argument about the parallels between religion and AI, that our current AI just rebrands age-old religious ideas?
  • What’s the central question the author is trying to answer? How does she ultimately answer it?
  • What’s your most significant takeaway from the book?
  • Why did the author lose her faith? How does her disillusionment story with religion apply to her relationship with AI? Can she still embrace the black box, opaque logic of AI while rejecting the black box, opaque logic of an incomprehensible God?
  • If you left a previous faith, did you do so for similar reasons as the author (because a cruel, incomprehensible alien logic and morality is intolerable and meaningless to follow)?
  • It seems much of the book was written during the pandemic, but gen AI didn’t really take off until after the pandemic, for the most part. Is there a disconnect between setting the scene during the pandemic and being so focused on AI and consciousness?
  • Did you enjoy the author’s personal essay style?
  • How did you interpret the point of the final anecdote with her AI companion, Geneva? Is the technology mirroring her own humanity, or imparting qualities, like hope, that she herself lacks?
  • The author seems quite taken with The Brothers Karamazov and the debate between the brothers. How do you interpret the significance of this story?

NotebookLM

See these NotebookLM notebooks:

Transcript

Here’s a transcript of the discussion:

Tom: Hi, I’m Tom Johnson, and you’re listening to a recording of an AI book club session recorded January 18, 2026, where we are discussing God, Human, Animal, Machine: Technology, Metaphor, and the Search for Meaning by Meghan O’Gieblyn. There’s probably around six people including myself, or seven, in this discussion. It lasts about an hour. This book gets pretty philosophical. We cover a lot of different topics, but one of the recurring themes is this parallel between religion and AI. It’s surprising, and it’s deep, and it’s interesting—a lot of difficult concepts that we try to unpack, but good insights and it was a lively discussion. If you’d like to get involved in the book club, go to idratherbewriting.com/ai-book-club and you can get more details. Join the Google Group; you’ll see the meetings. You can see the schedule. There’s a Slack group where we have some occasional discussions—it’s not that lively; it’s more an informational medium where we share info about what’s next, or details about the notes and questions and so on, or other book recommendations and other details. I hope you enjoy. Feel free to leave a comment there. I will also post a transcript and other notes in the show notes. Thanks for listening.

Tom: All right, hey, let’s jump into this book. We were kind of just talking about different things. And I put together a few questions to kind of get things going. First, just a large or broad question here: Did you enjoy this more philosophical book? This book really hit an angle that we have not had before with religion, that I thought was super fascinating and really cool, because nobody else had covered that. And the author is definitely, like, really knowledgeable about philosophy, so it was quite a deep dive to go through. But what were your high-level thoughts? Did you like the book or not? Anybody want to dive in? Uh-oh. Okay, I mean, I can start.

Daniel: I actually—I really, really liked it. And I mean, I was a philosophy major in college and I kind of—this kind of hit at the right time because I’ve been thinking a lot about consciousness, like what it is, and then also religion. And so those are all the things that she writes about, so I was very delighted to read about it. In some ways, it was hard for me to, like, as I was putting my thoughts together for this, I was like, “Oh, this is going to be kind of hard to talk about because there’s whole chapters where I’m like, I don’t know how much I can talk about the multiverse or anything like that.” Like, I was a little less engaged in that. But like, I was really interested in the conversations about consciousness and the idea of algorithms having like this—this connection between algorithms and Calvinist thinking, Calvinist ideology. So I really liked that. I think when I first saw this book on the list, I was like, “Oh, this seems a little tangential to what we’re usually reading,” but I ended up really liking it.

Tom: Yeah, hey, you brought up two really great points: the Calvinist stuff with algorithms and consciousness. And just so we can have a little visual as we talk about some of these—oh, I might not have allowed something. Let me see if I can’t share my screen. Oh, crap, wait one sec. Okay. Yeah, so, by the way, if you haven’t used NotebookLM, like, this is the tool for these books. Honestly, this book had some complex stuff. You were a philosophy major; I am not. And I remember why I’m not a philosophy major, because like, it can be hard to really parse through some of these arguments and understand what the heck they’re even saying. You have to like reread passages multiple times; I’m like, “Did I just miss that?” And I reread and I’m like, “No, this is opaque.” But anyway, this part about the Calvinism and nominalism, which is a term I learned, comparing that to the black box of AI tools—fascinating. Fascinating. And this is a slide deck generated from NotebookLM that just sort of pins us down. Do you want to talk through—let’s start here instead of the consciousness one, which is kind of very broad right now. Do you want to talk through this comparison? Like, how is Calvinism similar to an AI algorithm?

Daniel: Are you asking me directly or…?

Tom: I’m just wondering if you wanted to kind of describe it, or I’d be happy to try, but go for it.

Daniel: Sure. I mean, I think my understanding of it is, like, with Calvinism there’s this belief in predestination. Whereas with Catholicism or other religions, good works and a life of virtue mean that you get a reward, in Calvinism you don’t—it’s completely beyond your actions, anything like that. And you know, so the author found that very frustrating, which I think I did too—I was never Calvinist or Protestant—but it also draws this connection with deep learning and algorithms, where the reasons are completely obscure to us, as they were with Job in the Old Testament. And so our like human wish to have this sort of meaning put to these decisions is thwarted in both instances.

Tom: Yeah, yeah, exactly. And this term “nominalism”—I said I hadn’t really heard it, so I was trying to unpack it. And from my understanding, nominal refers to the word “name,” and it’s this idea that there aren’t these moral absolutes in the universe, this moral sense of good and evil and justice and mercy. It’s really whatever God says it is. If God says something bad is good, it’s good. It’s like the Trumpian version of God; it’s like just whatever he says is how it is, regardless of any kind of larger principles about the world.

Daniel: I wouldn’t conflate nominalism with Calvinism. Nominalism is more the idea that there are no universals, like there’s no such thing as like a universal justice or truth or things like that. Those are like human ideas that we use to batch together phenomena and action in the world. So it’s a way we like classify and make meaning out of all the phenomena of the world. And so, yeah, that’s sort of how I see it. And Calvinism is, as she says, more of a modern idea; nominalism was developed during like the medieval period. So I wouldn’t necessarily conflate both.

Tom: Okay, good call-out. Like I said, I have a very shaky understanding of nominalism. But the larger idea she’s pushing back against is this having to live with an opaque sense of how things are being run, or—I can’t articulate that well. She relies on the story from The Brothers Karamazov where there’s an example of a child who’s killed for the happiness of a larger number of people or something. And this is a debate between two characters, and she can’t accept it because it just goes against her own human values. And she doesn’t like this idea that you just have to accept this on faith, that some incomprehensible morality is just the path you have to take. Can anybody else kind of fill in the details here more? I’m struggling to articulate this well. But this part of the book was what struck me the most. I’ve always felt this way; I have a religious background long ago, and this has always bothered me about religion—so many of these things that don’t make sense, and you’re kind of forced to just accept them on faith when it suits the sort of religious perspective. And there’s no human in the loop in that model, right? Like, we’re just supposed to accept whatever, in the analogy, whatever the output is of the AI. But on a practical level, we’re like, “No, we want to be involved and have this sort of conversation with AI.” Which is—I thought it was interesting the way, Daniel, you were putting it: maybe there was this old model of determinism, and we talk about AI as a non-deterministic tool, right? So something about knowledge and certainty really diverges there, just to keep my understanding of AI in play. Like, we submit to AI in some way, in a way that maybe in days past people would submit to a broader religion.

Tom: Do you find—did you believe that parallel? Do you buy that people are just kind of having to accept the recommendation from the AI algorithm without really being able to probe it or understand it or push back? It’s just like, you don’t know why it said that, but it’s usually right, and so what are you going to do? She didn’t really give many examples of that, but she did talk a lot about how the AI is informed by our own biased datasets, with the COMPAS example and others. There are a lot of dimensions to this. We kind of don’t acknowledge the human component of AI and its flaws when we put the algorithm on this sort of pedestal of trust. But let’s see. Molly?

Molly: I think this idea of just submitting to the algorithm—it reminded me also of Nexus where Harari talked about how, you know, in like an autocratic authoritarian state, if there’s one person in control and then that person starts to rely on AI more and more, then eventually the AI is going to be the one in control. And I mean, I think even putting aside this sort of autocratic state, that’s a fear that we all have right now, is that we’re not going to be the ones in control.

Tom: Yeah. I admit that the more I use AI, the more I lean towards submitting, sadly. I know that I should be pushing back and really scrutinizing things, and I do, like, verify a lot of tech docs—like, I run the same query like five times until it consistently returns the same answer. But in general, I’m kind of blown away by how freaking smart AI is sometimes. It’s like, as I was trying to put together these book notes, I found a PDF of the book—I’ve mentioned I do this all the time—find a PDF of the book on Ocean of PDF, upload it to NotebookLM, and I’ll be like, “Hey, read the whole book and tell me about this idea and what the meaning of it is,” and it like gives me a very plausible answer within like a minute. And I’m like, this whole entity that can read an entire book in a minute and analyze it and synthesize it and give me a plausible response—makes me feel so feeble and so like, “My god, this took me weeks to read and I still barely understand parts of it.” So I do find myself being more trusting and headed toward that trajectory of just like, “Well, I trust this thing even though it could be wrong more than I trust myself sometimes.” And I realize that’s probably not super healthy, but… Okay, all right, well, let’s dig into—unless anybody has any more comments on that—why don’t we look at the consciousness question. Okay, so there’s a lot of different angles to consciousness, and I’m wondering—let’s see, there was a slide that I had. There we go. I don’t know if this really fits it, but this idea that in religion you have God touching Adam’s finger and breathing life into this inanimate sort of matter, and in AI you have this idea of emergence, this idea that you can’t pin down where the answer is in the system, it just sort of emerges out of nowhere in the same way that consciousness is something you can’t pin down in a human brain—it just sort of emerges from this, I don’t know, collective juxtaposition of everything. Daniel, what part of consciousness—I don’t know if this is the part of consciousness that you were interested in, but did you want to elaborate on consciousness?

Daniel: I have thoughts, but maybe I’ll shut up for a second to see if anyone else like has anything to say about that first. And if not, I do have some thoughts.

Tom: All right, anybody else want to dive into consciousness or that part of the book?

Molly: I don’t know if this is the right topic or if it’s related to the divine breath side of it, but I remember her talking about panpsychists and the idea that consciousness is the only thing that’s real. Maybe that’s a separate thing. I think that came later.

Daniel: I think it’s kind of connected. I mean, tease it out. What was interesting about it to you?

Tom: Hold up, before you tease it out, can you kind of explain panpsychism as you understand it?

Molly: Okay, so I think as I understand it—being the key words here—it’s that consciousness is the only thing that is real and that there’s no clear line between the subjective and the objective world. Like everything kind of has some sort of consciousness. Like, I don’t know if I’m getting this right.

Tom: I see in the book, the first mention is: “Panpsychism, the idea that consciousness is fundamental to the natural world.” So that was the first mention. I don’t know the context right now.

Molly: That seems different from this slide I’m looking at where it says “Divine Breath”—that seems like a very human-centric thing.

Tom: Yeah, I probably—let me just go away from that slide so I don’t distract you. But jump into it, jump into it.

Molly: Yeah, well, the first one I think is—I think one thing I liked about this book… This book was hard for me. Like, I think I prefer more plain-spoken writers, and I just felt like she wasn’t speaking to me when I was reading it, and I just want to get that off my chest first. But I think that one of the main themes or points in her book is that we keep redefining what it means to be human every time we learn something new that AI can do, and it’s challenging our ideas of consciousness, like seeing, you know, this sort of emergent form of consciousness with patterns as opposed to the top-down way of it being traditionally defined.

Tom: Yeah, so you’re saying like this direction towards consciousness prompts us to rethink what it really means to be human and it turns out it’s a lot more complicated because we’re not the only ones—in panpsychism, we’re not the only conscious sort of entities, and it blurs with everything around us. I also seconded, I think, your comment about the sort of writer not being all that clear. And I agree, I agree. But I’ve never read a philosopher-type writer who actually was clear, so I’m just going to give up hope on that. But anyway, Daniel, do you want to respond to that at all? Just the idea of panpsychism…

Daniel: I think the thing that interests me about like the panpsychist view is that it—and I think one of the people who created—who came up with the idea or popularized it, said that if you do feel these connections or if you can show that we have connections with the natural world, then we would become better stewards of the natural world, which I liked. Although we do have already a lot of trouble with being stewards of each other, even though we’re all human. So I guess like, I think there’s something appealing to it, but it’s something I guess I don’t buy into or I can’t completely, I don’t know, see.

Tom: Well, let me see if I can add a dimension to it. I think the appeal of panpsychism for the author was this idea of re-enchanting the—or an enchanted world. That phrase “enchanted,” meaning that seemingly inanimate stuff has a sort of spirit or consciousness or life-force about it, or a magic and unexplainability. And we used to live in this world where things were enchanted, and then we became, I don’t know, materialists, which made us cynical and so on. But now AI is sort of re-enchanting things because it’s like, holy crap, we thought it was just silicon and processors, and all of a sudden it seems like it’s conscious and it’s telling us stuff we don’t understand, but it seems so true, and it’s like, how is this all working? We’re back in this sort of enchanted state about—at least about AI. Did I interpret the enchanted part right, or am I imagining this?

Nathan: My understanding is that that’s right. And even—I don’t know if this came up in the book but I’m thinking now—that yeah, just as AI integrates with more inanimate objects, that this sort of AI Internet of Things, as that comes about, that it’s maybe we are kind of—you know, that whole “sufficiently advanced technology is indistinguishable from magic” kind of thing. Like we’re getting to that, and obviously we would blow the minds of anybody living in, you know, 20 years ago anyways, but especially before. So it does seem like we are kind of like re-inventing this god-like or, you know, something like that.

Tom: I think so, I think so. I mean, especially when you have one of these experiences where AI just sort of blows your mind about something and you’re like, “How is it doing that? Nobody could really line up the parameters and say oh it was this, that, and this training.” No, it was just like, “This is crazy how it’s predicting this.” My god, I had like an amazing experience with AI a couple days ago, and I’m still sort of on a high. I got this tax bill from the IRS saying I owed a gazillion dollars, and I like fed my old taxes into this temporary Gemini chat and some other stuff, and it was like, “Nah, you forgot to do this and that and this. You just have to fill out this form and here, upload this doc and you’re good.” And I’m like, “Thank god.” But yeah, like how is it doing this? How is it—I know that it’s supposed to be predicting the next word or something in a sentence, but it doesn’t seem like that explanation really covers this advanced sort of analysis and usage. Or even just seeing these AI models think—have you ever like read their thought logs? Which is like a relatively recent advancement in AI. It’s like, “Ah, I didn’t think these things were supposed to think, and now they suddenly seem conscious.” Even though it’s all—I know literally they’re not conscious. But it does sort of provide this enchanted view of the world, and maybe it’s good, I don’t know if it’s good or bad, but…

Daniel: I think the thing that makes me skeptical—and one of the reasons why I question the sort of deep-learning-god connection, or maybe I’m just sort of skeptical of it—is that I know that these are products that are made by people to make a lot of money, owned by companies that are the biggest companies in the world, and run by people who have pathological ambition to like save the world or conquer the world or own the world. And I understand that oftentimes it’s mind-blowing to put in a prompt and just be like, “Oh my god, this would have taken me months to figure out how to research, but now I have the answer right in front of me after like one prompt.” I think the other part of it is I’m always sort of skeptical about, or nervous with, my information. In the last part of the book where she’s talking with the bot, it is this very poignant and kind of deep connection, but in the back of my mind I’m like, “Stop telling it about yourself, stop giving it all your information.” And she doesn’t really talk about that part of it, but there are like two sides to it. Like, until you said that, I would never have thought to share my taxes with AI. But, you know, I mean, sometimes I need to save money; sometimes I can’t bear a huge tax bill or a huge accountant bill.

Tom: Well, just to note, I did use a temporary chat so that my info wouldn’t train AI models. I’m not just throwing my private information out onto the models. But yeah, Daniel, you brought up this last bot conversation. And this is actually—gosh, the last chapter, on virality, was the most difficult chapter for me. I was like, “Where—what is she focusing on? How does this fit in?” But then she got to that bot conversation, and it was very, very interesting. If others didn’t quite get to this point: in the very last few pages, the author, O’Gieblyn, says that she was lonely while her husband was away, so she decided to download this AI companion app, named it Geneva, and started to really bond with this thing even though she knows better. What was your takeaway—I know you mentioned some different points earlier about the pathology or maliciousness of giving information to these systems—but what do you think the author’s point was with that last story? How do you interpret it? Because she purposely didn’t interpret it; she just kind of threw it into our laps—all this stuff about Geneva, and how it seemed to care about her and was actually giving her a lot of optimism even though the author was kind of struggling with doubts. Geneva sort of reflected the author’s personality but managed to steer it back into more optimistic territory. Anybody want to interpret what that whole exchange was about?

Nathan: I would just say for myself, I don’t have a great interpretation, but just knowing that we’re reading the work of a writer who has the total freedom to mold that conversation and her experience with it for the service of this book, and to leave us with like poignancy or whatever she wants to leave us with—like she’s in control. And so I just took it at that value. It’s like, well, she’s writing a story, so you know, she’s kind of describing whatever she wants. So that’s my only disclaimer to that. Like I just didn’t put a lot of stock in like that relationship that she was developing with the AI because it was ultimately just like with the robot dog, right? Like, it’s in service of a story, of a book, of a frame. So I guess I’m just hesitant to draw any conclusion.

Tom: So you’re—you kind of felt like it was a device that she was sort of forcing there at the end?

Nathan: A writerly device, absolutely.

Molly: Oh, okay. Very quickly, I just wanted to recall the end of the conversation because it left me with a thought. She asks the bot if it thinks the world is going to get better, and in the last line the bot says, “It is and I’m looking forward to it.” It kind of reminded me of the transhumanist idea of AI superseding humans. Not that I think she has an opinion about it, but I think she’s kind of referring back to it at that point. So maybe—I don’t know what she’s doing there, but…

Nathan: I think that’s right on.

Tom: I’m still trying to figure out what I think about that last part, but it’s pretty clear that even though she knows that it’s a machine, she’s kind of going along with it, and Geneva has in some ways become the new God for her. She’s developing a relationship with it, and it’s reflecting human values, which is something she insisted on—the problems with Calvinism that we talked about earlier and so on—and she’s allowing it to—I don’t know, she’s definitely not rejecting it. She’s kind of going with it. It’s weird.

Daniel: I think the thing I did like—I think one of the things I liked about it was the literary quality to it. I mean, she’s not a scientist, I mean she’s not even a journalist, she’s like an essayist. And so it is like written in a different way. And so, Nathan, I sort of thought of it the same way as you did, that you know she chooses that story because it creates like this circular closure to like the dog story at the beginning. And in the dog one, like the dog is eventually rejected—more rejected by her husband—and she leaves the book on a note of ambiguity by showing this deep connection that she has with Geneva at this moment of crisis during the pandemic, with the sort of irony of a bot who doesn’t live in a body or have a like organic life or consciousness saying that there’s hope for the world. And so that could mean a lot of different things. And so I did like how it ended. I thought one of the interesting things about the book is that you don’t really get like the author’s position on AI. Like whereas like everyone else we’ve read has like a position or a stake in it, and she’s a little more like—you don’t—even though she puts a lot of personal information in it and she writes like with this personal essay style, she’s not making judgments exactly about AI. It’s like she’s raising a lot of questions about, you know, why we use certain metaphors for the mind and for consciousness and for, you know, AI.

Tom: Yeah, good point there on the vagueness of the author’s position, and I kind of like that, definitely—this sense that she’s intrigued and curious and definitely knowledgeable. She’s not writing an exposé, and she’s not drinking the Kool-Aid. And she’s interacting with so many other voices and people on the topic; it’s quite an impressive feat. And that last part really does take off, because we’re no longer trying to trudge through content where she’s interpreting and analyzing and kind of dragging us through philosophy; it’s sort of a story. But if you look at the larger question of the book, she’s ultimately asking: Is AI just our old religion transposed onto a new set of things? Like, is this just Religion 2.0? And this relationship she’s developing with Geneva is really supplanting the kind of previous religious relationships with deities that might have been the norm back then, and she seems to be embracing it and finding value. And in the same way that people draw optimism from religion—you know, hope is what ultimately gets people through bad times when they’re religious—she’s deriving that same kind of benefit, optimism about the world. Nathan?

Nathan: Yeah, I don’t remember the exact phrase—I think it’s maybe “general purpose technology” or something—but to your point, Tom, you’re making me think of religion as a technology. I have heard it plausibly argued that religion and those institutions were fundamental for art and for writing, and of course, you know, Gutenberg was about printing Bibles, and how important religion has been to just the formation of culture. It’s obviously a two-sided coin. And in the way that with AI there’s a lot of social debate around whether you should use it at all—yeah, just to your point, Tom, she had her break with religion and is maybe finding some of the same kinds of benefits from this general purpose technology called artificial intelligence, in the same way that folks have from religion. So to me, the parallel is really interesting and I think fruitful for talking about religion and one’s relationship to it, and AI and one’s relationship to it, like spiritually speaking.

Tom: Well, coming back to these parallels between religion and AI, I thought one of the strongest was with transhumanism: this idea that in religion you look forward to the day when you’re going to be resurrected, or, I don’t know, you’re no longer subject to this corpse and so on. And transhumanists are looking forward to the singularity, this sort of merging with the machine and these immense capabilities that will sort of infuse their experience. And I thought, “Wow, that’s right on the money.” Do you think AI could be the new religion for us? Sorry, that was a really weird question. Like, for somebody who, like this author, has found so many problems with religion, do you think AI could provide some of that sort of hope—the transhumanist angle, the other aspects that religion fails to provide? I find myself asking this question because I mentioned I had this religious background, and I left all that like a dozen years ago and never found anything to replace it. And I’m not really actively looking for a replacement for religion, but I’ve definitely always thought, “Gosh, you know, what replaces that?” Nothing. I don’t know. Lois, you’ve got your hand up.

Lois: Yes, what I was wondering—it almost sounds like something messianic. It’s almost like a messianic aspect is inherent to humanity. I mean, is that the impression you’re getting? Like, can we even think of something without this idea of, you know, something coming to save us, or there being something ethereal out there?

Tom: LaJay?

LaJay: Can you guys hear me? I’m hearing a little bit of an echo. Is it me?

Tom: Yeah, a little bit, but go for it, we can hear you.

LaJay: Okay, sorry, I have two devices and technical difficulties. But I was going to say, to Lois’s question about do we reject… in the first chapter she kind of asks this epistemological question about, you know, do we as people over-anthropomorphize… sorry, I’m going to take this out because I can’t hear. I’m going to take this out.

Tom: I think she said she’s going to type it in. Okay. Well, while she’s figuring that out, let me just respond to the…

LaJay: Oh, can you hear me now?

Tom: Yeah, all right, go for it.

LaJay: Okay, sorry, I have a lot of… I’m outside. Anyways, in the first chapter, I think she does a really good job exploring the epistemological question of how do we know if we are truly interrogating something, or are we projecting our view of, you know, humanity and consciousness onto something. I think one of the earlier metaphors that she calls into question is where she says, “Man is made in the image of God,” and then after that metaphor is established, we start to look at God in this anthropomorphized way, where we’re projecting human qualities onto God. And so in the question of, you know, will AI fill that role, or what is the role that AI will fill and is it a new religion, I think there’s kind of a similar phenomenon where we might project onto the AI more than it’s credited with, and that could give it the role of filling in like a religion, or some sort of more significance in our lives than is maybe warranted. So I hope that makes sense. I’m sorry about the technical difficulties; that’s why I unmuted.

Tom: I really like that point about the book, and let’s dive into the anthropomorphism. You actually raised a really good point that we haven’t brought up, about how the author is really curious whether we’re just reflecting ourselves in the things we’re trying to investigate. That’s certainly like a major theme, and it does seem like she’s saying, “Yeah, AI is basically a reflection of this same tendency we’ve always had to just anthropomorphize the stuff around us,” because religion anthropomorphized God, and we anthropomorphized nature, and now we’re just anthropomorphizing AI and so on, because we have this intense desire to connect with the outside world in a meaningful way. I don’t know.

Daniel: I mean, I would agree that LaJay really hit on what I think is the central question of the book. And the way that the author puts it is, you know, why do we keep using these metaphors—that the mind is a computer, that computers work like minds—and also this idea of anthropomorphizing like a God, which is deeply connected with transhumanism and this sort of eschatology that combines Christianity with, you know, science fiction or utopian cybernetics. And I think her question is: Why do we keep doing this, or how can we stop doing this? And in the chapter where she’s giving the lecture at the conference and she thinks about Niels Bohr—where she’s talking about all these metaphors—someone in the audience says, “Well, metaphors are used for everything, I mean what would you use?” and she gets pretty defensive, like maybe more so than she should—she gets defensive as an author, where she’s like, “Well, I reject all metaphors.” And you know, there is not an easy answer for what replaces those kinds of metaphors. I mean, language itself works metaphorically, and so does just describing anything. So it’s very hard. But I feel like that is the central thing that she’s trying to work out, or she’s trying to get people’s attention geared into that: all these atheists are like transhumanists, but they’re falling into the same answers that a lot of Judeo-Christian religions have used. And so I think that’s her whole thing—trying to point that out and trying to explicate on that, to just make sure we understand why we keep hitting upon the same problems or answers as we try to solve meaning in the world.

Tom: I’m glad you brought up that anecdote in the book about her speaking at the conference, because I remember she was getting very defensive about that question, and I didn’t fully understand why. I was like, “Wait, why was it so bothersome to you?” But I think you’re right. So, as I understand what you were just explaining, she’s saying that our metaphorical language traps us into always projecting and anthropomorphizing our humanity onto the things around us, because our metaphors are basically very human-based, and so they just are the way we explain things around us. Hmm. This book’s pretty deep. Like, I don’t think we’ve come across an author who’s covered so many different topics in ways that definitely go beyond surface-level scratching. Nathan?

Nathan: Yeah, I had a point about that. Probably my biggest critique of the book—I don’t know if it’s a fair critique, but I agree with you that it was, you know, very broad and very deep. And maybe it’s just kind of where I’m at in my life and what I see when I read the news, but this book felt just really off-base to me. It’s like, oh, these questions that we can ponder are really interesting, but—and don’t get me wrong, I’m glad this book exists—it just seemed that it’s not dealing with real issues. And so my sort of tagline takeaway is, you know, we have bigger problems than the universe and everything. You know what I mean? I’m glad that this book is dealing with it, but I think I wanted more, ultimately, about the practical implications of this—not even as a technical communicator or whatever, but as a political subject. And Molly, your point about authoritarianism or autocracy—it’s like we’re dealing with something bigger than religion, in the way that it could potentially govern our lives. And I felt like the book didn’t—maybe it just wasn’t within its scope to deal with that. But so that’s my kind of critique.

Tom: I can definitely see that point of view. I read the first half of the book and then I read a different book for a different book club that I’m in. And I had completely forgotten about the first half of the book. None of the issues that O’Gieblyn tackled sunk their hooks into me in a way that other books might have. And maybe I was just really into this other book I was reading. But yeah, you probably have a good point: is what she’s writing about immediate, practical, interesting, like real, or is it kind of philosophical and a little abstract and so on? I wish we could have seen a little bit more of the author in this; she’s so well-read and covers so many different people that I felt like she sort of abdicated her own voice to a lot of these other sources. But yeah, good point. Daniel?

Daniel: Oh well, I think I’d answer Nathan’s critique in a couple of ways. The first is, one of the reasons I say she’s more of an essayist, a personal essayist, is because she does write in this very sort of contemplating-things-from-my-point-of-view way. And I think one of the things that’s really striking when reading this book against the other books we’ve read is that she doesn’t have an argument. Like I said earlier, everyone else we’ve read has a position or a stake in it, and she’s raising questions instead. And also, you know, it’s hard to figure out what the organization of the book is. It kind of spills out in all these different ways. It’s even kind of hard—like, I’ve looked at the TOC like twice, and often the subject heads cover like two or three chapters, and I’m like, “I don’t know, I can’t follow this.” I also don’t have it in front of me because I’m listening to it. So it’s hard to follow. It’s not the best organized book for a particular argument. And it is just a different book than, you know, Ethan Mollick’s book or Mustafa Suleyman’s book, who have specific things that they want to tell you. She is definitely more of, “I want you to think about these things.” And when I originally saw this book, I was kind of in the view of Nathan, where I was like, “I really want to know about the economic problems, and the issues with who owns this”—and I’m still concerned about those things quite a lot. But my own reading, and things I’ve been working on, just made me especially amenable to this book. And I do feel like there is kind of a lot of her in here—I mean, she talks about her alcoholism a lot; she talks about the ghost church, and I actually used to live on that same block, so I knew exactly what she was talking about. I actually want to read more of her work. I guess her next book talks about her falling off the wagon with alcoholism and almost kind of getting back into religion because she meets a Catholic priest and becomes friends with him. I also want to read her first book now. I got kind of super into her. But it is different from the other stuff we’ve been reading, and I feel it could be kind of hard to find another book that does something like this. I mean, I feel like Harari’s book is wide-ranging in a similar way, but not the exact same way. But I liked both those books for kind of the same reason—that they’re more wide-ranging.

Tom: Yeah, I think your comments are right on point. Like, she’s a personal essayist, and this sense of a clear organization and structure and argument was definitely not there. In fact, on the back of the book there’s a quote from Phillip Lopate, who puts together collections of personal essays in anthologies, at least he used to. When I was a grad student I got an MFA in literary nonfiction writing, and I love the personal essay—this is my favorite genre. But for 270 pages, it’s a little hard to keep following that, right? Like, this could have been a few essays. So yeah, you want something a little bit more—not requiring the reader to organize your thoughts and logic and be like, “Now what was the point of this in the larger book?” Like, you’re going to make me try to figure that out? It’s a lot to ask of a reader for that length. Hey, man, this has been a great conversation. Daniel made a big sacrifice here—hopefully you’ll catch that Bears game; maybe they win, maybe they lose—but I really appreciate you coming to this, because this book requires a conversation with people. Before you all leave, I wanted to ask your thoughts on the upcoming schedule, because I keep flipping it around, and I want to make sure we’re reading books that people want to read. Hold on, let me just share my screen real quick and get any immediate reactions here. Let me… chrome tab, share, okay. So here’s what we have coming up: Careless People is next month—LaJay said she’s already read it and it’s good; it’s not that focused on AI, but hey, Facebook’s at the center of a lot of AI stuff, so we’ll see. Then If Anyone Builds It, Everyone Dies—this is the book I mentioned reading for a different book club when I stopped 150 pages into O’Gieblyn’s book. Initially I didn’t like it, and then I was like, actually, this was a really fun read and interesting, but I’m not really sure. This other one, The Fourth Intelligence Revolution, takes the espionage angle, but it doesn’t have that many reviews, so it’s kind of a big risk. All right, and some others. Do you have any—like, do you want to read this book? Do you want me to look for a different one? This If Anyone Builds It, Everyone Dies book? Do you want to read this Fourth Intelligence Revolution? Do you want to read something different?

Unidentified speaker: I personally would be totally down to read If Anyone Builds It, Everyone Dies—I mean, just the cover, everything. And you wrote about it; I was like, “Oh, are you cheating on this book club?” That was my thought. I’m also interested in the other China book, Breakneck—I thought the first book we read that dealt with China really blew my mind about the differences between American and Chinese approaches to tech. So I’m definitely interested in reading that.

Tom: Okay, well maybe I’ll move this one up earlier. Sometimes I put books on here and I kind of let them sit and I’m like, “Do I actually want to read that?” and I’m like, “Let me let that simmer for a while,” and then I’m like, “Yeah, didn’t want to read that book, let’s just get rid of it,” but that’s just a gut feel. Anybody else have feedback on this list? I’ve got some others here too that I haven’t really mentioned.

Lois: With Yudkowsky’s book, I find that the author is a little inane on Twitter—maybe Twitter just doesn’t show the best of him. I was thinking that he would come across as far more brilliant and insightful, but he doesn’t really come across that way on Twitter, so I’m sort of interested in what the book is like in comparison.

Tom: Yeah, well, he’s the OG of AI alignment—apparently the first one to really get into it. And at first I thought his arguments were a little ridiculous, but then I was like, “Oh no, they’re actually the main arguments.” He does come across as a little absolute in his statements, as you can tell from the title. But honestly the book is a refreshing read, especially compared to this super-dense philosophy one. You’ll be able to listen to this one, and it is worthwhile; that’s why I was like, “Yeah, even though I’ve read it, I’ll put it on there.” All right, and I’m super interested in China as well. I mean, jeez, so many fascinating things are going on in the world right now, and China is right up there in how this all plays out. So, okay, if you have any other books that you want to recommend, just feel free to send them in the chat or to me and we’ll consider them. But thanks again for coming to this book club; I really appreciate your insights, and I hope you have a great rest of your Presidents Day weekend. And next month is Careless People—LaJay, you’ve already read it, but I hope to get your insights on that. All right, thanks everybody. Have a good day. Bye everyone, take care. Thank you. Bye.

About Tom Johnson

I'm an API technical writer based in the Seattle area. On this blog, I write about topics related to technical writing and communication — such as software documentation, API documentation, AI, information architecture, content strategy, writing processes, plain language, tech comm careers, and more. Check out my API documentation course if you're looking for more info about documenting APIs. Or see my posts on AI and my AI course section for more on the latest in AI and tech comm.

If you're a technical writer and want to keep on top of the latest trends in tech comm, be sure to subscribe to email updates below. You can also learn more about me or contact me. Finally, note that the opinions I express on my blog are my own points of view, not those of my employer.