AI Book Club recording and notes for The Singularity is Nearer, by Ray Kurzweil
- Book club meeting recording
- Audio only
- High-level topics discussed
- Discussion questions
- Extensive book summary
- NotebookLM podcast
- Transcript
Book club meeting recording
Here’s a recording of the AI Book Club: A Human in the Loop meeting, held June 15, 2025:
Audio only
If you just want the audio, here it is:
High-level topics discussed
Here are some of the high-level topics we discussed during the book club. (These are pulled and summarized from the transcript using AI.)
- Initial reactions to Kurzweil’s optimism: A look at the group’s general reception of the book, contrasting its futuristic, optimistic tone with the more immediate, cautionary perspective of other AI authors like Mustafa Suleyman.
- The Law of Accelerating Returns (LOAR) in daily work: The group discusses whether they feel the “acceleration” Kurzweil describes in their own professional lives, from increased productivity demands to the pressure to adopt AI tools.
- AI as a driver for workforce reduction: A candid discussion on how companies are leveraging AI—or the hype around it—to justify layoffs and restructure teams, with a personal example of a 40% department reduction.
- The gap between AI hype and business reality: Exploring why, despite the buzz, many businesses are slow to implement truly transformative AI, with many tasks remaining manual and the promised “agentic” future still out of reach.
- Democratizing AI vs. the digital divide: The debate over whether broadly distributing AI will reflect humanity’s best values or create a new digital divide, drawing parallels to the dual-edged impact of the internet.
- The high cost of game-changing AI: A forward-looking conversation about the potential for a tiered system of AI access, where expensive, powerful tools create a growing capabilities gap between the “haves” and “have-nots.”
- Merging man and machine: sci-fi or inevitable? The members share their skepticism and thoughts on one of Kurzweil’s biggest ideas: the merging of human biology with technology and nanobots to achieve superintelligence.
- Redefining AGI: Are the goalposts constantly shifting? A discussion on the ambiguous definition of Artificial General Intelligence (AGI), whether we already have it by some measures, and what capabilities (like context and empathy) it still lacks.
- The “builder vs. learner” dichotomy in an AI-powered future: The group reflects on how AI might reshape professional roles, questioning whether the future will divide people into those who build AI and those who learn to use it, or if AI will augment everyone’s ability to do both.
Discussion questions
Here are some discussion questions I prepared for the book club. We didn’t discuss many of these, but they nonetheless reflect some of the themes and issues that stood out to me.
AGI by 2029? Do you really think that by 2029 we’ll reach AGI? That’s really close – just 4 years away. If it happens, how will that change our profession?
Accelerating returns cycle? How do some of the information assets we produce create accelerating returns? AI helps us produce more content with greater depth and accuracy. That content then fuels more abundant and rapid use of the technology, since it helps AI tools implement APIs and other assets more quickly, which in turn drives the development of more technologies. Is this a virtuous cycle of technological advancement?
Unprovable futurism? Is there merit in getting deeply invested in (or antagonistic toward) a futurist position so far-fetched as to be unprovable? For example, nanobots rearranging our DNA at the molecular level to create immune responses against virus-type nanobots, or grey-goo scenarios. If something isn’t provable, is it truly worth discussing? Or, due to the Law of Accelerating Returns (LOAR), will these scenarios become near-immediate realities much faster than we realize, catching us unprepared?
LOAR and AI? I’m most persuaded by the Law of Accelerating Returns (LOAR). I think there’s merit to this idea. I had a discussion this morning about how AI is similar to the dot-com boom/hype of the late 1990s and early 2000s. The dot-com boom did lead to a crash, but the internet eventually turned out to be deeply transformative. AI isn’t just potentially similar; it can build on the existing innovation of the web to accelerate even faster, since the web enables better information sharing and distribution. Additionally, the web itself (all the content) provided much of the data needed to train and power LLMs. So I can see a case where these technologies build on each other and, in so doing, accelerate faster and faster. Do you agree with this assessment?
Radical life extension? Are we on the verge of radical life extension? If we can just hold out another 10-15 years, will we hit a point of “longevity escape velocity” with our health? What would that mean? Could a technical writer be 250 years old?
Singularity, new identity? If we hit the singularity, will we no longer be “tech writers” because we’ll have so much knowledge at our disposal that we could be engineers, doctors, rocket scientists, or anything else? The infusion of this knowledge would allow us to transcend our current role and move into any other. Do we need to start thinking beyond our current identity to be more ambitious in what we want to undertake?
Your singularity age? How old will you be when the singularity (predicted in 2045) takes place? What will society be like when this happens? What if it only partially happens?
Acceleration already here? In what sense are we already seeing this rate of acceleration taking place? I feel like every day someone is sharing some new AI-related tool, link, or news with me. It’s certainly coming at an accelerated pace. Will this pace keep up?
Workplace acceleration impact? What happens when acceleration starts to become more apparent in the workplace? Technical writers (TWs) will be bombarded with documentation requests, even if only requests to review documents engineers have already written. TWs will either use AI to keep up or start outsourcing the documentation work to those who are using AI to build. How are you dealing with acceleration now, or how will you?
AI divide growing? Do you see a growing divide between those who can access and use AI tools versus those who don’t have access to them, or who have access but don’t know how to use them successfully? For example, if users have access to the top tiers of Gemini, Claude, or ChatGPT in their work (tiers costing $200+/month), will this separation start to create a divide between the AI-haves and AI-have-nots? What could you do with access to a top-tier model? What if a model is released at $5,000 a month, such as a powerful agentic model that lets you accomplish all your work while you sleep, or which reduces your existing workload by half, freeing up more of your time to pursue or accelerate other work? Perhaps we’ll start to see more acceleration in individuals, widening the divide. This is a new type of technological divide: not just people who have computers and know how to use them, but people who can access and successfully use advanced AI models.
Merging with AI? Kurzweil has said that in many ways, we’re already merging with AI. We carry smartphones in our pockets, and the documents we produce are collaborations with AI (helping us brainstorm, refine our thoughts, articulate our ideas, etc.). As others incorporate these human/machine collaborative products, the AI-based content proliferates and becomes embedded into even more parts of society. As a result, the distinction between human and machine is blurring. Will our close relationship with smartphones, all of which now offer AI in our pockets, accelerate our convergence with AI?
AI catastrophe risk? It seems that AI advancements present many new and novel ways for humanity to destroy itself. The grey-goo scenario, in which out-of-control nanobots consume the biosphere in a matter of days, or the possibility of rogue actors bioengineering toxins (perhaps tailored to specific DNA profiles), presents new avenues for annihilation. We’re lucky that, so far, nuclear winter hasn’t happened. But what are the chances that some AI-engineered catastrophe won’t occur, given how accessible the technology is becoming and how many possible attack vectors it opens up?
Complexity brakes and self-driving cars. Critics mention possible “complexity brakes” interfering with the projected timeline. I think self-driving cars are a good example. A lot of people anticipated that we would have fully autonomous vehicles several years ago, but it didn’t really happen. Only in controlled areas in select cities do we see AVs like Waymo operating. Autonomous driving turned out to be much harder than anyone anticipated (for example, Elon Musk keeps saying Tesla robotaxis are just around the corner). It’s probably the same with many of Kurzweil’s predictions. The Law of Accelerating Returns (LOAR) will hit many complexity snags that slow acceleration down. Figuratively speaking, maybe accelerating a technology from 0 to 70 mph is somewhat easy, from 70 to 90 mph much harder, from 90 to 95 mph the most difficult, and from 95 to 100 mph almost impossible. I’m not sure the LOAR curve is always exponential.
Brain-cloud interface? Connecting our brains to the cloud reminds me of a Black Mirror episode, specifically season 7, episode 1: “Common People.” A woman has part of her brain removed due to a tumor, but a cloud service can provide similar functionality, connected to online services that stream data to her brain. It’s a pretty compelling episode. Of course, as with nearly every Black Mirror episode, the plot goes dark when the company starts showing ads and upselling services through her (taking over her brain), and then amplifying her sensory experiences for even more upselling. But the larger point of the episode is to contemplate our merging with AI. Is it likely that an AI system, running on massive cloud computing, could provide a service that connects in some way to a biological component in a human?
Utopian or dystopian? Kurzweil’s vision of the future is a techno-utopian one, where technology eliminates disease, economic hardship, and more. It’s an era of abundance and opportunity. In contrast, many others foresee a much bleaker, dystopian future. Do you perceive a techno-utopian or a techno-dystopian future, and why? Hollywood romanticizes the dystopian future for a more compelling storyline, but Kurzweil’s evidence of advancements over the past 200 years makes a compelling case for continued and accelerating improvements (we live longer, have greater wealth, etc.). Are we overlooking these improvements? If we were to teleport back in time, would our current progress be much more apparent to us?
Book’s overall effect? Overall, this book is a refreshing argument in favor of techno-utopianism at a time when we’re seeing more anti-AI books come out. It makes me think we’re only going to see more acceleration, technological innovation, and advancement in the years to come, with expert knowledge becoming ever more readily available. I think we’ll see a growing divide between those who can harness AI for their work and those who can’t. So it makes me want to double down on learning and incorporating AI, as I think this will position me to ride the wave and navigate what’s to come. What was the overall effect of this book on your perspective?
Extensive book summary
For notes summarizing the book, along with discussion questions, see this Google Doc. The Google Doc organizes information into various tabs (Book summary, Discussion questions, etc.) that appear on the left.
NotebookLM podcast
I uploaded the summary and discussion questions into NotebookLM and created a “Deep Dive” podcast from it. You can listen here:
Transcript
Here’s a transcript of the meeting, with the conversation made more readable through AI:
Tom: This is a recording of the AI book club, “A Human in the Loop.” Specifically, we are discussing Ray Kurzweil’s The Singularity is Nearer. This is a discussion among about five people, lasting roughly an hour, where we talk about different topics. A lot of us are tech writers, and some of us are in other roles, but this is basically a recording of a book club discussion. It’s not super focused thematically; it’s a lot of different perspectives, takeaways, and thoughts on Kurzweil’s book. So, enjoy.
Tom: Well, let me just go ahead and kick things off here. Welcome to this AI book club. We’ll be talking about Ray Kurzweil’s The Singularity is Nearer. And like the other book club sessions, this is being recorded, and I’ll distribute it afterwards. So just make sure you don’t say anything you don’t want to be heard by other people. That usually doesn’t happen.
We have about four people today. Last time, we had probably two or three times that. Unfortunately, I have a bad track record of picking holidays for this. I try to just do a regular, predictable cadence—the third Sunday of the month—and already I’ve hit Easter and Father’s Day. So anyway, Happy Father’s Day to anybody who’s a father out there.
Nathan: And you’re a father.
Tom: Yeah, I am. Happy Father’s Day to you. I’ve got plans right after this, going to a baseball game with my kids. I don’t even really watch baseball, but it’s kind of fun to go with my kids to a sporting event. So anyway, all right, let’s get into this book.
I did share a document. If you didn’t see it, let me just share my screen real quick so you know where all these notes go. I usually provide a notes document right before the book club, and the notes document just has—you can find the link right next to the book in the schedule. And this time, I created a few different things. First of all, I just drafted a big summary because I wanted to make sure I was understanding things. If you’ve never used Gemini Deep Research (ChatGPT’s deep research capability is probably highly similar), it’s great, and it does a nice job of summarizing all the main things. And with a book like this that’s so popular, there’s a lot of information online, so you could probably have more confidence in its accuracy. But I mean, if you’ve read the book too, you would know what’s on target and what’s not, and it surprisingly looks good to me.
I also added some discussion questions and invited people to add to them if they wanted. These are kind of long and different thoughts, and we can hit them or not. I don’t think anybody added one, but maybe I’m wrong. And then finally, I just added this this morning, but if you haven’t played around with NotebookLM, this is a tremendous amount of fun. It’s notebooklm.google.com. You just upload—in this example, I uploaded the book summary into it—and it generated a 28-minute podcast. And then I just downloaded the audio and uploaded it here. But it’s a good way to sort of ramp up on any subject.
Now, as people pointed out last time, none of these resources are really what people want. What they want is to know what others think about the book. What do you make of it? Do you agree? Did you like the book? How do you see this sort of applying and playing out in your life or not? So, why don’t we jump into that high-level topic? What did you think of the book? Did you enjoy it? Did you have strong reactions against it? What was your general reception?
Mette: I can start with the opening shot.
Tom: Go for it.
Mette: I liked the book. I didn’t like it as much as the last book by Mustafa Suleyman. I thought he was more—he talked more to an audience that was here and now. It seemed that—and I don’t know how to really pronounce his name, Ray Kurzweil, is that his name?
Tom: I actually don’t know either. Kurzweil is my understanding.
Mette: Yeah. His book was really centered on maybe not the people right now on Earth, but people maybe, I don’t know, 20, 30 years from now, or 50 years. So I kept on finding myself going, “Okay, buddy, what about us now?” He seemed to do a lot of—his argument often was, “Here’s all the wonderful things that AI is going to do for humanity. Oh, but it’s not going to be so great in the short term.” And I found that to be a continuing theme. Maybe I was in a pessimistic mood. But that said, he could certainly crunch the numbers, and I found that interesting. I don’t know where he got all that information. It’s a whole lot of research he did, certainly very optimistic research. I found the most interesting part of the book to be the bio part because that’s something Suleyman didn’t really go into in great detail, but Kurzweil went into it in a lot of detail. I’m like, “Oh, okay, that’s what’s going to happen.” Okay. So, what astounded me was that it was all going to be happening pretty much in the 2030s by his predictions. And then on several occasions, he pointed out that he’s always been kind of correct on his predictions.
Tom: Yeah, I, wow, you brought up so many great points here. Let’s jump into some of these. The first one you mentioned about the audience: who is he really talking to? And you’re right that he seems to be addressing more of a future audience or not really talking to the people here and now. And yet, his AGI (Artificial General Intelligence) prediction is supposed to land around 2029, which is four years away. I’m like, “Oh my god, that’s, you know, really just a stone’s throw away.”
Mette: Yeah.
Tom: But I mean, there’s definitely an element of science fiction, it feels, especially when you get into Epoch 6 where the universe wakes up and so on. It’s like, very creative and imaginative, and it’s kind of like, wow, I mean, who knows what that time will look like?
But you also brought up the Law of Accelerating Returns, and I agree. I’m kind of taken by that as well. And this whole topic of acceleration is one that I find probably most applicable and most immediate. I know critics have said that he mostly cherry-picks examples from technology as proof, and we don’t really have that same kind of Law of Accelerating Returns in biology and other domains, or even just like airplane travel. So people say we’re hitting complexity brakes in those other areas. But I mean, do you—this is a question I asked my colleagues the other day—do you feel like things are accelerating in your own work?
Mette: Yes.
Tom: In what ways?
Mette: The expectations that are placed on me as a technical writer. It’s, you know, my company is kind but firm. It’s like, if you’re not using AI to show us how you can be twice as efficient or more, then, gee, there might be a different future for you that doesn’t include us.
Tom: Wow.
Mette: So I mean, it’s very subtle, but it’s there. And they’ve really gotten on the AI bandwagon. So I feel it every day, definitely. Of course, you know, just on my cell phone, I notice all sorts of things. So just apart from work, I see it happening every single day.
Tom: It’s interesting to hear feedback from people in other companies about AI momentum and pressure and encouragement. So yeah, that’s really, really fascinating. Do you feel like the work itself has doubled yet? Like if you’re being asked to do twice as much, do you also have twice as many doc requests and projects and things to do as before?
Mette: Yeah, and that’s probably because we laid off 40% of our department.
Tom: Oh, okay.
Mette: Because of AI.
Tom: Because of AI, initially? Wow. Wow, you’re really in the—you’re right in this stuff. That’s crazy. Okay. Wow. So they really, they really doubled down on people being twice as productive by laying off—
Mette: They really have.
Tom: Wow, that’s a trend that I don’t think has really caught on. I mean, I don’t see massive companies laying off half their staff and telling them to just use AI to pick up the slack, but I see it here and there and wonder how much speed it will pick up. Anybody else have any thoughts? Nathan?
Nathan: I think, just about companies and the choices they might make, I think AI gives them almost an excuse to shrink their workforce, right? I mean, so whether or not AI is actually making gains in any productivity for a company’s workers, a company can use that as an excuse to shrink their workforce. And there is, you know, I mean, there’s a benefit to the hype of AI, you know, in the way that those companies have said it needs to be regulated, it’s going to get out of control. I mean, there’s some marketing angle to that, I would say. So, you know, for friends of mine who have been laid off for reasons of AI, I definitely sympathize with all those folks, and I think in some cases it’s definitely true, but in some cases as well, there might be some, you know, kind of marketing spin to workforce reductions underneath this AI era. So I just wanted to throw that out there.
Tom: That’s a really good point. And I think another point on understanding why companies might be using AI as an excuse to lay people off is also just to justify the massive capital expenditure on all the AI infrastructure and tooling and resources. I mean, you can’t spend billions of dollars and not have a return on investment. So maybe the quickest way to show that is by saying we were able to cut our workforce in half. You have to satisfy Wall Street.
Nathan: True.
Tom: Anybody else have—go ahead, Liam.
Liam: Yeah, well, nice to meet y’all. I work at AWS, and obviously, whatever I share is just my own perspective; it has nothing to do with my actual work there. But we do a lot of work with GenAI, and I think what surprised me was how little the phrase “GenAI,” like generative artificial intelligence, was brought up. Ray talks about AI a lot, or AI superintelligence, or artificial general intelligence, like you said, Tom. For me, when I think about where it goes in my work and how things are accelerating—I’m in sales, and so we’re working with a lot of SMB companies. I’m not a technical writer, but I thought what y’all were talking about was fascinating, so I wanted to join. We’re trying to sell the future, kind of. And so there are expectations put on us about what level of adoption and usage customers should have with GenAI, whether that’s a concrete manufacturer or a digitally native business. And it feels like whether it’s Azure, GCP, Oracle, or AWS, a lot of people are guessing about what the level of adoption will be in day-to-day business. And I would say, for me, the acceleration hasn’t reached my day-to-day work: we’re not seeing agents do it, even though ServiceNow and Salesforce commercials show that. We’re still seeing a lot of manual tasks done by individuals like myself, and we’re still trying to figure out how to move this forward. Interestingly, it’s commercialized so people use it day-to-day, but it’s not in the actual business. Like, it’s not driving real business value, at least with the customers I’m working with, other than chatbots and intelligent document processing, which might have started 5-10 years ago and isn’t really a GenAI topic. It’s something that can be structured and responsive.
Suparna: Hey, oh, go ahead.
Tom: Suparna—I was just going to ask a follow-up question to Liam, really, but totally okay if you can’t answer. I was just wondering if you could give an example of one of those manual tasks that hasn’t yet been… that you haven’t been able to transition to AI?
Liam: Yeah, great question. I think the phrase wouldn’t be “can’t transition to AI.” I think what I’m seeing right now is: is the organization investing in doing that? So, for example, Salesforce is a great tool for tracking how you interact with your customers. And so we use it to track who our customers are and how we interact with them, whether we had a meeting, like a billion other organizations do. And that could be a very agentic thing, I believe, where I have a meeting on my calendar, and let’s say a live meeting assistant takes the notes, imports them into Salesforce, analyzes them, and then gives suggestions, writes emails for next steps, sends them out, sends the next meeting invite, or submits a ticket to meet with a technical advisor, like a solutions architect, about Amazon Rekognition, to be able to recognize images in documents. Rekognition is an AWS AI tool, which you probably know. And so it’s not that it can’t be done; we’re just seeing that a lot of the businesses I’m working with aren’t implementing it at the speed that a lot of leadership in the tech world thought they would have by this time.
Tom: Speaking of agents and how much can be automated and accelerated, I was playing around with something the other day called Jules. I’m not sure if you’ve heard of this, but it’s an experimental coding agent from Google that helps you fix bugs, add documentation, and so on. It integrates with GitHub. And I was really trying to make it work. I was like, “Oh, this looks awesome.” You give it some tasks and it just goes to work, and you come back later and it’s all done. But it was really hard to figure out what tasks to give it that it would be successful with. What I wanted it to do, basically, was first just to check all the links on a page and make sure they were valid. And instead, it built me a link parser in Python. You know, I was like, “Whoa, that wasn’t what I expected.” And then I thought, “Okay, let me just update the information on this page. I know it’s somewhat outdated.” And I wanted it to go out and get all the information necessary, pull it down, unpack it, and update the topic. But it’s a lot harder to pull that off.
I think there’s so much glue between the tasks we do. I feel like I have to set up AI so that it will be successful—give it all the info it needs, give it a very concrete task, give it access to a specific page, tell it what to do. And setting all that up, it’s hard, right? That’s all the meta-information between tasks that AI can’t really do—all that context, even prioritizing what page, where to start. So I don’t know, the agent thing is, I think, definitely the next level of AI. The ability for these tools to just update pages across a doc set in a very autonomous way is pretty amazing, but still hard to implement. Suparna, have you done anything—have you made progress with agents and getting significantly long tasks going with them?
Suparna: I haven’t, no. I used to work as a software engineer, and I’m currently laid off, with some thoughts of going into something slightly different. I am volunteering in literacy right now, and the only way that I’m using AI is—I used to try to take articles, for example, from BBC Features, and try to summarize them in ESL-friendly language. And then I think I heard from one of the tech-savvy staff members, he was like, “You should be doing this with AI.” And then I finally started just feeding those articles into ChatGPT, with a slight worry about, “Is this ethical for me to do this?” But it’s actually been really great using that because the ways in which the summaries have been generated allow me to focus on interacting with the content more and thinking about how the people I’m doing the article with can engage with the content. So I’m focusing more on discussion questions, which also could be generated by the chatbot, but I think I’m enjoying the questions that I think of and that they think of more. So I guess I kind of save that pleasure for myself. And so that’s the only way I’m really using it in a context that is more of a public setting, since I’m not working right now.
Tom: Yeah, well, that’s cool. That sounds like a worthwhile use. I’m just thinking back to the whole acceleration topic and how this might fit in. Let’s say you have 30 different articles that you’re trying to make more readable or more understandable. If you used an agent and you gave it the task of going through each one of these articles and applying certain style rules, it would probably do a decent job at it, and you could review it at the end. This speaks to this larger goal that many tech writers have, where you might have a whole doc set and you want to make sure that your style guide rules apply to every one of the 100 pages in the doc set, as well as other rules. And you give it this task, and then in the morning, you come back and you review the changes and you hit accept or reject. That kind of acceleration could be interesting. I haven’t achieved anything like that, but I think that is the promise. And I think many people do think that agents will be a key ingredient in the acceleration equation. But Kurzweil doesn’t really talk about agents so much, unless I missed that part. He’s very high-level.
Nathan: I’m searching a PDF of the book right now, and the word “agent” only appears a few times in the body of the book and a few more times in the endnotes. So yeah, it does appear.
Tom: I think at a higher level, what he says is that one innovation gives rise to another. Like, before we had computers, people had to develop computers. But now computers are being used to develop more computer chips. So one invention leads to another. And I see this pretty regularly. I was thinking about how all the docs and information we produce will accelerate the creation of other tools and APIs, which will then create more innovation. It does seem commonsensical or logical that things would accelerate, because the tools we’re creating can be used to create more things more quickly. I was trying to think of an example of this.
Nathan: How about in the book? Yeah, sure. Or elsewhere outside of it?
Tom: Could you even say that something that creates code automatically leads to greater creations?
Nathan: Oh yeah, yeah, yeah, that’s a great example. The Cursor AI code editor and just the whole integration now making it faster to create additional code.
Tom: Yeah, good example. All right, any other themes? Maybe we should move off of acceleration unless people have any more thoughts on that.
Mette: Towards the end of the book, he was talking about the Asilomar conference and how they came up with sort of some guideline bylaws by which humanity could use AI. And I guess my thought was, are we really dependent on this one thing to keep everything in check? Otherwise, bad actors are going to intervene. There’s so much vulnerability there. When I read that part, I was like, “Okay, that sounds great.” He read some of the bylaws, and I thought, “Sounds really great.” You know, why aren’t we hearing about this more? What’s going to happen? Because if they’re not followed, there are some really bad things that can happen.
Tom: I think that is a major criticism Kurzweil has received—that he’s very much a techno-utopian optimist. And especially in contrast to the last book we read, by Suleyman, who devoted half of his book to containment strategies and theories.
Mette: I know.
Tom: And Kurzweil basically just says, “Well, the way to get past these problems is just to build better defenses.” He doesn’t want to slow down the rate of development or put a bunch of regulation on it. He just wants to build better defensive mechanisms, and that requires all of the democratic tools of governance that humanity has created, which are themselves threatened by AI, you know, according to Suleyman. So I’m like, okay, you’re placing greater weight on a foundation that is kind of crumbling. I don’t know. Yeah, it really perplexes me, the sheer variety of attitudes around AI. I also listened to a podcast, This Week in Tech with Leo Laporte. He’s very much embracing AI. He really wants to accelerate things.
I can’t help but think, is Kurzweil embracing AI so fully because he wants things to move faster to achieve the whole longevity escape velocity? Like, does he want things to get in motion before he has to, you know, take his exit off this planet? Because he’s—I don’t know how old he is, but he’s been around since going to MIT in 1965.
Nathan: So he’s getting up there. And I know there was a documentary on him, I don’t know, 10 or 15 years ago, that detailed some of his vitamin life extension regimen; he’s on that train for sure. And he really does want to—and I can understand this—he wants to get to the next paradigm of life extension. And speaking of Father’s Day, I know he’s had a preoccupation with kind of reviving the image of his dad, who is deceased. He brings it up maybe once or twice in this book, and he definitely has elsewhere. But I feel like that really deep psychological thing that human beings have, and Ray Kurzweil in particular has, is kind of informing this techno-optimism. And, you know, I don’t know that Ray Kurzweil is looking at it from maybe just an average person’s perspective in terms of workers’ rights and environmental wellness and, you know, those sorts of things. He’s very much in his kind of ivory tower, it seems like, dreaming up these phases, kind of beyond the edge of reality. So he’s definitely an optimist. I want to be an optimist.
Tom: I would love that. That sounds great.
Mette: He certainly seems to be relegating all these threats to just, “Oh, details to be worked out.”
Tom: Yeah, yeah, for sure. Go ahead.
Liam: Yeah, just on—I totally agree with y’all. I was speaking to someone because I didn’t know who Ray Kurzweil was, and they were like, “Oh, you’re reading that book?” I was like, “Oh, is he actually a famous person?” When I was like 50 pages in, it was my perception, like, this guy’s kind of a wild writer. Like, some of these predictions are pretty nuts.
Meta—and I might be pronouncing that wrong, so please…
Mette: Mette.
Liam: One thing that stood out to me on page 421, on the very last chapter before the 200-page appendix, which was nuts, Tom—I was very afraid you assigned a 600-page book—he says at the very end of 421, talking about being cautiously optimistic, he says at the very end, “We should thus focus work towards a world where the powers of AI are broadly distributed so that it reflects the values of humanity as a whole.” And I like that he compared that to when we had 64,000 active nuclear warheads, and now we have 9,000. How he practiced nuclear fallout drills, and then we didn’t have a nuclear fallout, and then how maybe as a world, or the way we use nukes, is actually more reflective of our values of peace than we actually think. I’m coming up with these ideas on the fly. But just how AI could, if broadly distributed instead of just kept in the hands of the few, if it’s put in the hands of the many, it will reflect the values of humanity as a whole, is something as a concept I can definitely get behind.
Tom: Yeah, that’s a great debate that has opened up, right? Do you open source and broadly distribute this technology so that everybody has access and you get maybe more checks and balances? Or does it become a tool within the hands of the elite that they keep out of bad actors’ hands, or they try to?
Nathan: I have a question or kind of a thought forming on that. “We should thus work toward a world where the powers of AI are broadly distributed so that its effects reflect the values of humanity as a whole.” I think that sounds great, and I would not argue for only the techno-elite to have AI. But if we think about another technology that is broadly distributed, say the internet: does that reflect the values of humanity as a whole? I don’t know.
Tom: Good comparison.
Mette: I’m sorry, could you repeat that? It’s a good comparison.
Nathan: Yeah, and I mean, maybe it’s a glass-half-empty, glass-half-full kind of conversation, but I use the internet all the time. I love it, of course. But it does seem to reflect not values or virtues, but whatever the opposite of that is, you know, some really kind of negative traits of humanity. And in a lot of ways, it’s very corrosive to democracy; it’s very corrosive to mental health.
Mette: Yes. And you know, that’s really, I would think even kind of beyond a debate. So, but he doesn’t bring that up. When he does all his comparisons, he really doesn’t bring up the internet, but it’s a really good one because it was all wonderful, especially in the beginning. And I’m old, so I remember the beginning. And you know, more recently, we’ve seen the corrosive effects of the internet, particularly on really young people, children who grew up with it, and, you know, cyberbullying, all this stuff that is having a pretty great effect on society. And that’s worrying and something that he doesn’t seem to be dealing with. That’s his “details.”
Tom: I agree, that’s a great analogy. I mean, if you think about the truly transformative things that have been rolled out and unleashed, as you pointed out, the internet was believed to be a democratizing force that would lead to things like the Arab Spring and awareness and just greater collaboration and knowledge sharing, breaking down the walls between countries and cultures. And yeah, and now it’s like people are realizing that that was kind of a dream and the reality is much more corrosive and toxic and polarizing and radicalizing and fragmenting. Yeah, not what people had hoped. I think that’s totally right on.
Nathan: I, just in trying to be critical of my own thoughts, I do recognize that this conversation is made possible by the internet, and I think it’s very awesome. It’s very cool. And so maybe if it’s a question of optimism or pessimism, the answer is “yes,” you know? It’s both of these things, right? We can have billionaires basically censoring huge chunks of the internet to fit their kind of personal narratives, right? And we can have an Arab Spring. And, you know, maybe… I don’t know. But if AI is, you know, if we can make an analogy between AI and a broadly distributed technology like the internet, there is an optimistic world and a pessimistic world, and they’re happening at the exact same time.
Mette: Or the better angels of our nature and the more evil demons of our nature are all together.
Tom: He does spend a lot of time trying to make the case that technology has pretty much improved our lives, even if we don’t recognize it at the time. Go back a hundred years, and you might be surprised at the level of poverty and disease and so on. But I wanted to make a comment about the growing divide because this is also something that I’ve been thinking about. Yes, a lot of these technologies are being broadly distributed for $20 a month—maybe that’s prohibitive, maybe it’s not. People can get access to a lot of these tools and have an immense amount of knowledge in their pocket. But we’ve also seen a tier of tools that cost $200 a month where you can get a lot more power.
And what if we just turned up the dial a little bit and said, “Okay, well, for $500 a month, now you’ve really got the AI that is game-changing.” Suddenly, you’ve got a group of people who have access to game-changing AI that they then use to maybe cut their workday in half and take another job or just do three times as much. For example, I would like to implement one of these AI agents on my API doc course to continuously update it and keep it accurate and current. And that becomes a little asset that I can also sell if people want to download the PDF and whatever. You know, like, what if AI enabled just the automation of little side businesses like that and larger ones? You get this growing divide where, yeah, people can access AI, but they get the cheap, crappy, free version. If you really want to have an impact, you’re going to have to have some money. I don’t know.
Nathan: That’s a really interesting idea. I wonder, like, why wouldn’t that happen, you know what I mean? For any company who’s monetizing AI, why wouldn’t they do that? I mean, yeah, obviously the $200 a month version, that’s for content creators, for people who have monetized their YouTube channel. It’s pretty much what you describe. And then, yeah, why wouldn’t the more luxury brands of AI kind of come out of that? And then, yeah, what about, you know, $2,000 a month? It seems like that’s the road we’re on.
Tom: I mean, if you paid $2,000 a month for an AI and it did something that gave you a return on the investment of $4,000, I mean, that would be pretty impressive. I don’t really know how that would play out, right? But you know, give it a couple of years and maybe we’ll figure out how. You could sketch this out. Like, I can think of one thing I want to do. I want to create a whole course on how to use AI for tech docs, something that I keep thinking I should do, but it’s a lot of time. It’s a lot of time outside of work that I don’t really have. So, but let’s say I figure… anyway, I’m sort of getting off track.
But what capabilities will evolve? And this book also made me think that there’s probably going to be a growing divide between people who can use AI to expand and augment their capabilities versus people who have rejected it. I think you mentioned, Mette, that the message among some employees is that they have to start using AI to double their productivity or find a different company. And yeah, I can see that same sentiment expanding. It’s like, you want to have the tech writers and the other workers who have figured out how to augment their capabilities five times or more. And all the people who haven’t are going to be turning to other sorts of lines of work. This growing divide.
Mette: Yes, yeah, yeah.
Tom: Reading these last two books really makes me feel a sense of urgency to up my skills and knowledge about AI. And even though I am experimenting and using AI heavily and I’m leading initiatives at my work, like the education initiative for tech writers and AI and so on, I still feel like there’s just so much I don’t know and I still feel behind, you know?
Mette: And every week it feels like there’s something new.
Tom: Yeah, yeah, people are always sharing new tools. They’re like, “Check this out, Tom.” You know, I’m like, “Oh my gosh, I, you know, this new paper, this, that.” And yeah, the pace of acceleration is really picking up. The wheel is spinning faster. And maybe I have to figure out how to leverage the AI tools to keep up. Or maybe that’s just part of the new reality: there’s new stuff every day, and you will never absorb it all. I don’t know.
Nathan: I think that that’s true. And I honestly think that that’s been true for a long time. You know, I remember being a kid and going into the bookstore and then just really being blown away by how many books there are. There’s not enough time to read. It’s a thing I struggle with. And, like I said at the beginning, you know, how many book clubs I would love to join. There is always going to be more to know and more to learn. But it has been that way for a really long time. I mean, whenever the printing press came out, right? And now, just in this realm of AI, I mean, we could have GenAI write a book for us right now, you know? And there’s some kind of… the graph of that is… I mean, there’s just so much content now, you know, more than there are hours of human existence available to consume that content. So I just want to say, I think you’re doing a great job, Tom.
Tom: Well, the scenario you’re describing is what Kurzweil points to as that next epoch, where biology and machine merge and you’ll be able to process every book in the bookstore, right? This is when our brains basically take on these computer-like capabilities through nanobots that are rearranging molecules—who knows what’s happening—but then, yeah, his argument is that you will be able to consume a library in a day or an hour or whatever, and you’ll be superintelligent. What are your thoughts on that? I mean, this feels very science-fictiony because I don’t work in biology. I’ve never seen or experienced Neuralink. I don’t really see how we’re going to have this merger between machine and biology. But what are your thoughts on that?
Mette: That was the most optimistic part of the book. You know, we don’t know a lot about the human body. We know even less about the human brain specifically. And we’re already making these predictions. I don’t know, I just think there’s a lot of surprises out there. I kind of had the same reaction as you. For that kind of certainty that Kurzweil seems to have, I don’t know.
Nathan: Yeah, I mean, they haven’t been able to tame the issues of fully automated driving yet.
Mette: They don’t have an air traffic control system that is running on AI. They barely have one that’s working off computer software, really.
Nathan: It’s a great point.
Liam: I would say, I read an article or a post, like, “There will be two types of people in the coming age.” How much validity this has, who knows. “One will be the builders and one will be the learners,” like those who build the software and those who learn about it. And I think that’s one of the first things that drew me to this book club. Hey, I don’t think I was built to be a builder. I’m intellectually curious and I love to read. I’m clearly not a technical writer; however, I really enjoy the gift of learning. And so I think there will be a lot of good that will come out of people who learn and don’t build, because you’re always going to have people who need to run the ship, to run the business or make the decisions.
And Mette, I agree with you about the cars. I thought his examples about Google’s Waymo and its ability to simulate 20 million miles from its actual driving were interesting. And I liked his point at the end of the book about Neuralink. I don’t understand what the back-and-forth between him and Cassandra was in the very last chapter; I don’t know who Cassandra is. But he says, “Hey, what if regulatory policy prevents something like Neuralink for our neocortex for 10 years?” And that kind of feels like what’s happened a little bit with Waymo, from the tiny bit that I know of it. It probably drives a lot safer than I drive. And if regulation opened up to just allow that, we’d probably not have 40,000 deaths a year on the roads. But regulation is such a huge part of it as well.
Tom: Interesting comments about the builder versus learner. That’s a point I was thinking about in other contexts. And I think that the promise of this massive merger of machine and biology and this ability to learn at such faster, accelerated scales means that you basically will be able to perform both roles. “Builder” won’t be limited to the super-detailed engineer-type roles. I mean, we’ve seen this now with “vibe coding,” where if you can just describe something, the application or whatever—Gemini, whatever you’re using—will try to build it. And it’s certainly interesting. I was vibe coding something—I’m not an engineer—and I was just like, I don’t even understand what this Go code is doing, but it seems to be working for a very small, little, tiny use case. I realize that if I had more engineering knowledge, I’d be a lot more dangerous. But yeah, this idea that whatever our role is—tech writer, salesperson, whatever role you have—the tools could easily allow us to expand beyond what these roles are. If you can consume a library in a day, I mean, then you could be a rocket scientist or a surgeon or whatever. You would have that knowledge that other roles have, perhaps. So I think maybe—I mean, this is the optimistic side—but in 2029, when AGI is here, maybe tech writers won’t exist, but you’ll be able to do many other roles very capably. I don’t know.
Liam: The concept of… please, go ahead. I’m going to say what I thought so I don’t forget it. Just the augmentation and what that brought, and the lack of learning. Like, we need to learn less for each role. Before, you needed someone who could build shoes as a cobbler, and now you just need someone to be able to run a conveyor belt and pull the shoes down. So just the concept of augmentation, Tom. But Nathan, pass it to you, please.
Nathan: Oh, I was going to ask about AGI. So, you know, that’s a really important milestone on the Law of Accelerating Returns roadmap, as is the merging of biology with nanobots. I wondered about the AGI thing because I’ve heard it said by kind of contemporary or present-day technologists like Sam Altman, he says, “By one measure, we already have AGI.” And my understanding is that AGI just means that the AI can perform basically as well as a human in any number of tasks. Is there a hard test for that? Because I feel like we could probably say that with most human tasks right now. Or the point Sam Altman was making is that the goalposts kind of keep getting shifted for AGI. Because if we were to show ChatGPT in its current state to someone from five years ago, it would completely blow their mind, right? It blew my mind two years ago when I asked it to describe a technical procedure in the style of Edgar Allan Poe, and it did it like that. Wow. And now that seems a bit quaint. But yeah, in this conversation, is there any way to kind of refine my understanding of AGI? Would anybody push back on us having AGI right now?
Tom: I think the nuances that you’re bringing up are right on target. And yeah, Kurzweil doesn’t really get into AGI and define it, and it’s a subject that is highly contentious and variable depending on who’s talking. Like you said, the goalpost keeps shifting, and how do you define it? I mean, right now you could ask an AI tool pretty much anything you want, and it’s probably going to be fairly smart and capable. But what I don’t see happening is, let’s say that the AI takes over the controls at my computer, and now I want it to figure out what it should work on, evaluate the bugs in my queue, figure out which ones it should prioritize, who it should contact, whether there’s enough information to be actionable. Like, there’s a ton of contextual information that it would need to absorb to do the task of a human in that sense, if that makes sense. So anyway, I don’t know, does anybody have any other thoughts on AGI? Liam?
Liam: I would say the Turing test—I think I first heard of the Turing test when I watched The Imitation Game with Benedict Cumberbatch a few years ago. And then the example of AlphaGo beating the world champion at Go, and then IBM Watson winning at Jeopardy!, but then Watson showing the three levels of its guesses. It’s like, its first guess was “the European Union,” like “What is the Parliament?” and then it guessed “women’s suffrage” and then “African safari.” And so what I found fascinating was Ray Kurzweil’s commenting on this—and this was kind of… it comes from the Ship of Theseus, the Greek philosophy of, as I remove one plank, when does it become a new ship? And so, AGI being—Watson thinks totally differently. And if we make a brain out of brain matter or silicon, it doesn’t really matter if it has consciousness. That stuff I got a little bit lost in, just the philosophical part of it.
But I would say the Turing test comes back to what you said, Tom, about setting AI up. When you were talking about Jules, like setting AI up to do something that has context and can prioritize, and doing that in ways that are contextual to human interactions or to everyday tasks. The reason I don’t think it’s passed the Turing test is because AGI might—I mean, AGI has to be really general. It has to be able to talk about the Renaissance and then also empathize with a child who stubs their toe, or be able to discuss with a human something irrational but really important to their mind. Because the reason economics is not an exact science is because humans are irrational. And so I think there’s a bunch of human stuff that wouldn’t really make sense to a robot, which is what it would take to pass the Turing test. So while it might be superintelligent, human-level at building code or creating APIs, it might be really hard to create a therapist AI that helps a teenage girl or boy who is struggling with bullying or social media issues and with self-image in middle school. And so AI’s ability to—and by the way, this confused me—AI’s ability to deceive a human into thinking it is a human, I just don’t think it’s there yet for all the niche parameters.
Tom: Yeah, and it seems like a lot of the AI tools are becoming more specialized. Like, you might have a medical AI and you might have a coding AI. We already have that with tools like Codex. And yeah, whereas AGI is supposed to operate across domains and be able to absorb this context. I don’t know how a tool can really pull in all the context that it would need to have awareness. Like, it would have to consume my whole bug queue, all my documentation, all my Gmail, understand who’s around me, where I am organizationally, and what releases are coming up just to even operate in a documentation role, let alone, I don’t know, build rapport with colleagues and so on.
Nathan: I feel like that piece, you know, I’ve heard it suggested by some journalists that AI can synthesize different news articles, but it can’t actually go out there in the world and start talking to people. And to your point about context, I mean, that’s kind of it, right? It’s going and just kind of finding out little bits of information about who knows what or who’s on vacation and who else you have to ask. You know, those kinds of details escape AI right now. But even if it was able to say, “Hey Tom, I need to find out X. Where do I go to find that?” That would, I guess, be a start. But yeah, right now it’s like an artificial general sophist or something like that, you know, almost pretending that it’s smart.
Tom: Well, hey, before we wrap this up, I wanted to promote the next book just to put it on your radar. So this next book is—
Nathan: Perfect. Wait, what, I missed that.
Tom: Oh, the next book. I bought it because I was worried I wasn’t going to have enough time.
Nathan: Oh, you held it up. Got it. Okay.
Tom: Yeah, so we’ve got Supremacy: AI, ChatGPT, and the Race That Will Change the World by Parmy Olson. I thought this might be interesting because there does seem to be a tangible sense of the race toward AGI, to build it first. I mean, I’ve heard this in many different contexts where people really want to have their workers double down and move towards this. And I thought it might be interesting to see what this book is about. So I haven’t read it, but it does seem to get decent reviews.
Nathan: It did get an Editor’s Pick, whatever that means; I don’t know how that’s decided. And it was the Financial Times Business Book of the Year for 2024.
Tom: So it can’t be that bad with 739 reviews.
Nathan: Was it decided by AI? Was it a pick by AI?
Tom: I don’t know. We’ll find out. But usually when books have hundreds of reviews, they’re going to be substantial. And even this book that we read, despite its flaws, it was thought-provoking and interesting. And gosh, it gave me a lot to think about, and for that, I thought it was worthwhile.
But thank you so much for coming to this. I appreciate it. This is kind of nice to have a smaller group today. You’re able to get to know each other a little more, and you all are very sharp and insightful. So I appreciate your comments and thoughts.
Nathan: Yeah, thank you. It’s been great. Appreciate it.
Liam: These are really fun.
Suparna: Thanks, everyone. This is great.
Tom: Thank you. Have a good rest of your weekend, and I’ll see you next month.
All: Bye-bye.