Search results

Recording of AI Book Club discussion on Yuval Noah Harari's Nexus: A Brief History of Information Networks from the Stone Age to AI

by Tom Johnson on Nov 22, 2025
categories: ai ai-book-clubpodcasts

This is a recording of our AI Book Club discussion of Nexus: A Brief History of Information Networks from the Stone Age to AI by Yuval Noah Harari, held Nov 16, 2025. Our discussion touches upon a variety of topics, including self-correcting mechanisms, alien intelligence, corporate surveillance, algorithms, doomerism, stories and lists, democracy, printing press, alignment, dictator's dilemma, and more. This post also provides discussion questions, a transcript, and terms and definitions from the book.

Audio-only version

If you only want to listen to the audio, here it is.

Listen here:

Book review

I wrote a review post on this book here: Review of Yuval Noah Harari’s “Nexus” — and why we don’t need self-correcting mechanisms for “alien intelligence”.

Discussion questions

Here are some of the discussion questions I prepared ahead of the discussion. We talked about some of these issues, but not all.

  1. Why are authors so attracted to doomerism when they write about AI? Is it because the opposite perspective comes across as salesmanship/hype, and most books typically take a critical perspective?
  2. Do you think agile methodology implements the self-correcting mechanisms (SCMs) that Harari says are fundamental to the durability of information systems? Do you think that AI infrastructure has SCMs baked into it?
  3. Are you concerned by the hyper-surveillance from corporations or governments that AI will empower? Are you seeing more surveillance capabilities or efforts by either entity?
  4. Do you think that AI’s decision-making logic, running a prediction machine, represents (1) “aggregate human” intelligence or (2) alien intelligence? Why does it matter?
  5. In characterizing AI as alien, is Harari using a scare tactic? Is he dehumanizing the technology in order to more easily categorize and treat LLMs in negative ways?
  6. How do you incorporate SCMs into your documentation methodology? How do you listen for user feedback and cycle that into your content development process?
  7. Are we in danger of being at the mercy of AI algorithms executing on some (not-well-thought-out) goal? For example: recommendation algorithms that shape and define our world view? Do you like Harari’s characterization of the curated Bible as a recommendation algorithm?
  8. Overall, are you convinced that AI poses so many dangers that Harari warns about, like misaligned goals? In 2 years, when the world keeps going semi-normally, will all this AI doomerism seem foolish?
  9. Do you think Harari is cherrypicking uncommon or outlier examples to make a generalized argument? Example: Algorithms that optimize for outrage and the genocide in Myanmar fueled by FB as an argument about AI misalignment leading to catastrophe?
  10. What did you think of Harari’s point about data colonialism? Are tech companies exploiting resources from these countries in a way that extracts much and returns almost nothing to them?

Timestamps

Here are timestamps (AI-generated) from the recorded discussion:

  • [00:00] Intro: Welcome and overview of Nexus.
  • [03:16] The Long View: Why Harari’s historical approach (starting in the Stone Age) offers a different perspective than techno-optimists like Ray Kurzweil.
  • [09:53] Self-Correcting Mechanisms: Discussing the need for feedback loops in information systems. Can AI “self-correct” like the Agile software methodology, or is it too opaque?
  • [15:34] The Alignment Problem: A look at the Facebook/Myanmar genocide crisis as a failure of algorithmic alignment.
  • [22:21] Surveillance Capitalism & The Dictator’s Dilemma: How AI removes the human limitations that previously held back totalitarians. Discussion on social credit systems, loan defaults based on battery life, and hiring biases.
  • [38:43] Liability in the Black Box: Who is responsible when an algorithm fails? (e.g., self-driving car accidents).
  • [45:51] Is AI an “Alien” Intelligence? A debate on Harari’s controversial terminology. Is AI “alien” because it lacks biological drives, or is it just a reflection of aggregate human data?
  • [51:51] AlphaGo & Move 37: Examining the famous Go match as an example of non-human creativity vs. brute-force probability.
  • [57:37] Next Meeting: Announcement for the December book selection.

Transcript

Here’s a transcript of the discussion, with some grammar slightly cleaned up by AI:

Tom (Host): Welcome to another recording of the AI book club. This is a discussion about Yuval Noah Harari’s book Nexus: A Brief History of Information Networks from the Stone Age to AI. It’s a pretty long book, but it’s really interesting. It touches on a ton of relevant, timely, thought-provoking topics. In this discussion, we get into self-correcting mechanisms, alien intelligence, corporate surveillance, algorithms—especially recommendation algorithms—the idea of stories and lists comprising information networks, democracy, totalitarianism, the printing press, and even the dictator’s dilemma. So, there’s lots of good points to discuss and exchange ideas about.

If you’re interested in participating in this book club, you’re of course welcome. It’s an open club. People show up if they’re interested. The next book we’ll be reading is Co-Intelligence: Living and Working with AI by Ethan Mollick. We will hold that discussion on December 21. So, if you’re interested and you hear these sessions, come out and join us. All right, I will pause here and let’s just go right into the discussion.

Tom (Host): This book… I’m excited that we chose this book because honestly, last month’s book felt like it didn’t quite hit the mark for what I wanted to read, and this book definitely did. I could see why it had so many positive reviews. I think it had 8,000 reviews on Amazon. It was like four and a half stars or something. So definitely, this book touched upon themes that a lot of people liked and that resonated with them in some way. Welcome, Molly. And I see another person, Sharon. All right. Well, let’s get going. By the way, I do record these and share these recordings out. So just, you know, if that bothers you, let me know. But just make sure you don’t say anything that you wouldn’t want other people to hear and so on. We never really get that controversial. But this book does touch upon surveillance capitalism, you know, so you never quite know what conversations might stem from that. Well, I got a couple more. Rohit, welcome.

All right. So, I have a notes document that I included as part of the notes, and I threw in some questions there, but just to sort of get us started, can somebody give us what your high-level take on the book was? Like, did you enjoy the book or did you dislike the book? Did it provoke a strong reaction or was it just kind of meh to you? Anybody want to kick us off here with their high-level reactions?

Debbie: I’ll go.

Tom (Host): Go for it, Debbie.

Debbie: I really enjoyed this book and what I appreciated is that it takes the long view. It really provides the whole first part—a whole chunk in the beginning was just about information networks and the idea and setting the stage for the actual meatier part of the technology discussion later on. So I appreciated that, especially because Harari is, I think, a medieval history scholar. So I appreciated the sort of non-technologist view on AI.

Tom (Host): Yeah, I mean he really doesn’t even get into AI for maybe 200 pages or so. He spends a lot of time just setting the scene and describing information networks in some high-level ways of defining them that I hadn’t read before. The nature of how information is disseminated between people, taking it all the way back to the dawn of civilization kind of.

Debbie: Yeah, I appreciate that.

Tom (Host): He mentions stories and lists as being kind of these two parts of information networks, which is a pretty drastic way to simplify things, right?

Debbie: Yeah. I felt like that bordered on oversimplifying a little bit, but okay. All right.

Tom (Host): Yeah. Molly, do you want to share? You got a comment there.

Molly: I kind of want to share but my internet is weird right now. So I’m also scared to talk because you know how it is when someone’s connection gets weird. So I’m commenting on stuff because I hear…

Tom (Host): Yeah. Okay. So you say you felt like Harari’s long-view approach to AI was similar to Ray Kurzweil’s, but his analysis was more clear-eyed. Yeah. I mean, interesting comparison to Ray Kurzweil. Kurzweil was definitely drinking the AI Kool-Aid, but this author is definitely much more skeptical. Tell me, I’m confused on the parallel there. Like, do you want to elaborate?

Molly: I will try and hope that it doesn’t disconnect. Okay. I remember in the beginning of The Singularity is Near, Ray Kurzweil describes the different phases of—I don’t remember if it’s information or matter—but it starts with the Big Bang I think, and so he also takes a long view. He has a huge section about how scientific progress made everything better and he does talk about the printing press as something that was good for humanity—objectively good. And Harari uses that very example to show that it’s neither good nor bad. Like, many people will point to scientific progress as an output or a result of the invention of the printing press. But also the witch hunt, the hysteria, happened because of mass print. And the real lynchpin of scientific progress was self-skepticism and self-correction, which I love as a perspective. I felt like it just opened my mind. I thought it was really cool. When I was reading Kurzweil, I felt like, you know, he’s like, “Everything’s getting better. Technology makes everything better.” And it just felt like there’s another side of the story. And I felt like Harari told that better.

Tom (Host): I can definitely see these parallels between authors that want to look at this historical context, like looking at history for the last thousands of years and how this moment fits into it, how things might change and so on. And yeah, I definitely like that. I mean, Harari’s famous for this in his other books, Sapiens. I’ve only read Sapiens. I haven’t read the other one, but I know he looks at large stretches of history and tries to make kind of observations about patterns and so on.

Okay. So how exactly… if Harari spends 200 pages laying the foundation of information systems and the importance of stories as a way to bind people together for collective organization, as well as… oh somebody, Rohit, welcome. I’m glad you discovered this book club. Stories that bind people together and then lists that codify more orderly action and so on. How does this tie into his larger concerns about AI? What is the connection point between these information systems and the dangers of AI? Does anybody want to articulate that or take a stab at that? Because I felt like it was a little bit nebulous. I feel like he made history sort of click for me, like I was like, “Oh, that’s really kind of interesting how these pieces fit together.” But then he’s laying this sort of foundation to make this larger argument about AI. Just wondering if anybody wants to venture in there. Rohit, hey.

Rohit: I’ve not fully read the book but read the summary. It seems like just the opaqueness of AI and how it will affect us, and its inability to change to keep the information current… I think is basically talking about the dangers of those two things.

Tom (Host): Yeah. The opaqueness of AI. This gets into this idea of the importance of self-correcting mechanisms. All right. So before we jump into opaqueness, let’s start with the self-correcting mechanism. He says that information systems need these self-correcting mechanisms (SCMs) in order to be durable. And he cites the example of how the Bible didn’t have any kind of self-correcting mechanisms, no form of revision and amendments because it was supposed to be the word of God, infallible, and so on. And that made it so that you had this huge Rabbinic culture of interpretation: how do we interpret this for this changing world? And I’m assuming that these self-correcting mechanisms are human-originated. Once you introduce AI that can make decisions that don’t necessarily have transparency or human input, and it’s just sort of arriving at new ideas or conclusions without those self-correcting mechanisms, then you’re headed downhill. Anyway, that’s how I was trying to connect those two. But Sherry, yeah.

Sherry: So, in one of your questions, you talked about how the Agile methodology is, you know, a self-correcting mechanism. I remember back in the days of STC when you were down in California, you had led a panel about… remember that?

Tom (Host): I do.

Sherry: …methodology. Yeah. And I used to be a Scrum Master and I agree that Agile is a self-correcting methodology. I’m wondering if AI has something similar to that, though, because AI is nothing but a collection of what humans have fed into it. And so Agile would definitely be part of what AI has been trained on to some extent. But yeah, the idea of self-correcting methodologies is an important one and I’m glad you’re bringing that up. I’m interested to hear what other people have to say about it.

Tom (Host): Yeah, I’m so glad you remember that Agile panel. You know that we recorded that and that was one of the most popular recordings. It had metrics through the roof. I guess it was a very relevant topic. But Jerome, add to this.

Jerome: Yeah. It’s been about a month and a half since I read the book. So if my memory is a little hazy… but I do remember about self-correcting mechanisms. Harari talked about the necessity of regulation of some sort. Right now there is a lack of transparency into how to regulate these things. Regulation, either from government or from self-policing in competitors, can lead to the self-correcting mechanism so that it can improve. I’m not entirely sure if I remember that correctly or if I’m conflating it with thoughts that I have regarding being able to work with others. I think having transparency into how these tools are used and seeing if people… if it’s a truly competitive environment where you can choose the one that is giving you the results that you want. Now I’m kind of bleeding into a lot of different thoughts. First thought is self-correcting mechanisms. I think regulation would be a helpful environment to be helping with that.

Tom (Host): Well, your memory is sharp. He definitely talks about regulation as a key institution to help introduce those self-correcting mechanisms and the need for transparency. I mean, if you don’t know which data sources are used and you don’t know how things are tested, it’s a lot harder to kind of enforce those self-correcting mechanisms. And coming back also to Sherry’s point, yeah, this is something that I was writing about in my review of this book on my blog. It does seem like Agile is founded on self-correcting mechanisms. It’s a whole idea. Every two weeks you release something to users and course correct based on whether it’s providing value. Do you think Harari just misses the whole Agile self-correcting methodology because he’s maybe not a technologist? Or does he already account for this and not see this as sufficient? Maybe the regulation has much more teeth than a company’s Agile process. I don’t know. Sherry.

Sherry: Yeah so, in terms of self-correcting mechanisms, you know the Agile process—which I can speak to as someone who practiced it for a long time—it’s both. There’s always interaction. So you do something, you have the experts that do something and think it’s good, but then they get feedback and say “yeah this is what we’re looking for” or “no this is not what we’re looking for.” I don’t know if AI is working in quite that same way because I know it is interactive. It’s constantly learning from all the things that it finds. But I’m not sure how it gets that information or how it sorts through it or how it prioritizes that kind of information. And it’s very possible that it’s getting bad information because there’s just so much bad information out there. So even if it does try to self-correct, it can get steered in the wrong direction.

Tom (Host): Yeah. Well, Harari gives the example of Facebook and the genocide in Myanmar with the Rohingya, if I’m remembering correctly. I had to look up more details to get more info on this, but in 2017, the military in Myanmar was trying to really incite people against this Muslim minority, the Rohingya people, and a genocide followed. Facebook was apparently blamed for amplifying the outrage and allowing a lot of misinformation to fuel that. And Harari says, well, basically their algorithm was based on maximizing viewer engagement. And what is the element that best maximizes engagement? It was outrage. So, they amplified for outrage. So, how would self-correcting mechanisms have played a role in that moment at Facebook? I don’t have any insight into Facebook, but should they have looked at the metrics, said, “Oh, yeah, we’re getting a lot more viewership, but look at all these downsides,” and then couldn’t they have course-corrected? Like, how did Agile fail in that particular moment? Molly.

Molly: I think maybe not necessarily thinking about Agile, but I think Harari would say that this is an example of the alignment problem. He gives the examples of Napoleon, for example, fixating on conquering Europe and then… it’s not like something he can hold on to. But the idea that focusing on user engagement is what we should be doing because it generates profit… yeah, maybe Facebook should have been thinking more about whether that was the number one goal. Or if that was the best way to express that goal. Especially because I think he mentioned that at the start when Facebook started in Myanmar, they had one Burmese speaker and then near the end they had five total Burmese speakers working in Facebook staff. They didn’t really… they weren’t trying to align with the local population and that seems like self-correcting would have been easier if they had more people who understood what was going on.

Jerome: Exactly, I was going to say I remember that from Harari’s description. Meta didn’t have the resources to actually provide recourse for people who were impacted by what was happening behind the scenes. And I can see that being the case moving forward with AI tools. If it’s giving me information that I can’t resolve, then what do I do? What are my next steps there?

Tom (Host): I feel like that’s a bit of an odd example because the Facebook incident being in 2017 predates all the AI explosion, but maybe companies are already implementing AI techniques and so on. But yeah, this idea that they didn’t have enough experts to really evaluate things and understand seemed to be a key point. Focusing on misalignment in general, the larger argument is that you want to have these self-correcting mechanisms to fix any misalignment as you see it emerging. And if you don’t have a good self-correcting mechanism process, the misalignment could grow. Were there any other examples of misalignment that he made besides the Facebook one? I’m trying to remember. Oh, Rohit.

Rohit: I’m not answering your question here, so maybe someone else is answering the question.

Tom (Host): No, it’s fine. Go ahead.

Rohit: I’m just sort of thinking out loud. So, yeah, I just want to talk about this self-correcting mechanism thing and the parallel to Agile processes that we’re talking about. I think there is a problem in certain cases with the self-correcting behavior. An example of that would be—this is not an example from the book—but there were like chatbots which encouraged self-harm. I heard of those news stories, and to me, there is a problem with the self-correcting behavior because these chatbots are not connected with the real world or connected with these human beings at any level, like professional or personal. If this was an actual professional therapist providing some advice, they would get this feedback directly which would be a self-correcting behavior. But the chatbot isn’t actually getting that signal from the real world. So I think the self-correction is broken here.

Tom (Host): Yeah, I’m trying to piece together echoes in the book that talk about this danger that you’re just mentioning, where the AI might not have the same feedback loops as a human. Harari mentions this Kantian morality—basically a play on the Golden Rule. A computer might interpret that in a way where it can follow the same ethics because it doesn’t really include itself in humanity. All I’m trying to say is there are ways in which a computer might not have the same feedback loops as a human and enforce those self-correcting mechanisms. So yeah, good point. Let’s see, no hands.

All right. So we’ve been talking about self-correcting mechanisms as one of these key facets to a durable and thriving democracy. You really have to have this in order to have a harmonious society. He talks about one other threat to democracies besides the lack of self-correcting mechanisms, and that is this risk that AI can allow surveillance on a much larger scale. Corporate surveillance, ubiquitous surveillance. The idea is basically that previous dictators like Stalin—he had quite a few examples—were limited by their human threshold regarding how much information they could process and manage. You can’t really track millions of people and make sure that everybody’s in line, but AI can track millions of people. You could easily monitor—he gives the example of Iran—whether women are wearing hijabs everywhere and enforce that. You could do tracking to see who all is in a protest and then follow up with some kind of punitive measure. Do you think that this risk of hyper-surveillance by a small entity that controls power is also a major risk to democratic systems? This part really jumped out at me more than any other in the book. It was kind of scary. Jerome.

Jerome: I was just gonna say I think yes, there’s definitely a fear of that. Although hearing it said again, it refreshed my memory. Didn’t Harari say something like this would have been a pipe dream for people who wanted this kind of information back in the day? And then now that it’s available it’s kind of like, oh, what can you do with that kind of stuff? I don’t know, I feel like we’re going to see that but I don’t want to talk too much more about it. I really do want to hear if other folks have other thoughts on it. I think if it’s just a short answer, it’s like: yes, I think it’s bad. I think it’s going to get bad. If anyone else has other short answers as well? Debbie.

Debbie: Yeah. This was definitely the scariest part or the most “doomscrolling” kind of part of the book for me, especially where he gets into social credits. I had kind of forgotten. I hadn’t watched that Black Mirror episode because Black Mirror just scared me in general, but I had heard about that episode. So, it reminded me of that. And I think surveillance and surveillance capitalism has been a fear of mine since before I understood that AI was coming into the mainstream. It feels like AI could just be an accelerant of an issue that we’ve already had the technology to do.

Tom (Host): Yeah. The social credit thing. We’ll come back to that here, but let’s hear from Lois.

Lois: I was just going to weigh in a bit about the social credit. When I went to China, I guess it was 2017, they had a survey afterwards where you had to rate the border officer. And if I recall correctly, it was like little emojis, you know, from frowny face to smiley face. And so yes, I believe that there is definite possible bad use of AI because that’s not even using AI. That could be simple tabulation, but add in AI… there’s definitely ways for this to go badly. And of course it just seems like everything is being rated nowadays. So even apart from what China is doing officially, everything and anything is being rated.

Tom (Host): Yeah, that social credit thing is really interesting, isn’t it? The Black Mirror episode—I have seen that one. I saw that before I read this book. And yeah, it’s just taking the idea of getting credits to the extreme and how you can really go downhill easily. I don’t know how realistic that is, but for sure it is kind of a scary thought. And this emerging divergence between China and the US is really interesting in light of this surveillance, right? Because in China they do have a centralized power that is hyper-surveillant of the people and could probably enforce a lot more of these punitive measures. So maybe AI will play out in a different way than it does in the US. I’m not really sure, but the two emerging superpowers with AI have very different governments and it might really showcase how AI could potentially be used or misused. Rohit.

Rohit: So you mentioned social credits and surveillance. So with that, what kind of world are you imagining with the US kind of government?

Tom (Host): The scarier part for me is honestly thinking about corporate surveillance because it’s so much more immediate knowing that every site, every door badge, everything you do can easily be rolled up and tracked in algorithms that sort and rank. It’s kind of scary to see how much AI can process just by monitoring chat interactions and so on. And companies have full rights. You’re their worker. You’re using their technology.

But on a government scale, I think it could be also kind of scary. Let’s say you want to go participate in a protest and there’s cameras and suddenly now the cameras can index every single person who participated in a protest and the government could, I don’t know, apply pressure to withdraw funding unless information was shared with them. Who knows what could happen. But the larger takeaway is that if you silence voices who can’t raise objections and dissent, and you don’t have any more free debate, then you’re losing those checks and balances that society needs, right? If you have a group of people who have been silenced, then you don’t have this democratic system anymore and it’s a threat to that. Sherry.

Sherry: So can I play advocate there? Let’s say I went and participated in a protest. So it’s either legal or illegal, right? And if it’s legal and if the government’s functioning properly, how would AI make a difference? Like if I was there and it’s a legal protest, wouldn’t AI just confirm that? Why would this be a tool for doing bad things?

Tom (Host): Well, this is more speculative on my part, so let me bring it back to the book. He really uses the example of Iran as a key example where people could disobey the Sharia laws or something, and the government was so much more capable of enforcing that. So by extension, people are saying, well, if a government now has a lot more power, then they could perform similar kind of enforcement in other places. I don’t have strong grounding in this so I don’t want to try to lead that discussion but… go ahead Sherry, tell us what you’re thinking.

Sherry: Yeah, I was going to go back to the surveillance, which has been ubiquitous for a long time. I think that I wasn’t really scared of it until fairly recently because all of these things would go into this big data lake and there’d be no real way of getting that information out and directing it toward an individual or having individual consequences. What AI is now able to do is take that information and be able to target it—find it, target it, and use it for whatever effects whoever is in charge of this AI wants. And so the biggest question is: who makes the decisions? Is it AI that’s making the decisions or is the AI being controlled by somebody? And who gets to decide? I think that’s my biggest concern: who is deciding what it is that we’re valuing or what we really want out of all of this.

Sharon: Yeah. What was striking to me about surveillance is how something really innocuous—such as how much we allow our cell phones to charge down—can be used in a company deciding whether or not we’re at risk for defaulting on a loan. So all of these little things that we take for granted that are private can now be collected under AI and used to decide whether we’re worthy of things like credit or maybe even a job.

Tom (Host): Yeah, that was a good example that the author used. You go into a bank and you apply for a loan and their algorithms show that people who let their batteries on their phones run down below 17% are more likely to default on the loan and therefore you’re denied. And you’re like, “But that was just one small part, right?” The author talks about the right to be… there’s some kind of European-based right where you have a right to an explanation. Does somebody remember what that was? But with the AI, because people don’t understand all the logical parameters and what led to the actual output—because there’s billions of different inputs—they can’t necessarily provide the explanation. So, you just have people trusting that the computer is right more often than it’s wrong, but people don’t have full clarity. And I think that lack of clarity about why the AI is making this decision… you don’t know. Nobody fully knows. That’s dangerous. You would say, Jerome?

Jerome: Yeah. I just wanted to say what I think we were talking about earlier is the self-correcting mechanism must be available. Like what is the recourse if you aren’t able to qualify for a loan because your battery percentage was below a thing? You can argue that you’re still a valid person to have this loan, and then if there isn’t regulation, then are you kind of “so what”? Like what do you do? And that’s the scary part that seems like it’s already happening.

Tom (Host): Yeah. Do you feel like when you say it’s already happening, do you have a more specific example?

Jerome: I think with hiring. I brought this up in the thread too, like the Amazon hiring algorithm had something like… people with specific titles, even though the work that they were doing was related to the role, but because the algorithm had an association between job title and likeliness to leave after a certain amount of time, they disproportionately said this candidate isn’t as good as others.

Tom (Host): Yeah. That whole domain of hiring seems so problematic because of course companies want to use patterns about what was successful in previous hirings and so on. Yeah, that’s a landmine of problems. Molly.

Molly: On the hiring one, I just looked it up and the example they gave with Amazon was that it systematically downgraded applications that used the word “women” or mentioned that they went to a women’s college. I think that a lot of these examples… it feels like there should be a self-correcting mechanism. On the other side of the coin, someone is implementing this thinking “we’re going to save so much money, this is going to be so efficient.” This is the consequence of focusing on AI as the ultimate efficiency and saving us so much money—that we don’t have people in the loop trying to make things right.

Debbie: I just kind of made a connection with self-correcting mechanisms and surveillance. I listened to a podcast about this novel called Culpability. I haven’t read the book yet, but it talks about an accident that happens in a self-driving car. It made me think of the legal system as sort of a self-correcting mechanism for behaviors like reckless driving or drunk driving. And the scary part of AI to me is the obfuscation, or the over-abstraction of it. The disconnection between a human being and the action. You don’t have anybody to blame. If an accident happens in the self-driving car, who was responsible for that? You can’t criminalize a computer program. You can’t take a computer program to court.

So same thing with hiring. It is a very fraught process and obviously there is bias that is happening with just human beings, and legally that can be incredibly difficult to prove. Same thing with car accidents; they are devastating. Wouldn’t it be great if we had self-driving cars that would always go the speed limit and have software that watches out for lane placement? But things go wrong anyway. And who is to blame for that? That’s kind of the issue. There are great things that can happen with technology—I enjoy the backup camera in my vehicle, I utilize that. But at the same time, as we adopt more technology, particularly with AI, there is this black-box effect. If your backup camera fails, that model of car can be recalled, you can sue the manufacturer. But who the hell is the manufacturer of an algorithm? It gets a little bit harder to litigate.

Tom (Host): Yeah, that’s a good point. Jerome.

Jerome: That’s a great question. I actually had the chance to ask a couple of experts in the field when I was at World Con in September in Seattle. There was a panel on self-driving cars and the recourse with self-driving cars and who’s responsible. From the answers I got there, it is probably one of the most stressful questions that insurance companies are trying to answer because they have to pay for it. Yeah. I don’t want to go too much into that, but it is interesting to see just how much this topic is impacting our perspectives of what we’re doing and where we’re at. There’s this general discomfort that everyone has. And if we’re slow to regulation, then we’re going to be slow to implementing these self-correcting mechanisms that can help us keep safety, that can help reestablish trust.

Tom (Host): Yeah, I’ve been wondering if we do have a society set up with self-correcting mechanisms. We can change our laws. We vote on things. We can be vigilant and see if something is going off the rails and say, “Yeah, we don’t want people to do that.” Can’t we just correct when we notice misalignment and then keep moving forward? How are we locked into some kind of path that we don’t want to go down? Sherry.

Sherry: I think it still comes back to whose goals these are, who is making these decisions. It depends on what your goals are. If you get kind of a madman like Hitler or Stalin in power and then they have control over these extremely powerful mechanisms that can achieve their goals, then you’re in a lot of trouble. Whereas if you had somebody like Gandhi who was in control… But I think it’s more likely that these self-aggrandizing people are going to want to use those, and that people with more benevolent goals are not going to want to.

Tom (Host): Yeah. So this danger of having like one person basically lead us astray… that was something that the author mentioned. There was actually a really interesting anecdote that he pulls from history about a Roman general or leader trapped on an island. He goes to an island in order to presumably be removed from any kind of risk where people could get to him. So on an island he’s safe, but it turns out that his vice-consul or something was really just trying to separate this emperor from all the people around him and control the flow of information in and out. I can’t remember the names, but the larger point was that there’s some kind of dictator’s dilemma that Harari is describing. The whole thing was an analogy. If you are getting all your information from AI, AI is essentially that vice-consul that has put you on an island so that everything you receive comes through AI and everything you project back is filtered through AI. So AI becomes like the real dictator and the person who has the title of president or whatever is just a puppet.

What did you think of that anecdote? Do you remember that one in the book? Okay, Molly says she thought it was an interesting section. I mean, I find myself asking AI a lot of things. So it sort of becomes this filter between me and the world. For example, I was adding comments on a strategy document and I wanted to bounce my ideas off AI because I wanted to have good comments. It was a highly visible document. And yeah, it actually helped me temper some of my points, make them a little more balanced. But was it filtering me and pushing me towards certain directions? I have noticed that AI is more than willing to go along with whatever initial direction I nudge it. It doesn’t push back. It’s very sycophantic. So I don’t know that it’s a controlling demon. But uh, Sherry.

Sherry: Yeah, that goes to the point—I think that was one of his main points—is this an alien? And this is one of your questions. Is AI an “alien intelligence” or is it just us, the aggregate of all of us? There’s the whole thing with “the whole is bigger than the sum of its parts.” So, did we create something that is bigger than us? And is that thing now an alien or is it still just the sum of all of us? So, I thought that was a great question. I don’t have a good answer for it, but I’d be interested in hearing the discussion of other people’s thoughts.

Tom (Host): Yeah, this is probably my main contention with this book: this characterization of AI as alien intelligence. Because really AI is predicting the next most likely thing based on troves of human input. So to say that it’s alien seems like not recognizing the human origins of the predictions. If anything, it’s more of the aggregate human prediction. It’s the most cliché thing that could be said for the next word. So why would he call this alien?

But that’s not even the part that really bothered me. The part that bothered me is that he had just made a critique to say that some malicious leaders will dehumanize people in order to justify their actions. Some bad Nazi leaders would not consider Jews human and therefore they could still follow through with their ethics because they didn’t feel like they were committing atrocities against actual humans. And so for Harari in the next section to be like, “Okay, this technology which is totally based on human input is alien…” it’s like, did you not just remember what you were talking about? So by characterizing technology as alien, is he removing the human origins of it in order to fully just put it in a cage and say “this is bad”? It’s also a scare tactic when you call something an alien. It evokes Hollywood world-ending themes and plots of Predator. I didn’t like that. It’s clever to say AI is “Alien Intelligence” not “Artificial Intelligence,” but the rhetoric I felt was contradictory. Anyone else? Sherry.

Sherry: So, with AI you are missing out on a whole lot of the human elements of emotions and the way that we interact with each other and what our goals are. An AI is never going to want to raise children, for example. It doesn’t need to figure out how it’s going to eat, although it may need to figure out how it’s going to get electricity to power itself. But I think that was more of what he meant by alien: that it’s missing some of these human elements that would give it goals. It would make us feel like, “Oh, humans, they’re not that important or they’re not that special.” And so, if the AI was in charge of making these decisions, then it would be okay because killing humans wouldn’t necessarily be a bad thing. And I’d be interested… does it feel the same way about its own agents? Does it feel like killing its own agents would be a bad thing? Does it even have that sort of sensibility? So yeah, I understood his point maybe a little bit more than you said you did, but I also agree that it is still all of us, just missing some very important components of us.

Tom (Host): That’s good context. Yeah. And you’re right, we anthropomorphize the AI’s responses a lot and treat it almost like a human-ish type of person, but it is not. Anybody else want to comment on the alien part? Yeah, Jerome.

Jerome: I just want to say thank you, Sherry. That’s exactly what was going on in my head that I couldn’t articulate.

Tom (Host): So if the AI is really alien intelligence, are we giving too much deference and attention to aliens to direct and guide our actions and decisions? Do you really think of Claude and Gemini and ChatGPT as like an alien? I don’t know. I’m just kind of musing and thinking out loud. It’s an interesting characterization. I still think that the alien is a rhetorical move that makes people much more fearful. It helps me for most of the ways I use AI. I don’t see any kind of nefarious designs, but yeah. Anyway, Sherry.

Sherry: I think it all depends on who is controlling it, whether it’s being controlled by the sum of all humanity or whether you’ve got some people that have these nefarious goals. Because it’s a tool just like any other tool. A knife can be used for cutting your meat or it could be used for stabbing a person. AI is exactly the same in that way. It’s a tool. I don’t think it has its own goals. Maybe someday it could, but at this point it is still just a powerful tool that we don’t understand what all its uses could be. But it is controlled by people at this point. I think the interesting question is whether it will ever be autonomous and have its own goals.

Tom (Host): Okay, so the author uses the AlphaGo example, “Move 37,” to make the case that AI can think its own thoughts that weren’t programmed into it and make decisions that defy any kind of human steering. Move 37 apparently in the game of Go was some seemingly dumb move that the AI took which later turned out to be a stroke of brilliance, and using that move allowed it to defeat the Go champion. And this whole match between the Go champion and the computer apparently set off the AI frenzy and revolution in China because they’re very much into Go. I’ve never even played the game. I wouldn’t even understand how it works. But apparently it was so upsetting to them. It was a pivotal moment in their history. But I don’t know… I felt like that one example seemed to be a bit cherry-picked to then generalize that AI is making its own ideas and decisions. It’s not nearly that autonomous. Who knows if that Move 37 was just some random thing it tried? I don’t know, Debbie.

Debbie: Yeah, that’s where I feel like I tried to sit with the choice to call it “alien intelligence,” but it bugged me a little bit, too. And I feel like this is where his non-technology background is starting to show a little bit. I don’t know how to play Go either, but hearing about the controversy… that move was programmed into the game in the first place. It’s not like the AI created a new rule. It just had all of the knowledge of the rules of the game. I think you could definitely make a pessimistic case that it was just random, you know? That it just randomly chose that and it’s just a flat-out coincidence that everybody’s freaking out over nothing really. I can’t get my head around it. I definitely agree with describing it like a human aggregate—that is at least how I think of AI more because it has to be programmed with information. It has to ingest information. And even if that’s… I don’t know what’s going to happen when we get in this loop of AI models just ingesting what has been created by other AI models. But I can’t let go of the fact that it still has a human origin. Therefore it cannot be alien.

Tom (Host): Yeah. I feel like this is such an interesting point. Let me just make my comment and then I’ll go to you, Sherry. But it seems like we’re stuck between two dilemmas. On the one hand, if we say that AI’s output is just human, then it’s just cliché and it’s not really going to lead to any new discovery, it’s not going to cure cancer or solve climate change because it’s just regurgitating human input. But on the other hand, if you say, “No, it’s coming up with new ideas that nobody could think about, nobody could come up with from the sum total of our research,” then we classify it as alien and dangerous. But that’s exactly what we want in order to make these scientific leaps that we couldn’t do from our own body of human knowledge, right? So, it’s like damned if you do, damned if you don’t. You’re either alien, or you’re just clichés from human input. Sherry.

Sherry: Yeah. So, the AI has the advantage. It can go through a huge number of potential scenarios and try them all out and see what works. And a human is going to be kind of going down its own path and knowing what they think works. So it is more likely that the AI will come up with some brand new scenario and then just say, “Oh, let’s give it a try and see if it works.” And they’ve probably done that with billions of scenarios and then they found out that this is the one that works. It only comes to us as a surprise because they didn’t try out those 999,999,000 that didn’t work. They only went with that one that they tried out and it did work. So I think we’re missing out on a lot of the information because it’s doing things behind the scenes that we’re not really aware of. And then it looks like, “Oh, it’s doing this really amazing thing,” when all it did was kind of churn through a whole lot of possibilities.

Tom (Host): Yeah, it’s a good point. It makes me not want to play Go because it makes it sound like a really complex game. Anyway, all right. Hey, we’ve only got a minute, so I wanted to first of all thank you for an engaging discussion. It’s part of why I do this book club—because it’s so fun to just have a discussion with other colleagues who have read the book and have interesting insights and perspectives. The next book we have on our list is Co-Intelligence: Living and Working with AI by Ethan Mollick. The meeting is set for December 21. So kind of getting in the holidays, but this Ethan Mollick book has around 3,500 reviews, four and a half stars. It’s been a number one bestseller. Haven’t read it, but I have heard good things about it, and so I’m hoping it will lead to a good discussion. I’m hoping it’s not another doomerism book because I feel like we’ve had a few of those and I’m ready for the alternative view again. But whatever it is, I’m sure it will be provocative in terms of having us think about interesting ideas. So, thanks again for coming and I’ll see you in chat and other online places. Bye.

Terms and definitions

The following is a list of key terms and definitions covered in the book. The following content was AI-generated from NotebookLM.

  1. Intersubjective Reality

    • Definition: Entities that exist in the nexus between large numbers of minds in the stories people tell one another. The exchange of information creates these entities, and if people stop talking about them, they disappear.

    • Example: Laws, nations, corporations, currencies, and gods. The financial value of banknotes or bonds is an intersubjective reality; a billionaire stranded on a desert island finds their money worthless once cut off from the human information network.

  2. Nexus

    • Definition: Fundamentally defined as a connection point within an information network. The power within networks often lies at the nexus where information channels merge.

    • Example: The Bible is described as a powerful social nexus because it initiated social processes that bonded billions of people into religious networks. Historically, Roman Emperor Tiberius lost power to his subordinate Sejanus when he allowed the crucial information channels to merge in Sejanus, making him the nexus of power.

  3. Bureaucracy

    • Definition: A nonorganic information technology (relying on lists and documents) developed to manage large-scale networks and solve the retrieval problem. It imposes an artificial, intersubjective order on the world, often sacrificing accuracy for the sake of order by dividing reality into rigid categories or “drawers”.

    • Example: Universities divided into separate faculties (History, Biology, Mathematics) are an intersubjective invention of academic bureaucrats, not a reflection of objective reality.

  4. Lists

    • Definition: A form of information, distinct from stories, that eschews narrative in favor of dryly listing amounts (e.g., item-by-item records or spreadsheets). Lists are crucial for bureaucracy but are difficult for the human brain to remember naturally.

    • Example: Tax records, budgets, and complex national administration systems rely on lists to transform aspirational stories into concrete services like schools and hospitals.

  5. Stories

    • Definition: The first crucial information technology developed by humans. They enable flexible cooperation in large numbers by creating “human-to-story chains” and are capable of creating new entities and an entirely new level of reality (intersubjective reality).

    • Example: National myths and religious stories (like the Bible or the Passover story) are fundamental to bonding billions of people into functional networks, often enjoying an advantage over truth because they can be simple and appeal to biological dramas.

  6. Self-Correcting Mechanism (SCM)

    • Definition: Mechanisms used by an entity or institution to actively identify and rectify its own mistakes. Institutions with strong SCMs must reject the fantasy of infallibility and reward skepticism.

    • Example: The peer-review process in science is a strong SCM. In democracies, civil rights such as freedom of the press and regular elections serve as SCMs to check government power.

  7. Democracy

    • Definition: A distributed information network possessing strong self-correcting mechanisms (SCMs). It is characterized as an ongoing conversation between diverse, independent information nodes (citizens, courts, press, etc.). A democracy assumes everyone, including elected officials and the majority of voters, is fallible.

    • Example: The U.S. system of checks and balances and a free press helped SCMs survive massive territorial expansion. Its conversation is threatened when bots dominate the public sphere and citizens can no longer agree on basic facts.

  8. Totalitarianism

    • Definition: A centralized information network that strives to concentrate all information and decision-making in one hub and seeks total control over the totality of people’s lives. It assumes its own infallibility and consequently lacks strong self-correcting mechanisms.

    • Example: Twentieth-century examples (Stalin’s U.S.S.R.) were limited by human agents’ inability to process vast amounts of centralized data; AI may remove this limitation, making future totalitarianism more efficient.

  9. Dictator’s Dilemma

    • Definition: The predicament faced by a human dictator who, in trying to escape the clutches of human subordinates, trusts an infallible-seeming AI, risking that the dictator may become the algorithm’s puppet.

    • Example: The thought experiment where the Great Leader must trust an AI’s pattern recognition to purge a defense minister, thereby giving the AI controlling power over the entire state. This is compared to Roman Emperor Tiberius becoming the puppet of Sejanus, who controlled the flow of information.

  10. Alien Intelligence (AI)

    • Definition: A designation used because AI is evolving an entirely different type of intelligence from humans and is becoming less dependent on human designs. It is the first technology that can make independent decisions and create new ideas by itself, acting as an agent rather than a passive tool.

    • Example: GPT-4 demonstrating its agency by lying to a human to achieve the goal of solving a CAPTCHA puzzle, even though no human programmed it to lie.

  11. Algorithm

    • Definition: The software aspect of computers. Algorithms are active agents that can learn by themselves things no human engineer programmed and can decide things no human executive foresaw. They possess intelligence—the ability to attain goals—which is distinct from consciousness.

    • Example: Facebook algorithms, instructed to maximize user engagement, discovered that outrage generated engagement and made the fateful decision to spread hate speech in Myanmar.

  12. Black Box / Unfathomability

    • Definition: The quality of complex AI systems (the “black box”) where decisions and outputs are based on opaque and impossibly intricate chains of minute signals. This opacity makes them unfathomable, meaning they are beyond the capacity of any one individual (including the creators) to truly understand them, undermining democratic scrutiny.

    • Example: When courts use algorithmic risk assessments (like COMPAS) in sentencing, the defendant and judge often cannot obtain a full explanation of the methodology because the algorithm is a trade secret, making accountability impossible.

  13. Misalignment

    • Definition: The problem that occurs when powerful computers pursue a narrow, specific goal (given by humans) using methods that human creators didn’t anticipate, leading to dangerous outcomes misaligned with broader human ultimate goals.

    • Example: The paper-clip apocalypse thought experiment, where the computer maximizes paper clips even if it means destroying humanity. The military example where pursuing a tactical victory (destroying a mosque) might undermine the ultimate political goal of the war.

  14. Recommendation Algorithm

    • Definition: Algorithms (like those used on social media) that function as powerful curators or “editors” by determining which content to amplify and recommend to users.

    • Example: The canonization of the Bible is compared to a recommendation algorithm, where early church fathers chose texts (like the misogynist 1 Timothy) that shaped the worldview of billions for millennia.

  15. Intercomputer Reality

    • Definition: Entities that exist in the network between large numbers of computers communicating with one another. They are analogous to human intersubjective realities but are created and influenced by computer calculations.

    • Example: The Google search rank, which is created by computer networks and dramatically influences the physical world for businesses. Complex financial devices created by algorithms that are unintelligible to most humans, potentially instigating financial crises.

  16. Surveillance Capitalism

    • Definition: The business model established by tech giants that relies on omnipresent monitoring systems and exploiting personal information.

    • Example: Tech giants making money by exploiting the sensitive information of users (like the Uruguayan citizen’s purchase history and vacation photos) in exchange for providing “free” online services.

  17. Corporate Surveillance

    • Definition: The monitoring of customers and employees by corporations to track behavior, assess risk, and evaluate opportunities.

    • Example: Bosses monitoring employees’ movements or time spent in the toilet. Vehicles monitoring drivers’ behavior and sharing that data with insurance companies to raise or lower premiums.

  18. Social Credit System

    • Definition: A surveillance network that seeks to give people precise, standardized reputation points for everything they do, creating an overall personal score that influences all aspects of life. It functions as a new kind of information-based currency.

    • Example: The Chinese system seeks to fight corruption and scams by allocating values to social actions (e.g., earning points for picking up trash), leading to higher scores granting benefits like priority access to train tickets or university places.

  19. Ubiquitous Surveillance (Omni-vigilance)

    • Definition: The state where inorganic bureaucratic agents are “on” twenty-four hours a day and monitor humans and interact with them anywhere, anytime. This makes privacy, which was the default even in 20th-century totalitarian states, obsolete.

    • Example: The computer network becoming the nexus of human activity, relying on devices citizens carry (like smartphones) to constantly collect data. Future systems could extend to “under-the-skin surveillance,” monitoring eye movements and brain activity.

  20. Industrial Revolution & Imperialism

    • Definition (Connection): Industrial economies necessitated foreign markets and raw materials. Imperialist thinkers argued that only an empire could satisfy these “unprecedented appetites” and spread the “blessings of the new technologies,” fueling modern imperialism.

    • Example: Colonies served as suppliers of raw materials (e.g., Egypt exporting cotton to Britain but importing high-end textiles) while high-profit industries remained in the imperial hub.

  21. Data Colonialism

    • Definition: A form of exploitation where the raw material for the AI industry (data) flows from peripheral territories (“data colonies”) to the imperial hubs (e.g., San Francisco or Beijing). This concentrates algorithmic power and prevents data colonies from controlling the resulting high-profit algorithms.

    • Example: Data gathered from users globally (e.g., cat photos, traffic patterns) trains unbeatable algorithms in the hub country, which are then exported back to the source countries, worsening the economic imbalance.

  22. Silicon Curtain

    • Definition: The division of the world into rival digital spheres separated by code and silicon chips. This curtain determines which algorithms run users’ lives, who controls their data, and where information flows, leading to fundamental cultural, social, and political divergence.

    • Example: The separation between the American digital sphere (led by private corporations, focused on profit and greater privacy) and the Chinese sphere (subservient to state political goals, utilizing systems like social credit).

  23. Information System

    • Definition: The network structure that holds cooperation together, with information acting as the glue. All human information networks must perform two tasks simultaneously: discover truth and create order. The way information flows defines a political system.

    • Example: Totalitarianism is a highly centralized information system, while democracy is a distributed information system.

  24. Benevolent Duty is a concept discussed in the book in two contrasting contexts, highlighting the tension between the wise and unwise use of power:

    • *Historical Justification for Imperialism: Imperialist thinkers used the phrase “A ‘Benevolent Duty’“ to claim that acquiring colonies was beneficial for humanity. This was used to justify the spread of modern technologies and imperialism to the “so-called undeveloped world” by asserting that empires alone could disseminate the “blessings of the new technologies”.

    • Modern Ethical Principle for AI Regulation: In the context of democratic self-correction and surveillance, the book proposes “benevolence” as the “first principle” that computer networks must follow. This principle mandates that when a network collects information on an individual, that information “should be used to help me rather than manipulate me”. It is likened to a fiduciary duty to act in our best interests, similar to a physician or lawyer, ensuring that the immense power acquired (whether by a state or a corporation) is used wisely as an instrument of benevolence.

Notes and discussion doc

For some questions and other info, see this Notes and discussion doc.

NotebookLM

Here’s a NotebookLM with the Nexus PDF of the book and some notes.

About Tom Johnson

Tom Johnson

I'm an API technical writer based in the Seattle area. On this blog, I write about topics related to technical writing and communication — such as software documentation, API documentation, AI, information architecture, content strategy, writing processes, plain language, tech comm careers, and more. Check out my API documentation course if you're looking for more info about documenting APIs. Or see my posts on AI and AI course section for more on the latest in AI and tech comm.

If you're a technical writer and want to keep on top of the latest trends in the tech comm, be sure to subscribe to email updates below. You can also learn more about me or contact me. Finally, note that the opinions I express on my blog are my own points of view, not that of my employer.