Review of Yuval Noah Harari's "Nexus" — and why we don't need self-correcting mechanisms for "alien intelligence"
- Harari’s core framework: Stories, lists, and “in formation”
- The key to longevity: SCMs in the Bible vs. the Constitution
- Harari’s great warning: AI as “alien intelligence”
- AI as black box algorithms
- SCMs in practice: From agile to AI
- A rebuttal: Is AI “alien” or “aggregate human”?
- The “boring” reality: A more optimistic take
- Harari’s alien rhetoric
- Conclusion
For a short video (7 min.) of this post, see this NotebookLM video.
Harari’s core framework: Stories, lists, and “in formation”
To summarize a bit of the book, Harari explains that there are two types of fictions we impose on reality: stories and lists. Their role differs as follows:
- Stories include myths propagated by religion, constructs like money, morality, business entities, and more.
- Lists are regulations and controls put in place by bureaucrats, who attempt to create order by organizing and compartmentalizing reality into different drawers within a bureau. (Fascinating etymology there.)
Both information systems are fictions superimposed on reality. Money only exists as a theoretical construct in our minds, and the tax code is an elaborate system of rules and codes that also imposes an order not physically observed in the world.

Both stories and lists, which provide the fabric of information systems, serve a purpose: they put people “in formation,” meaning their job is to create connections in society that align people with certain ideas and states. (Harari’s word play is brilliant here.) Using stories/lists to put us in formation, Harari says, is the great achievement of the human race — we’re able to collectively organize through these information systems to create institutions and belief systems that connect and unite us together. Think about how religious belief systems (e.g., Christianity or Hinduism) have led to massive cooperation and collaboration across large distances and geographies.
Harari argues that information’s primary function isn’t to be true (to exactly represent reality), but to build networks. He writes that information “always connects. This is its fundamental characteristic,” and that the more important question about information isn’t how true something is, but “‘How well does it connect people? What new network does it create?’” (42). Hence the term “nexus,” which refers to information’s ability to connect us together.
The key to longevity: SCMs in the Bible vs. the Constitution
Having introduced information systems and dismissed the importance of accuracy, Harari then moves on to analyze good information systems versus flawed ones. Harari says the Bible was the first attempt to create an infallible body of written content, based on stories, that people could live by. But the Bible had a major flaw: it didn’t include any self-correcting mechanisms.
Because the Bible was supposed to be straight from God, it couldn’t be subject to amendments and corrections, even when it seemed to condone things like slavery (implied as acceptable by the 10th commandment, which warns against covetousness by using the example of not coveting someone else’s slave). This lack of a built-in SCM forced a workaround: the Bible ended up spawning a rabbinic culture of scholars who interpreted the text, and then interpreted one another’s interpretations, as they tried to stretch a fixed text to fit a changing world.
Fast forward to the US Constitution, another high-level story that imposes a collective ordering and belief system for society. Unlike the Bible, the Constitution does allow for amendments through a baked-in self-correcting mechanism in Article V. Because of this, it could evolve and adapt to changing times. Harari notes this “crucial difference… is clear from their opening gambits. The U.S. Constitution opens with ‘We the People.’ By acknowledging its human origin, it invests humans with the power to amend it,” unlike the Bible, which “precludes humans from changing it” by claiming divine origin (60). In other words, a system that admits it’s human-made can be corrected, while one that claims divine infallibility becomes rigid.
So far, Harari has argued that information systems aren’t built upon factual representations of reality but rather present stories and lists that bind us into collective belief systems. But for these information systems to be durable, they need self-correcting mechanisms to adapt and evolve to change. Without these SCMs, the information systems can become irrelevant or obsolete.
Harari’s great warning: AI as “alien intelligence”
We haven’t even gotten to AI yet. Harari is setting the stage to make an argument about why information systems need self-correcting mechanisms, and he’ll use that characteristic to undercut the strength of AI. But first, he wants to debunk the idea that other qualities besides SCMs are important. What about content volume? Does an abundance of content lead people toward a consensus that represents more accurate information systems?
No — Harari argues that the flood of information unleashed by Gutenberg’s printing press didn’t automatically ensure better information. The sheer volume of information didn’t lead to truth. Instead, the printing press fueled witch hunts through incendiary, sensationalist publications (like the Malleus Maleficarum) about witches stealing men’s penises.
In the same way, AI’s content production machines, churning out AI slop, won’t solve information problems either. Volume alone is ineffectual in achieving truth.
AI as black box algorithms
Putting aside content volume, Harari is concerned that AI leads to black-box algorithms that make decisions without human input. He describes AI as “a completely new kind of information network… controlled by the decisions and goals of an alien intelligence,” one where “we are in danger of becoming a sidekick to our own creations” (213). In other words, he fears that AI is becoming an independent force that we’re starting to serve, rather than the other way around. If we abdicate our decision-making to an AI that might have goals that are misaligned with our well-being, we run the risk of societal catastrophe. (The word play Harari uses here, converting “AI” from artificial intelligence to “alien intelligence,” is clever but also a rhetorical move that I’ll later criticize.)

Although it’s become all too common for authors to take alarmist and speculative positions on the dangers of AI, I still like Harari’s observations. He encourages us to institute SCMs while AI is in its infancy, underscoring the stakes by arguing that “as a network becomes more powerful, its self-correcting mechanisms become more vital.” He warns that if a “Silicon Age superpower has weak or nonexistent self-correcting mechanisms, it could very well endanger the survival of our species” (399). In other words, the more powerful our technology becomes, the more essential it is to have systems in place to check that power.
SCMs in practice: From agile to AI
Now, let’s connect Harari’s historical lessons to the present-day world of documentation (because most readers of this blog are technical writers). I want to focus on these self-correcting mechanisms and how they might play out in the life of a technical writer. First, I find a lot of similar messaging between Harari’s book and Jonathan Rauch’s The Constitution of Knowledge, which I reviewed here: Speaking up and calling out BS when you see it. Rauch argued for essentially the same thing, grounding the vitality of democracies in institutions like the press, academia, and the courts, which encourage debate and critical conversations. Rauch says the government was originally designed with checks and balances that force this kind of conversation, and that the resulting compromises ideally produce the self-corrections that allow for evolution and adaptation to change.
But here’s the thing Harari overlooks in his criticism of AI: most software methodology is already built on self-correcting mechanisms. This is the whole idea of agile development: you release regularly (such as every two weeks) and gather continuous feedback from your users to influence the next sprint. Rather than working heads down for years on a solution blueprinted out in intricate detail, you work in small increments, using the feedback to course correct early and make sure you’re delivering a product that has value to users. This user feedback loop is the SCM. Agile methodology is essentially the Constitution’s Article V for software—a built-in mechanism for continuous amendment and improvement based on real-world feedback.
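To make that loop concrete, here’s a minimal sketch in Python. It’s a toy illustration, not anyone’s actual process: the function names (`plan_next_sprint`, `run_release_cycles`) and the data are hypothetical, but the shape of the loop (ship a small increment, collect feedback, reprioritize, repeat) is the self-correcting mechanism that agile is built around.

```python
# Toy sketch of an agile feedback loop acting as a self-correcting mechanism (SCM).
# All names and data here are hypothetical, purely for illustration.

def plan_next_sprint(backlog, feedback):
    """Reprioritize the backlog so items users flagged move to the front."""
    flagged = set(feedback)
    return sorted(backlog, key=lambda item: item not in flagged)

def run_release_cycles(backlog, collect_feedback, sprints=3):
    """Ship small increments and let user feedback steer each sprint."""
    for sprint in range(1, sprints + 1):
        increment = backlog[:2]                  # ship a small slice, not a multi-year blueprint
        feedback = collect_feedback(increment)   # the SCM: real users report back
        backlog = plan_next_sprint(backlog, feedback)
        print(f"Sprint {sprint}: shipped {increment}, users flagged {feedback}")

# Pretend users always complain about the docs in whatever we ship.
run_release_cycles(
    backlog=["search", "docs", "onboarding", "exports"],
    collect_feedback=lambda shipped: [item for item in shipped if item == "docs"],
)
```

The SCM here isn’t anything exotic: it’s just the feedback call sitting inside the loop, steering what gets built next.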
With AI, the same agile methodology is in play. We saw this with OpenAI’s release of GPT-5 — when the model stripped away the warmer, more emotional personality that some users loved, the feedback was harsh and critical. In response, OpenAI brought back the emotional flourishes (the warmth and conversational tone) in GPT-5.1. This is a user-driven SCM in action. The rapid iteration cycles we’re seeing with AI models demonstrate that these systems aren’t rigid “alien intelligences” but rather highly responsive tools shaped by human feedback.
A rebuttal: Is AI “alien” or “aggregate human”?
I don’t think the algorithms that LLMs follow are entirely black boxes or even “alien intelligences.” First, if they were entirely black boxes, researchers and scientists wouldn’t know how to steer them toward improvement from one version to the next. Yet quality and accuracy keep improving, thanks to better training techniques and other algorithmic adjustments.
Harari’s right that myriad parameters and inputs are at play in producing a dynamic, non-deterministic output. And I agree that we often can’t understand all the parameters and logic that lead to a given output. Harari uses the example of AlphaGo’s “move 37,” a brilliant, game-winning move that its own creators couldn’t explain (334). He uses this example to argue for AI’s unfathomability — we don’t know why an AI generates what it does. It’s “alien” and seemingly beyond the reach of self-correcting mechanisms.
He then goes on to argue that because this alien intelligence operates on an opaque internal logic with unpredictable results, there’s a high risk that it misinterprets its larger goals and brings about dystopian outcomes. The key example Harari provides is Facebook’s amplification of outrage in pursuit of maximizing engagement, which culminated in Myanmar’s military using Facebook to foment hatred against the Rohingya and contributed to the 2017 genocide (the algorithms were simply maximizing viewership by amplifying outrage). While this is a strong example, it happened in 2017, before the current AI explosion, and it’s unclear to me how much human input factored into the algorithms deciding what users saw. To me this example seems like an outlier, not the norm.
The norm is actually much more boring. People are using AI to do more of what they’re already doing, but faster, cheaper, and better. Using AI, my life is not suddenly subject to the will of some alien intelligence that manipulates me into some dark end (like amplified outrage, or a sycophantic delusional spiral). Instead, I can write release notes faster, perform more complex analysis of data, and create more accurate, thorough documentation in a fraction of the time. In my experience, AI tools serve to amplify my capability, not replace my judgment. I can still exert plenty of human-initiated self-correcting mechanisms on AI’s output.
The “boring” reality: A more optimistic take
I do worry about some of the hyper-surveillance possibilities that AI opens up, though. This is another major concern that Harari explores in Nexus. There are certainly worries that corporations could scale surveillance to levels rivaling a Stalinist regime. Hyper-surveillance is also a concern expressed in this AI Open Letter by Amazon employees, which asserts that “Amazon is helping build a more militarized surveillance state with fewer protections for ordinary people.”
However, despite the risks of empowering totalitarian surveillance states, I’m also curious and optimistic about the possibilities that AI will offer. Rather than rejecting AI due to surveillance fears, what if so much more could go right? For example, could I “surveil” my own life to better understand patterns and causal factors for different outcomes (like these ocular headaches I occasionally get)? In short, despite the negatives with AI slop and the criticisms here and there about AI (the algorithms that go wrong, the potential mass unemployment, the AI drone wars, etc.), on the whole I hope it’s accelerating us into a better state — one in which we’re more informed and empowered. I admit I’m curious to see how a mature AI turns out.
For now, most AI usage is much less sensational than the examples in Harari’s book. To give a boring example of how I’m using AI as an SCM: the other day I was reading a strategy document at work and wanted to make some comments. The document was an important one, and I didn’t want to put my foot in my mouth, so I bounced my ideas off Gemini to see if it agreed with my intuition. In most cases it did, but it also helped me tone down some of my more pointed comments into a more diplomatic, balanced view. In this case, the AI served as a personal SCM, helping me align my intent with my desired outcome. AI didn’t override my judgment but rather helped me refine it. This is the essence of an SCM — it helps us course correct through feedback, without removing our agency.
I also recently used Gemini’s deep research feature to figure out what the indicator lights on my VW Jetta’s dashboard mean (they all lit up in a cascade, like a Christmas tree). When the lights came on and the car switched into “emergency” or “limp-along” mode, I used AI to figure out the cause and how long I could safely keep driving. These mundane uses reveal the truth: AI is primarily a tool for augmenting our capabilities (with more extensive online research), not for completely abdicating our human judgment.
Harari’s alien rhetoric
I’m also a little puzzled by a rhetorical move Harari makes in the book. Harari points out that algorithms might not align with our envisioned goals because of their non-human characteristics. He notes that “…when a person is about to commit murder, the first step they often take is to exclude the victim from the universal community of humanity” (286). In other words, dehumanization is a prerequisite for violence.
He uses this to explain how Nazi leaders could embrace Kantian morals — by simply dehumanizing Jews and excluding them from the moral framework. Kant’s categorical imperative tests an act’s morality through the law of universalizability: what if everyone were allowed to do it? With murder, for example, if everyone murdered, no one would be left, so murder must be immoral. That is, unless those being murdered aren’t counted as human; then the act wouldn’t end in humanity’s desolation. By dehumanizing the victims, the perpetrators can make murder seem justifiable.
Harari suggests computers might also misinterpret ethics in similar ways. Even a perfect ethical rule is useless if the AI can simply edit the definition of who the rule applies to, just as the Nazis did. Hence there’s a high risk of AI misinterpreting the goals we give it.
But then Harari goes on to “de-technologize” LLMs in an explicit way, referring to them as “alien intelligences” for the second half of the book. He writes, “As for the term ‘AI,’… it is perhaps better to think of it as ‘alien intelligence.’ As AI evolves, it becomes less artificial (in the sense of depending on human designs) and more alien” (229). In other words, Harari himself is rhetorically excluding AI from the “human” category, which is the exact move he warns against. This seems like a contradiction: Harari warns about the dangers of dehumanization while simultaneously de-technologizing AI by labeling it alien. By his own logic, this rhetorical move might be setting us up to dismiss or misunderstand these tools rather than engaging with them as extensions of human intelligence.
Calling something an alien intelligence invokes Hollywood’s world-ending plots and seems like a scare tactic. In reality, LLMs are prediction machines built on troves of human input. They’re literally predicting the most likely next word based on patterns in human content, so in some ways, these prediction machines are more “aggregate human” than any single human. They’re hardly alien intelligences with some nefarious design to delude us toward ends of their own.
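To make the “aggregate human” point concrete, here’s a toy next-word predictor in Python. It’s a simple bigram counter over a made-up scrap of text, nothing like the neural networks behind real LLMs, but it shows the same underlying idea: every prediction is a statistical echo of the human-written content the model was trained on.

```python
# Toy next-word predictor: a bigram model built from a tiny "corpus" of human-written text.
# Real LLMs use neural networks over tokens, but the core idea is the same:
# predict the most likely continuation based on patterns in human content.
from collections import Counter, defaultdict

corpus = (
    "information always connects people . "
    "information always needs self-correcting mechanisms . "
    "self-correcting mechanisms keep information systems honest ."
)

# Count how often each word follows each other word in the human-written corpus.
followers = defaultdict(Counter)
words = corpus.split()
for current_word, next_word in zip(words, words[1:]):
    followers[current_word][next_word] += 1

def predict_next(word):
    """Return the most likely next word, based purely on the human text above."""
    candidates = followers.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("information"))      # "always" -- the most common human continuation
print(predict_next("self-correcting"))  # "mechanisms"
```

Scale that idea up to trillions of words of human writing and billions of parameters and you get something far more capable, but the predictions are still distilled from human patterns rather than conjured by an alien mind.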
What’s more, many artists, writers, and other content producers complain that AI models train on their content and essentially replicate that training data in the models’ outputs. If AI’s output is truly alien, how can it also be considered human creative theft?
The distinction between “alien intelligence” and “artificial intelligence” matters because how we conceptualize AI determines how we build SCMs around it. If we see it as alien, we might try to wall it off, like some kind of alien predator trapped in a cage. On the other hand, if we see it as aggregate human intelligence, we can focus on better feedback loops and transparency.
Conclusion
In general, as we’ve now read about half a dozen books in our AI Book Club, I’ve noticed that some authors seem to be drawn into speculative doom predictions when writing about AI. I wish they would ground themselves in more factual realities, even if boring, and be more aware when they’re using cherry-picked examples to support larger generalizations. It would also be nice to see more critics of AI use AI in more advanced ways to better understand it. (This was a major criticism some made about Karen Hao’s Empire of AI — it didn’t seem like she used AI enough to understand it.)
Despite these objections, I really like Harari’s emphasis on self-correcting mechanisms. Just as reading Rauch’s The Constitution of Knowledge inspired me to speak up more, with the understanding that competing voices that discuss and reason surface truth more reliably, seeing Harari champion SCMs makes me want to speak up in similar ways. Debate often strengthens truth. As a technical writer, I often have a lot of thoughts about the software and APIs I’m documenting. (Sometimes, like, what the heck are we doing this for!)
Embracing SCMs as a technical writer also means encouraging and listening to our users’ voices, recognizing that their feedback provides the essential self-correcting mechanism for the tech solutions we build. If we turn a deaf ear to users, software quality and user adoption plummet. Embracing SCMs, then, is about listening as much as it is about speaking up. Learning to do both seems like a noble result of adopting self-correcting mechanisms as a methodology for better outcomes.
Note: I used AI tools in the research and drafting of this post to help identify quotes and refine arguments and language in places.