Book review of Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI by Karen Hao
In my AI Book Club, we recently read Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, by Karen Hao. In this post, I'll briefly share some of my reactions to the book. My review focuses on Hao's treatment of the mission-driven ideology around AGI, which explains many of the motivations of the researchers and employees at OpenAI.
- Introduction
- Clarifying people and timelines
- Boomers, doomers, and Altman
- The empire argument
- The author’s position on AGI and AI
- Wrestling with moral conflict
- Alternative trajectories for AI development
- Conclusion
Introduction
Hao’s book is a tremendous piece of journalism and research. Across 420 pages, Hao provides an inside look at OpenAI in more detail than is available anywhere else. The closest similar book (that I’ve read) is Parmy Olson’s Supremacy: AI, ChatGPT, and the Race that Will Change the World, which focuses on the race to AGI and profiles Altman from OpenAI and Hassabis from DeepMind in contrasting ways. However, Hao’s book has much more depth, reporting, and first-person accounts, and could be characterized as more of an exposé on OpenAI and Sam Altman. In fact, some have suggested Altman is a stand-in for Silicon Valley in general, in the way he espouses a “scale at all costs” ideology (see Empire of AI, with Karen Hao, from the Time to Say Goodbye podcast). I found the various interpretations of AGI, both by the figures in the book and by Hao herself, to be its most salient and interesting theme.
If you’d like to listen to our book club discussion, see Recording of AI Book Club discussion for Karen Hao’s Empire of AI. The discussion provides more balance across different viewpoints and takeaways. You’re also invited to join the AI Book Club: A Human in the Loop and be part of the next discussion.
Clarifying people and timelines
First, Hao’s book clarified many concepts, people, timelines, and movements for me. Although I felt the book could have been shorter, I appreciated the detail and reporting. For example, she describes Musk’s role early on and his reasons for leaving: mainly, the power struggle with Altman for control and their differing views on the direction the company needed to take to compete against Google. Musk’s xAI was later born from this initial disagreement.
Hao also describes Dario Amodei and his journey away from OpenAI (along with his sister and several other senior OpenAI employees) due to their differing views on AI safety, particularly concerning the commercialization of GPT-3, and Amodei’s founding of Anthropic (which, interestingly, also fails to provide a model for AI development that would satisfy Hao, due to the ends-justify-the-means mentality that still permeates the company).
Hao also explains how Ilya Sutskever, chief scientist and co-founder, left over similar safety concerns to form his own company, Safe Superintelligence Inc., and how Mira Murati, former CTO, left to form Thinking Machines Lab.
I hadn’t realized how many AI superstars were originally consolidated within OpenAI, only to splinter off into various AI companies.
Boomers, doomers, and Altman
Hao contextualizes the struggles within OpenAI as an ideological conflict between Boomers (AI accelerationists) and Doomers (AI safety advocates). Specifically, many Doomers were concerned about having Sam Altman lead the company that ushers in artificial general intelligence (AGI), especially given his marginalization of safety and alignment concerns. AGI could be a powerful, transformative technology similar to the atomic bomb, giving the company that develops it ultimate power. In that scenario, where the company leader isn’t just a CEO but someone making god-like decisions that will shape society, many employees felt Altman wasn’t the right person for the job.
Hao explains that the conflicts, defections, and other tensions brewing inside OpenAI were fueled by Altman’s untrustworthy character. She describes Altman as an astute psychological observer who listened carefully to people to understand what they wanted, then promised to deliver on those wants, only to make the same promises to others with opposing views. Those who opposed his agenda, he would quietly work to eliminate from the company.
The board’s famous comment that Altman was “not consistently candid” (made during his temporary firing) becomes fully understandable in the context of the many deceits and false promises described in the book. For the most part, Altman aligns more with the commercial ambitions of the Boomers than with the Doomers, but Hao says that despite interviewing 90+ people, no one could definitively say what Altman actually believes, since he seems to align with opposing views depending on whom he talks to.
Hao shows how Altman plays whichever AGI ideology card suits his agenda. For example, when testifying before the Senate, Altman focuses on the long-term existential risks of AI and the threat of an authoritarian state like China developing AGI first; in so doing, he deflects attention from the immediate damage OpenAI is creating through natural resource extraction, human labor exploitation, consolidation of knowledge assets, and more. This insight, that people can be steered through their orientation toward AGI (fear, fascination, or other attractions), is a point Hao returns to frequently in the book.
The empire argument
Hao’s central argument is that OpenAI is following the same patterns as empires of old. She describes four empire-building patterns:
- Ideological justification: Using a “civilizing mission” (for example, “benefiting all of humanity”) and a “good empire vs. evil empire” narrative (such as a race against China) to justify their actions.
- Resource seizure: Claiming resources that aren’t their own, which includes data from the internet as well as natural resources like water and energy.
- Labor exploitation: This includes both the low-wage “ghost work” of data annotators and content moderators in places like Kenya, and the explicit goal of creating labor-automating technologies.
- Knowledge monopolization: The concentration of top AI researchers within corporations, which filters public understanding of the technology through a corporate lens.
Some have criticized her book for over-simplifying events and movements, arguing that AI colonialism is just the reality of digital capitalism. And the empire argument, despite giving the book its title, isn’t one that Hao draws out in detail. The only specific parallel she draws is with the British East India Company, which transitioned into an imperial power as it gained more political and economic leverage. Even with this example, there are obvious differences between the ruling empires of previous eras and today’s AI empires, such as the lack of military-enforced control, the companies’ subjection to national laws, and more. You won’t find more drawn-out comparisons with the Roman, Mongol, Ottoman, or Persian empires, for example. The empire comparison is painted in broad strokes only.
Many stories and events were selected to help make the larger empire argument. For example, an entire chapter about Sam Altman and his sister Annie serves as a metaphor for the way Silicon Valley treats the surrounding world, amassing wealth while those around it suffer in poverty and desperation. In this sense, the focus expands beyond Altman and OpenAI to a critique of Silicon Valley in general and all the AI-focused companies there.
The author’s position on AGI and AI
Because this is a journalistic work, the author stays mostly behind the scenes, describing events and details in a matter-of-fact way. However, as someone who prefers personal essays, I wished the author were more present in the writing, sharing more of her own thoughts, especially any internal conflicts she wrestled with.
As the empire theme came into sharper focus, it reminded me of something I learned in my literary non-fiction MFA program 25 years ago: any non-fiction work is a selection of details and observations in service of the story you want to tell. When we write, we include or exclude details to tell a particular story, so even the most objective, exhaustive reporting is a selection of facts in support of that story.
Granted, without this story, the author is merely collecting details, so we want the author’s interpretation. At the same time, we don’t want the author to force complex events into a neat, simple interpretation and narrative (the “narrative fallacy”). Hao’s book is mostly journalistic reporting, but it’s also a pointed argument and critique of Silicon Valley. It follows a familiar journalistic arc: holding the powerful accountable and highlighting injustice and suffering among the vulnerable.
Hao doesn’t necessarily dismiss the transformative potential of AGI, but she critically examines how the rhetoric and ideology surrounding it are used as a tool. Instead of debating the likelihood of AGI’s arrival, Hao’s focus is on how Sam Altman and OpenAI leverage the AGI mission to justify their actions and consolidate power. She highlights that the “AGI for the benefit of all humanity” mission, while potentially starting out as a sincerely held belief, has become a potent formula for empire-building. Hao explains how this mission is used to justify OpenAI’s actions:
Six years after my initial skepticism about OpenAI’s altruism, I’ve come to firmly believe that OpenAI’s mission—to ensure AGI benefits all of humanity—may have begun as a sincere stroke of idealism, but it has since become a uniquely potent formula for consolidating resources and constructing an empire-esque power structure. It is a formula with three ingredients:
First, the mission centralizes talent by rallying them around a grand ambition…Most consequentially, the mission remains so vague that it can be interpreted and reinterpreted—just as Napoleon did to the French Revolution’s motto—to direct the centralization of talent, capital, and resources however the centralizer wants. (400)
Hao’s critique isn’t that AGI is a farfetched notion, but that its narrative is used to manipulate and distract. By focusing on the long-term, existential threats of AGI, figures like Altman can deflect attention from the immediate, tangible harms caused by AI development, such as resource extraction, labor exploitation, and knowledge consolidation. Hao argues that whether AGI is a real possibility or not, its slippery and vague definition allows for an “ends-justify-the-means” mentality, which she sees as a hallmark of an imperialistic approach to technology.
Hao’s overall position on AI is a central theme of the book, which she clarifies near the end:
The critiques that I lay out in this book of OpenAI’s and Silicon Valley’s broader vision are not by any means meant to dismiss AI in its entirety. What I reject is the dangerous notion that broad benefit from AI can only be derived from—indeed, will ever emerge from—a vision for the technology that requires the complete capitulation of our privacy, our agency, and our worth, including the value of our labor and art, toward an ultimately imperial centralization project. (413)
In other words, Hao makes it clear that she isn’t opposed to AI as a technology. Instead, the book’s exposé stems from her deliberate journalistic choice to critique a specific model of AI development—one that’s centralized, extractive, and harmful. Hao’s position is that there is a different, more ethical path for AI. She spends her final chapter outlining this alternative vision, providing examples of projects that are smaller in scale, community-focused, and designed to provide specific benefits to the people who contribute their data and resources.
Wrestling with moral conflict
Hao’s argument that AI companies function like empires does seem to have a degree of legitimacy. It leaves one, especially those of us working in tech, with a moral conflict. For me, wrestling with that conflict means returning to the question of AGI’s legitimacy and the scale of its societal impact.
Overall, it’s hard to evaluate AI in the present moment. We might look back in 10 years and wonder how we could all have been duped by such silly and farfetched notions. Or we might look back and realize that we failed to take safety and alignment concerns seriously while it was still possible to act, before economic and societal collapse. But in the present, we don’t know. Hao explains how the definition of OpenAI’s mission keeps changing:
… the creep of OpenAI has been nothing short of remarkable. In 2015, its mission meant being a nonprofit “unconstrained by a need to generate financial return” and open-sourcing research, as OpenAI wrote in its launch announcement. In 2016, it meant “everyone should benefit from the fruits of AI after its [sic] built, but it’s totally OK to not share the science,” as Sutskever wrote to Altman, Brockman, and Musk. In 2018 and 2019, it meant the creation of a capped profit structure “to marshal substantial resources” while avoiding “a competitive race without time for adequate safety precautions,” as OpenAI wrote in its charter. In 2020, it meant walling off the model and building an “API as a strategy for openness and benefit sharing,” as Altman wrote in response to my first profile. In 2022, it meant “iterative deployment” and racing as fast as possible to deploy ChatGPT. And in 2024, Altman wrote on his blog after the GPT-4o release: “A key part of our mission is to put very capable AI tools in the hands of people for free (or at a great price).” Even during OpenAI’s Omnicrisis, Altman was beginning to rewrite his definitions once more. (401)
AGI is itself a slippery, hard-to-pin-down concept. We don’t know exactly how AI will transform society as the models get more and more intelligent. As a technical writer, I’m constantly wrestling with this question as I see so many aspects of my job being automated. I’ve switched from writing tech docs to steering AI to write tech docs (and then often iterating and editing). I’ve been a tech writer for 20+ years and have never seen such a transformative shift in the profession.
Many AI experts predict AGI will arrive by 2029; Ray Kurzweil, for example, in The Singularity Is Nearer, points to compounding acceleration and the general-purpose nature of the technology. Does that mean in several years I’ll be out of work, or in another profession entirely? That kind of existential career angst hangs over us. In this position, I’m much less inclined to see the missions and ideologies of AI companies as mere manipulation, a means of extracting and exploiting the world around them for empire-like gain. I’m also more understanding of AGI’s slippery definition, given how hard it is to predict even its impact on my own profession five years from now.
Alternative trajectories for AI development
At the end of the book, Hao provides an optimistic chapter describing an alternative vision for AI. She argues for a model that’s smaller in scale, built around community benefit from the data contributed, that relies on better algorithms rather than sheer scale, and that pursues outcomes providing specific benefits to contributors in return. She cites a couple of examples, one being a community using AI to help preserve the Māori language by building a speech-recognition tool to process audio archives. The book tries to end on this optimistic note, arguing that AI technology doesn’t have to follow its current trajectory (as charted by CEOs like Altman and Amodei) but could be pursued in a more ethical way that doesn’t exploit resources and people at scale.
I appreciated the attempt to end on a note of optimism, but I wasn’t persuaded by this last chapter. OpenAI started out with similarly altruistic aims, with goals to let everyone reap the benefits of AGI rather than concentrating those benefits in small circles of Silicon Valley elites. This altruistic mission is what attracted so many researchers and experts to OpenAI in the first place. But despite the altruistic goals, you need money for AI’s massive infrastructure and compute needs, and you need enormous troves of data to train the models. As Olson describes in Supremacy, the labs basically had to compromise these ideals, making deals with Microsoft, Google, and other big tech companies to accomplish their AI advancements.
Anthropic, despite its emphasis on safety and alignment, seems to follow its current trajectory because there are few alternatives, especially with so much competition, including from China. In short, it’s hard to see the smaller, niche-focused model winning out, especially as more people turn to AI tools for general-purpose information rather than to highly specialized models. Hao offers little acknowledgement that OpenAI started out with more altruistic ambitions, or analysis of why those ambitions proved futile in this space.
Conclusion
Overall, Hao’s book is well worth reading. It’s not a book that will leave you with warm fuzzies about the AI industry or suggest that infinite possibilities are ready to be unlocked (cures for cancer, climate change solutions, and so on). But it will ground you in the real concerns about AI’s potentially damaging impact on the many societies and environments outside of Silicon Valley, the same societies and environments that provide the support and resources needed for AI’s development.