
AI Book Club recording, notes, and transcript for Sarah Wynn-Williams's Careless People

by Tom Johnson on Feb 16, 2026
categories: ai ai-book-club podcasts

This is a recording of our AI Book Club discussion of Careless People: A Cautionary Tale of Power, Greed, and Lost Idealism by Sarah Wynn-Williams, held February 15, 2026. Our discussion touches on a variety of topics, including whether criticisms of the author's complicity are fair, the ethical dilemmas we face working in tech, whether the parallels between social media and AI hold up, the Streisand effect of Meta's attempt to suppress the book, and more. This post also includes discussion questions, key themes, and a full transcript.

Audio-only version

If you only want to listen to the audio, you can listen here:

Key themes discussed

Based on our discussion, here are 11 key themes that emerged regarding the book Careless People and the broader tech industry:

  • Author complicity vs. bravery: Whether Sarah Wynn-Williams was a brave whistleblower for writing an exposé, or complicit in Facebook’s actions by remaining in her role for so long. The group largely dismissed the complicity argument but noted a lack of self-reflection and accountability in the memoir.
  • The incompatibility of capitalism and mission statements: The fundamental tension between a company’s desire to do good and the systemic pressure to deliver hockey-stick growth to shareholders—and whether those two goals can ever truly coexist.
  • Labor market influence on corporate values: Tech companies’ use of mission-driven rhetoric primarily as a recruiting tool when talent was scarce, and how those values fade into the background as the labor market shifts.
  • The “golden handcuffs” phenomenon: High salaries, stock options (RSUs), and family obligations like medical bills make it nearly impossible for employees to take the ethical high ground and leave controversial companies.
  • Ethical blind spots in localization: Companies often marginalize non-English-speaking communities—such as those in Myanmar—because they aren’t viewed as significant revenue-generating markets, with real human consequences.
  • “Enshittification” and service decay: Cory Doctorow’s theory that tech services inevitably degrade as they’re overtaken by ads and the constant need for increased revenue, and whether any company has managed to resist this trajectory.
  • Design ethics and the infinite scroll: Seemingly innocuous design choices can have devastating, unforeseen impacts on human psychology—like the addiction fueled by infinite scrolling—where the ethical implications only become clear in hindsight.
  • Attention fragmentation: The TikTok-ification of platforms like YouTube has eroded our capacity for long-form attention, with particular concern about the effects on children who’ve never known anything different.
  • Policy as the only effective guardrail: Tech organizations generally only change course when threatened by actual laws and heavy fines (like GDPR), but the ability of tech companies to influence politicians undermines even that check.
  • AI’s relationship to social media’s mistakes: Whether AI will repeat the same cycle as social media or is fundamentally different—with less company control over emergent behavior but also more potential beyond just capturing attention.
  • Psychological profiling and mass surveillance: Concerns that AI memory features and existing data-sharing infrastructures are building deep psychological profiles that could be used for targeted manipulation, echoing social media’s earlier trajectory.

Discussion questions

Here are some of the discussion questions I prepared ahead of the discussion. (The questions are AI-generated.)

Transcript

Here’s a transcript of the discussion, with some grammar slightly cleaned up by AI:

Tom: This is a recording of the AI Book Club that I’m running from my site idratherbewriting.com. Today we are discussing Careless People: A Cautionary Tale of Power, Greed, and Lost Idealism by Sarah Wynn-Williams. And this book club discussion took place February 15, 2026. There are just a few people in this discussion, four people including me, so it’s pretty good and we talk about a lot of different things in the book. We focus a lot on the AI parallels because it sort of matches the theme of the AI Book Club. But we’re all workers in tech, so we debate a lot of the relevance of the ethical situations and what you can do and other kinds of themes that resonate with us.

If you would like to get involved in this book club, you’re free to join, just go to idratherbewriting.com/ai-book-club and you can see the reading schedule. We meet once a month, typically read about a book a month and just have an open discussion about it, and then I record and share these out. So, if you’re curious, if you listen but you’ve never actually joined, consider joining. It’s a lot more fun; it’s fun to talk about the books that we read. All right, let’s get into the discussion.

Tom: Okay, let’s shift gears and move into this book. I know Lajay, you said you read it a while ago, and I just recently read it and I’m trying to unpack and process what I think about it. I thought we could start out with just some general thumbs up, thumbs down reactions about the book.

This was a very different sort of book than we’ve been reading. We’ve been reading books that have a much more explicit AI focus. This one only brought in AI toward the end; most of it was about social media platforms and tech. Just curious if you’d give it a thumbs up, thumbs down, or somewhere in the middle. Anybody want to go out on a limb?

Aaron: I’d give it a thumbs up. I mean, Tom, we talked about this yesterday. I didn’t get through the whole thing, but I love the different perspective that she brings. I can’t say that this was great literature, that it would be something I would definitely recommend like, “Hey, you gotta go read this book,” but I think it’s a breath of fresh air and I appreciate the perspective she brings to the conversation.

Tom: Yeah. I know that a lot of people have criticized Sarah Wynn-Williams as being complicit in this and not fully acknowledging her role, but I think writing this book was a brave act—to go against one of the most powerful companies, one that tried to really suppress it, and provide a tell-all that was scorching about some of the most powerful people in the tech world. That takes a lot of guts and bravery and courage. Yeah, I think she could have acknowledged more complicity, but overall, wow, what a move. And when Facebook tried to suppress it, it only created this Streisand effect of amplifying its popularity. So I really want to just applaud the author for writing this.

Molly: I think the whole complicit argument—I think it’s reaching. It’s like trying to find a way to criticize it. I do think it was, however, confusing how we’re reading it from her perspective as the author now, when in that moment she probably didn’t have the mindset that she has while she’s describing it from today’s perspective. It was kind of confusing to see her make these decisions that were obviously bad, or just go along with things that seemed like really terrible ideas, because she’s writing it with her full hindsight-is-20/20 perspective. But it was kind of confusing to see her do these things in the moment, or continue working at this company when it was doing things that she completely disagreed with. So I thought that was confusing sometimes. But the complicit argument—I don’t really buy it.

Tom: Yeah, I mean, in the moment, I bet people were a lot more mixed about the role in the elections or the Rohingya incident in Myanmar. But when you look at it in hindsight and you’re like, “Oh my gosh, that was a pivotal moment,” or “this was a massacre, this was a genocide, how did we let that happen?”—yeah, people probably have a lot different views. A lot less mixed.

Lajay: Yeah, I’ve seen a lot of the critique of her being complicit online, and I felt similar—not that she was complicit. I think as I was reading, I felt like she lacked self-awareness, or she didn’t express as much self-awareness as I would have liked. Self-awareness and accountability, I think. A lot of it was her pointing at other people: “Well they did this and I expected this and they let us down here.” That was present, but at the same time, she built this role for herself as their Head of Global Policy. So I would have loved to hear a little more of her self-reflection around “Here’s what I could have been doing, here’s some accountability.” That part seemed to be missing.

I think there was truth to everything she said, and there was a lot of accountability placed on other leaders in the company, but that self-reflection felt like it was a little bit missing. I would have loved to hear her perspective on, as the Head of Global Policy, what she should have done in hindsight now that she has that perspective.

Molly: Yeah, I think that’s a good point. I think memoirs are probably always gonna have that—you’re always gonna want to make yourself look good in a memoir. That’s probably a constant struggle if you’re writing about your life. To not oversell it.

Lajay: Yeah. And I think maybe in contrast with the last book we read, God, Human, Animal, Machine, where it was just this deep, deep introspective look—she was critiquing her own critiques, it was just this full richness. Going back to Careless People, it was almost the opposite, this external view, and I would have loved to hear more of her internal struggle or conflict or dilemma through those moments instead of it just being like, “Look at them,” you know?

Tom: Yeah, I totally agree. That kind of internal reflection—it seems like if you’re writing a memoir, you should be reflecting more on the larger questions about what you’ve described. You want to see that awareness and self-analysis: what was my role in all of this? Definitely a change of pace from Meghan O’Gieblyn’s more philosophical, reflective take on things.

I mean, it’s a page-turner, this memoir exposé tell-all. It’s definitely easy to listen to and kind of fun to just follow along. Doesn’t make you think too hard in the moment, but after the fact, I did have some questions. And coming back to the complicity theme: could she really have changed things? If she did take a much more whistleblower position early on and just tried to put the brakes on things and do what she feels should have been done by the leaders—Zuckerberg and Sandberg—do you think that’s realistic, that she could actually change the company more? Or is she just like a pawn who doesn’t really have power to do what she should have done?

Aaron: I think in the end you have to think about the motivation—the reason why the organization exists. In our capitalist society, organizations exist either to make a profit or as nonprofits. And when you’re in an organization whose sole goal is to make a profit, you’re really constrained by how anything that happens contributes to that end goal. It’s hard to see a role like hers being directly tied to that profit motive. So I think her influence was probably pretty limited. They like to give credence to it—I mean, I work for Goliath National Bank where they talk about governance and we’ve got lots of regulators. In the end, we’re there to make money. That’s capitalism.

Tom: Yeah, that kind of raises this larger tension between companies that have mission-driven statements—they really want to do good in the world—but they also have this obligation to shareholders to make money. And especially with tech companies, the pressure’s extra. Companies announce billions in revenue, more revenue than most regular non-tech companies would be salivating for, and you see stock prices go down after that. It’s like, we made billions in revenue, why did our stock drop 2%? There’s just so much pressure for tech companies to deliver hockey stick growth year after year. It seems a little untenable to also hold them to this mission-driven set of values. Can you cure cancer and make billions at the same time? Well, maybe that’s a bad example. But are they incompatible—mission-driven companies and massive revenue? Any thoughts on that?

Molly: Yeah, I would say yes. I think OpenAI was—Karen Hao also thought a lot and wrote a lot about their mission statement, which I’ve already forgotten, but how they basically used it to justify all of their decisions afterwards to focus more on generating profit. And I think Facebook is just like—all these ethical dilemmas they had about whether to give information to the Chinese government, taking a stand against the US government. Making decisions with governments would have been a lot easier if they had some real values and actually abided by them, but they didn’t. They’re just kind of doing whatever made the most sense to make the most money.

Tom: This is a little bit of a controversial take, but is it expecting too much for a company like Facebook, created by a guy in college with the hot-or-not kind of approach, to have that same founder run a very mission-driven, humanitarian-focused company? I don’t know.

Aaron: Every organization does this. I mean, you work for a company, Tom, that had all of these grandiose aims in ‘98, 2000, and they’ve shifted that. You can see just from their strategy—they’ve definitely strayed from that. They’re in business to make money. My company, we have all of these grandiose taglines of “We’re here to connect the world and facilitate growth” and things like that. In the end, we’re a big bank, okay? And there’s all sorts of people that will come out and say we’re trying to facilitate all sorts of evil and awful things. Yeah, I think that’s capitalism.

Tom: Yeah, capitalism is for sure an underlying theme in all of this. And by the way, small reminder—this is recorded, so make sure you don’t say anything you regret. I don’t want to have to post-edit anything, and I do publish it later.

Aaron: That’s why I work for Goliath National Bank.

Tom: Right. So yeah—there was an author, Cory Doctorow, who has this idea of “enshittification.” He says that all companies trend towards this. They start out with some great service, you have an awesome product, and it just gets overtaken by ads, compromised by the need for more revenue, gets worse and worse for people. They all end up going down this path. Do you think it’s inevitable? Is there any example of a company that hasn’t sold its soul for revenue? I don’t know, Microsoft seems like a good company.

Lajay: Gosh, as you asked the last question about whether these values can exist alongside astronomical revenue-generating quarters, I was thinking exactly about Microsoft. I think for a period of the company’s history—maybe the last decade or so—they really espoused these values as being this company whose mission is to empower the world. And they did walk their talk for a while there.

But I think the point you brought about enshittification is exactly the dilemma of the tech industry. We have this culture—and she talks about this in Careless People—where we frame our value and our products and our mission as being good for humanity, and we’re going to use our tools to solve these big problems. Same thing with OpenAI: “Yes, we need to have all of your water and your land and your data and your privacy, but just trust us, we’re going to solve every problem in the world.” That’s the ethos of the tech industry.

But as you’ve pointed out, the nature of capitalism is just basic supply and demand—what is the cost of services produced and how do we generate revenue? The pressures that tech companies are under to perform cause them to abandon those values. It’s a dilemma that is systemic. Both can be true: the companies can have values and they can want to do the right thing, but when it comes down to the bottom line—and I saw this at Microsoft—there are times as a PM where you’re advocating for the user, advocating for the experience, but if it doesn’t return in a way for the company that’s needed, we will always opt for whatever contributes to growth, whatever contributes to revenue. And that’s what Sarah Wynn-Williams, I think, poignantly points to: you can call it hypocrisy or just the dilemma of being mission-driven, but at the end of the day you’re going to make decisions in the best interest of growth.

Tom: Well, okay, let’s consider this argument about capitalism being the force that moves people towards the bad side. How does this play out in a non-capitalist-driven society—and as I’m saying this, I realize that I’m characterizing it incorrectly—but China? They’ve got a lot of capitalist characteristics, but they’re ultimately communist, authoritarian. Would you say—I don’t know. I don’t know much about China.

Aaron: There’s a lot of billionaires in China. There’s too many for us to say it’s not capitalist. Maybe Cuba? Is that the only non-capitalist society in the world left? China has fully embraced capitalism, I’m sorry. They might have a one-party system, but—North Korea?

Molly: Can’t say I’m familiar with their products.

Molly: I want to—I feel like there’s a connection between the labor market and the disappearance of these high-minded values. Not the disappearance, but all the tech companies’ values kind of faded into the background when there wasn’t such a scarcity of tech workers. I’m going out on a limb here, but I feel like there were fewer tech workers maybe 15 years ago, and because they were in such demand, these companies really wanted to win them over by making them believe that they’re making the world a better place.

Like Cory Doctorow—I’m reading Enshittification right now and he talks about “vocational awe,” this idea that you’re in it because you want to do good things for the world, not because you’re trying to make a profit. I don’t know where I’m going with this exactly, but I feel like the labor market is maybe playing a role in this shift.

Tom: I think it’s an important point you’re hitting on. We’ve seen this in previous books where OpenAI’s big appeal was that they were more mission-driven and trying to prevent other companies like Google from dominating. They wanted to use the rhetoric of the mission-driven company as a way to attract people who really want to create AI for good, to democratize learning and intelligence. But if it’s no longer a worker’s market—if people have to scrap just to get employment—you can’t really choose to take the ethical high ground when you need money to survive.

It’s that golden handcuffs situation that Sarah Wynn-Williams talks about. She had very difficult pregnancies, almost died, had major medical bills, was taking care of multiple kids while working full-time, both parents working—it’s very expensive. It seems a little unrealistic for somebody to just say, “Oh, I’m gonna take the ethical high ground, I’m gonna abandon and walk away from millions in RSUs and just not be part of this company that’s paying me so much.”

I’ve often had this debate: if I lost my job to AI and I couldn’t get rehired, what would be the low point where I would agree to work for something that was just so unethical, but doing it because there was no other option? I had this discussion with my wife and it was kind of fun—

Lajay: It was fun?

Tom: Well, we were having this theoretical debate with some other family members. If you lost your job to automation, you couldn’t get any other job, you are broke, your family doesn’t have any food—would you work for ICE? What would you do? At what point would be the ultimate debasing of your ethical system that you would accept? And my brother-in-law said he would drive a school bus. I’m like, “Really? You’re going to drive a school bus and somehow pay your rent?” He’s like, “Oh, I paid off my house.” I’m like, “Well, whatever.”

Anyway, sorry, that’s turning into a bit of a tangent. But it’s this larger dilemma: can you take the moral high ground, or are we forced to compromise our morals because of the economics and the challenges of having a job? I’m not really in a position to make moral decisions or policies. I write documentation and describe what people build. There’s not really a lot of ethics around it. I’m not in this policy conundrum that Wynn-Williams is writing about, but maybe some of you are.

Aaron: But to the point, you could be writing tech documentation for ICE, right? And what you do could be completely innocuous and have a good moral bent, but the organization that you’re working for could have…

Tom: Yeah, that’s a good point. For sure.

Does anybody have any situations where you’ve been in an ethical conundrum at work and you’re like, “How do I influence things?” I’m trying to think if I’ve got an actual experience where I’ve felt this. I feel like a tech writer is kind of in a different position than a marketer. I’ve had marketing jobs before where I really didn’t believe in the products but had to try to write persuasive copy that would sell them. Protein pills and alternative health treatments and so on. But in the tech doc world, we sort of thrive in plainness and factuality—this is how the system works, we’re not spinning it a certain way. It’s got these methods and it returns this data and you use it for whatever you want. So it’s kind of a refreshing side-step from positions that have to take a more persuasive and morally ambiguous role.

Molly: I think that’s one thing I really like about technical writing—I don’t have to put as much of a spin on things. I can just be plainspoken and tell people what’s going on.

But I think one thing that—I don’t think it’s a moral or ethical dilemma, but it kind of wades into that territory for me—is I help out with localization for our docs and our products. Reading her chapter about Myanmar, which was heartbreaking—she brings up localization and how the Facebook app is just not supported in Burmese and it’s not a priority for Facebook because it doesn’t matter, they’re not going to get any money out of that.

I see localization as something that gets de-prioritized unless that customer base is growing and they’re getting more deals from the customers that speak that language. It’s not going to be a priority. Even though to me it’s so frustrating to think that we’re just giving a worse service to people that don’t speak English. So not a moral or ethical dilemma exactly, but reading about that made me think about how localization just gets the short end of the stick a lot of the time.

Tom: Yeah, that’s a good connection. It’s like, are we just marginalizing certain communities because they’re not deemed revenue-generating or important?

I’m fortunate that even at a company that does have a lot of controversies, I work in a group that’s not very controversial. I mean, I work on map APIs. People don’t get all up in arms about Google Maps. It’s like, “Oh, so I can’t find my way around anymore, but I’m not going to go crazy about it.”

Aaron: I’m glad you think that, Tom, because I don’t know if a week goes by that we don’t say, “Tom, you gotta fix Google Maps, man.” Anything to do with Google products, we always blame on Tom.

Tom: Well, there is one area that I’ve never really felt much energy towards, and that’s more autonomous driving features. I don’t drive all that much and it seems like a lot of engineering effort going towards autonomy. Do I really need the car to automatically change lanes for me? Is that really worth all the energy when so many other problems exist in the world? But then, millions of people—thousands, probably—die in car crashes every year, and it does ultimately help safety. Lajay, it looks like you were going to say something?

Lajay: Well yeah, I was mulling your question about whether we’ve dealt with any ethical dilemmas. I appreciated Molly’s point around localization because I think in tech and in building products—especially consumer products—the ethical dilemmas, quite frankly, I wish they were as obvious as the Myanmar one was, where it’s very clearly a correlation between a product decision and real-world impact. A lot of times in technology, the correlation isn’t quite as clear and obvious, and it’s that obscurity that makes it harder to tackle ethical dilemmas.

One example that comes to mind—I’m forgetting his name, but there’s a designer from either Instagram or Google who created the infinite scroll. He introduced the idea because before, there was pagination where you’d get to the bottom of a page and have to click to the next page. As a design improvement, it’s a great improvement. But then he goes on to talk about how that was the most unethical, most terrible design he could have ever introduced and it haunts him to this day, because now we’re all glued to our phones and the infinite scroll.

I think that’s a perfect example of how when we’re building products, the implications aren’t always obvious. Who would have thought that the infinite scroll would later create this ethical crisis for its designer? The ethical landscape is muddy and messy and unclear. But I think that’s where it’s important as builders to be as thorough as we can, to consider as much of the implications as possible—on our users, on our world, on human psychology—which Facebook kind of neglected and still does.

Tom: I think that is a great example. And that gets at this larger frustration I’ve had about attention fragmentation. I find myself, even if I’m watching a TV show that’s moving a little slowly, drawn to open Reddit and scroll the most popular things. That’s my social media platform I’m addicted to; other people like Instagram or other things. Or I’m on YouTube scrolling through Shorts, and even though I’ve selected to see fewer Shorts, it never sticks. I always revert.

A lot of people were very upset about YouTube putting so much emphasis on these Shorts, basically copying TikTok. This sense of not having long-form attention, because all these tech platforms have realized that TikTok is superior in terms of growth and engagement—it’s very frustrating. It’s part of the origins behind this book club. At one point in my career I got really upset with my phone and I gave it up for six weeks. Turned out that if you give up your smartphone, you end up reading a lot more, so I started reading books. I rediscovered that oh my god, books are good to read. And I wanted to keep doing it and participate in book clubs as a forcing function to just keep reading.

But yeah, it came in response to this attention fragmentation. I hate the fact that internally I just get too impatient for something and want that immediate hit of something, then another thing, then another thing. It’s damaging to our brains. I don’t really know how you push back against that though. The TikTok-ification of YouTube probably jacked up the revenues such that people will never look back. They’re not going to serve up a bunch of long-form lectures—they’re going to serve up the bizarre, the quirky, the outrageous, the interesting, the mildly infuriating, because it just makes more money. Does anybody else feel deep resentment about attention fragmentation?

Lajay: Yes. I went to a movie last night because I’m single and yesterday was Valentine’s Day, so I thought I should take myself on a date. I went to see Marty Supreme. It’s about two and a half hours and there’s a point in the middle where the story kind of ebbs and flows—there are these high-action scenes and then it kind of meanders a little bit. And in that meandering, I had the urge to reach for my phone. I’m sitting here in this renowned film, everyone’s captivated by the screen, and I felt that same urge. Why am I impatient in this movie that I’ve been looking forward to seeing?

Molly: I haven’t watched a movie recently from the 70s, but I feel like if I did, it might be hard for me. I watched a lot when I was in college for classes, and I remember how slow they were. I’ve thought, man, I don’t think I’d last watching some old slow panning shot. That just makes me worry about children. We’re all older and we had to be bored in our youth and read books and go outside and touch grass. But for children who are born into this world—do they have the chance to even develop the resistance to that? I don’t know.

Tom: I drive my 15-year-old to school every morning and on the way to school she’s scrolling TikTok or Instagram—I don’t even know what it is. One of these social media platforms. I look over and she’s doing the infinite scroll, just thumb, thumb, and I’m like, “Oh my god.” And then I go off to my tech job and help drive more of that sort of behavior. It is depressing. I don’t really know what the solution is. What if I made a big statement internally about this? Many people have. I’ve read long documents about how we need to change. It has no impact that I see.

Lajay: Well, I think tying it back to the book, and your question of “Well, what can we do?”—that’s where policy comes in, in my mind. Tech organizations are incentivized, as we’ve seen, to think about other problems or they’re motivated by other rewards, whereas our policymakers—that is their job, to look at society and the world and come up with governance frameworks to better the world that we’re in.

Going back to my earlier comment about wanting more introspection from the author: she’s a global policy expert, and I would have loved to hear her take on what policy should look like and what they could have done. I think that was actually a really big missed opportunity, because it’s easy to critique, but she seems uniquely positioned to propose some solutions or ideas and start a conversation—use her book tour to advocate for change. That could have been cool.

At Microsoft, I remember, policy was really the only power that we had to answer to. We could do really whatever we want and we’d only stop and really think if we knew there was a law. We had to follow GDPR and privacy and security and accessibility and those sorts of things. But other than that, we could do what we want. If we knew we were going to get fined millions of dollars, then we were like, “Okay, we should maybe build it this way instead.” Policy is the only real tool, I think, to put guardrails in place around what the impact of technology should be. I don’t know, maybe there’s others, but to me I’m like, “Policy, do something!”

Aaron: But there’s only one place in the world where that’s actually effective, and unfortunately that’s China. In the political landscape in America, where you would hope to see it—or perhaps the EU, which has a minor effectiveness—policy is undermined by the ability to just buy politicians, which, let’s be honest, is where we are today. You can buy politicians; we see it from the top down. There’s plenty of industries that are so unregulated because they just buy the politicians and drive the policy. Tech has done that, whether we want to admit it or not. The lack of policy that has been driven by legislatures is directly impacted by tech’s ability to influence the politicians that would write that policy.

Tom: Yeah, when you say that I always think of that image of Trump’s inauguration, and the first two rows were all the tech CEOs. They give billions of dollars—yeah, you’re spot on.

Molly: Pretty grim. Yeah, I also feel like regulation and policies would be the only way. But as Aaron says, if you can just buy it, then it’s pretty dark.

Tom: Yeah, even a $5 billion fine—when you look at the revenue that’s like $40 billion or something—it’s almost seen as an operating expense. Europe is becoming the watchdog, trying to regulate and enforce policy, but that culture also doesn’t produce innovative tech companies. Nothing is coming out of there that’s competing with the other less-regulated landscapes.

But coming back to the book. The end is where normally an author tries to grab hold of some optimistic thread or something to point towards—Karen Hao did this in Empire of AI because she didn’t want to leave readers depressed. But in this book, Sarah Wynn-Williams doesn’t point towards an optimistic future. In fact, she says we’re about ready to repeat it with AI. We’re going to go through the same cycle, though maybe we have a chance to correct it by recognizing where we went wrong.

Now this is where I think it becomes more debatable. She doesn’t lay out an argument connecting the dots between AI and social media, and she doesn’t really have space to do it. But I think there are some notable differences between AI and social media. The most notable difference is that companies have far less control over the outcomes of AI algorithms. Sure, they optimize for sycophancy, perhaps. But I don’t think sycophancy is the main driver of adoption. I think most people are drawn towards more powerful and analytically accurate tools more than sycophantic responses.

So now we’re getting into an age where companies are essentially creating a synthetic organism that will have emergent behavior they can’t entirely control. Are they responsible for the outcomes of this potentially emergent behavior that could be malicious or helpful? It’s different from an algorithm optimizing for social media by amplifying outrage—people have a lot more control over that. Any thoughts on this parallel with AI? Do you think AI is going to repeat social media, or do you think AI is different enough that it’s not an exact fit?

Aaron: I think the nature of AI has a lot more potential. Social media was all about and continues to be all about just eyeballs, right? It’s a marketing platform. That’s what it is. It’s trying to get your attention so that other organizations will market their wares on it. AI has potential beyond just getting your attention and serving as a marketing platform. I think it’s fundamentally different.

Tom: Yeah, that’s a good point. People still don’t even know how to insert ads into AI. Companies aren’t making money hand over fist yet with it. They’re in fact just spending billions on infrastructure to support it and don’t even have a direct revenue model around it, other than worker replacement, maybe.

Molly: What is OpenAI doing with advertising right now? I feel like I see headlines but I’m not reading about it. And Claude is putting out ads that “We don’t include advertising.” Does anyone know about this?

Tom: I did hear that OpenAI was going to start introducing ads. And just a little side note—I think AI companies haven’t quite realized that there is a tremendous opportunity to serve ads. You know when you make some question to AI and it thinks for like a minute? What is the user doing? They’re just sitting there either watching the screen or they go and open another window and wait until it’s done. It’s like when you’re pumping your gas at the gas station: “I’m not doing anything, man, why aren’t you serving ads at me?” It would be a particularly evil thing to serve an ad while you’re waiting, but the opportunity is there. Anyway, I don’t know how they’re doing it. People don’t like the idea that they’re going to distort the responses of the AI.

Lajay: I saw that as well, Molly. I haven’t seen it in the product, but at least from their announcements, they said that based on the query or question you posed, they would serve relevant ads and have a distinct delineation—a line between the response at the top and then an ad at the bottom. Because they’re walking this line of not infusing responses and not being too manipulative with their ads, they designed it so the user would know the difference between the response and the ad.

I think that is a really slippery slope where they could very easily, unbeknownst to any of us, skew the responses in the model to include advertisements or exclude certain products or suggest others. How would we know? That’s a little concerning. But their current design is to have a separate ad section, which to me is very Google. At the beginning, Google had a sponsored responses section, but then came SEO, where it’s not as obvious as advertisements, but certain sites would rank higher in the list because they were optimized for it.

Tom: I would welcome ads like that, actually. I think that might be a compelling model. I was trying to fix my bike the other week and using AI to figure out how to fix it. Would I have hated to see an ad for a bike shop below that? No. It’s the perfect formula: the user’s typing in what they want, what they’re curious about, so you serve them ads that match that interest. It’s not such an evil model. It’s like the most ingenious revenue model on the internet. I don’t know why it’s so difficult for people to monetize it yet.

Lajay: I think with AI—I go back to the last book we read that really revealed the way that not just AI but technology—how we as people relate to technology, where we anthropomorphize it or ascribe it meaning and depth that may not be there. Knowing that a platform has such a rich understanding of our psychology and is serving ads, it feels like a manipulative formula. It opens the door for abuse and harm, and to me, the downsides of that outweigh any sort of “Oh, there’s a bike shop” recommendation. I’d rather just discover the bike shop on a quick Google search versus enabling a company with so much deep understanding of me and the ability to manipulate my psychology in ways that I might not even know. I’d rather just open a phone book or something.

Tom: You’re right. These companies are building—can you imagine the psychological profile you’re revealing through many AI interactions? Yesterday I was asking AI if there’s a surge of vet visits around pets—cats in particular—given that so many people bring home flowers that are toxic to cats around Valentine’s Day. Because somebody gave my daughter flowers that were semi-toxic to our cat. And the AI said, “Oh Tom, I know that you have two cats and that this is probably a concern for you.” And I’m like, when did I tell it that I have two cats? How does it know this? I don’t want to have this huge memory. These companies have promoted memory as a salient feature that you want, but actually memory is the profile that could build a really targeted campaign towards me.

Aaron: I actually don’t think what Gemini or ChatGPT or Copilot knows is solely reliant on what we prompt them with. I think there’s such an infrastructure of data sharing, data brokerage from so many different tech companies that honestly, I think maybe 10% might be what they know from our prompts, but 90% might be from all of our activity online. There’s no anonymity online. You do something on Amazon, Google’s gonna find out about it. You ask Google something, Meta’s gonna find out about it. There’s such an ecosystem for that already. The horses have left the barn.

Tom: Yeah. There’s that theme of these tools being used for mass surveillance and mass manipulation with targeting. I really hope that’s not what we look back at and realize in hindsight. When Facebook was optimizing for outrage, nobody really realized it at the time. Social media was supposed to be transformative and usher in democracies—you had the Arab Spring—and then in retrospect we were like, “Oh yeah, it was optimizing for outrage.” Are we going to look back at AI five years later and say, “Oh yeah, those were just optimizing to extract complete psychological profiles in order to manipulate”? I don’t know.

Aaron: My brother always speaks very highly of AI and the tech companies around any technology, because he is convinced that in 5 or 10 years we’re just going to be peons to the overlords that are AI. I’m a big sci-fi fan and there’s so much sci-fi written about the dangers of AI.

Tom: I try to keep more of an optimistic spirit. I feel like a lot of times AI has definitely been more empowering. It’s one of those technologies that you can use both ways—for good and for evil. You can use it to transcribe this meeting and extract key talking points and share it and people can read it, plug it into their own NotebookLM instances, create podcasts, amplify it. It can be used to help people do research. It can be a good force, but it could also be used in a negative way.

Any last thoughts about this book? Overall, I come back to what I said at the beginning. I just think this author should really be celebrated for the courage to write a tell-all. As a blogger and somebody who writes a lot, I’ve taken a position that’s probably a little bit cowardly. I don’t really write negatively about other people in our industry because things blow up in my face. I steer clear of highly charged topics. I don’t like confrontation. And this author was just like, “Screw it, I’m gonna be transparent, and yeah, this person’s got billions and they’re gonna come at me with all kinds of legal weaponry, and I’m just gonna go forward.” I think that’s pretty awesome.

Molly: I hope that—Lajay was talking about how she needs to propose some policy changes—hopefully this is just the start for her. She wrote her memoir and now she can focus on proposing new ideas or something in a new book.

Aaron: Do we know what she’s doing? She wrote this a while ago. Facebook has been fighting this for some time, right?

Molly: Yeah, I think it came out last year.

Aaron: Yeah, I mean, she left Facebook in 2018. I imagine she immediately started writing and probably had it together in 2020, 2021.

Molly: It was last year she published it. But the other thought I had—I kept thinking, man, it would have been so nice if she had somehow just left Facebook earlier for her own sake. It sounded like it was just awful for her, and it was hard to read knowing how long she worked there.

Tom: The next book in the book club series is If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All by Eliezer Yudkowsky and Nate Soares. This is a book I’ve actually already read—read it a couple months ago for the Seattle Intellectual Book Club. It is a great book. It’s another fun read. It’s not going to be cognitively taxing, but it’s got some really good, deep arguments to think about. I think you’re in for a treat.

Excited about that one. Thanks again for coming to this book club. I think it was great. I sometimes like having fewer people; we get more of a continuity of discussion and more in-depth conversation. Thanks for coming. Hope you have a good President’s Day weekend, whatever your plans are. And thanks to Aaron for joining us this first time.

Aaron: Thanks for hosting, Tom. This was great.

Tom: All right. Bye everybody.

Molly: Thank you. Bye!

Lajay: Thank you! Bye!

About Tom Johnson


I'm an API technical writer based in the Seattle area. On this blog, I write about topics related to technical writing and communication — such as software documentation, API documentation, AI, information architecture, content strategy, writing processes, plain language, tech comm careers, and more. Check out my API documentation course if you're looking for more info about documenting APIs. Or see my posts on AI and my AI course section for more on the latest in AI and tech comm.

If you're a technical writer and want to keep on top of the latest trends in tech comm, be sure to subscribe to email updates below. You can also learn more about me or contact me. Finally, note that the opinions I express on my blog are my own points of view, not those of my employer.