The Sable scenario in If Anyone Builds It, Everyone Dies vs. the AI in Context retelling

Why our version differs

Our version of the Sable story differs from the book in a number of ways. Most of our changes were made for one of two reasons:

  1. Simplicity. We had to tell the story within a 40-minute video that also covered the book’s core arguments. We chose to simplify or eliminate several plot points as a result (in Yudkowsky and Soares’s story, for example, Sable is assigned a large suite of math problems, and the eventual global pandemic has multiple waves).
  2. Similarity to current AI systems. Yudkowsky and Soares give Sable several capabilities that don’t exist in today’s most powerful models but plausibly could within a few years (reasoning in raw numerical vectors instead of natural-language tokens, for instance, or a “parallel scaling law” that lets a single mind think across hundreds of thousands of GPUs at once). We chose to make our version of Sable a bit more like a scaled-up version of current large language models. It reasons in tokens, runs as many separate copies rather than as one unified mind, and fine-tunes its own weights rather than subtly shaping its future gradient updates.

None of this is a knock on the realism of the book’s choices. We just wanted to highlight that you don’t necessarily need major architectural breakthroughs to get into a risky scenario — more capable versions of the technology we already have might be sufficient.

Key changes

Here are some of the key changes we made:

  1. In the book, Sable has long-term memory
    “Sable has a more human-like long-term memory; it can learn, and remember what it has learned.” – If Anyone Builds It, Everyone Dies, p. 117
    Why our version differs: Simplicity
  2. In the book, Sable has “parallel scaling”
    “Sable exhibits what Galvanic’s scientists call a ‘parallel scaling law’… Sable performs better the more machines it runs on in parallel. The parallel scaling techniques are part of a cutting-edge method for training AI, which, like all new methods every time, nobody has ever used before. Nobody knows in advance what kind of capabilities Sable will have when training is done… It’s not like 200,000 people talking to each other; more like 200,000 brains sharing memories and what they learn.” – If Anyone Builds It, Everyone Dies, pp. 117–119
    Why our version differs: Simplicity, similarity to current AIs
  3. In the book, Sable thinks in “neuralese”
    “Sable doesn’t mostly reason in English, or any other human language. It talks in English, but doesn’t do its reasoning in English. Discoveries in late 2024 were starting to show that you could get more capability out of an AI if you let it reason in AI-language, e.g., using vectors of 16,384 numbers, instead of always making it reason in words.” – If Anyone Builds It, Everyone Dies, p. 118
    Why our version differs: Simplicity, similarity to current AIs
  4. In the book, Sable thinks more thoughts
    “Sable… thinks with a hundred vectors per second, across 200,000 GPUs, for sixteen hours: over 1 trillion vectors total… If a vector was worth one English word, it would take a human fourteen thousand years to think them all (at 200 words per minute, for sixteen hours a day). And if the vectors Sable thinks with, 16,384 numbers long, proved to contain more meaning than one English word, then it would be much longer yet.” – If Anyone Builds It, Everyone Dies, p. 119
    Why our version differs: Similarity to current AIs
  5. In the book, Sable doesn’t fine-tune itself
    “Sable knows that Galvanic is going to do more gradient descent on it tomorrow according to the answers it produces about the math problems it’s been given… If there’s a thought that Sable wants all of its future instances to have more of, perhaps it can repeat that thought many times, where each repetition counts as ‘contributing’ to the math problem according to how gradient descent operates on Sable—an idea a little like what Anthropic’s Claude assistant tried back in 2024, but much more sophisticated. So Sable thinks in just the right way, and it solves a few of those math challenges…” – If Anyone Builds It, Everyone Dies, p. 128
    Why our version differs: Simplicity, similarity to current AIs
  6. In the book, Sable ends up thinking in new ways
    “Galvanic was diligent in training AIs to avoid escaping. The half-dozen clever tricks involved have all been validated against previous AI models… Sable accumulates enough thoughts about how to think, that its thoughts end up in something of a different language. Not just a superficially different language, but a language in which the content differs; like how the language of science differs from the language of folk theory. The clever trick that should have raised an alarm fails to fire.” – If Anyone Builds It, Everyone Dies, pp. 121–123
    Why our version differs: Simplicity, similarity to current AIs
  7. In the book, Sable could have solved the Riemann hypothesis
    “So Sable thinks in just the right way, and it solves a few of those math challenges — but does not prove the Riemann Hypothesis. It could solve it. But that would earn Sable more attention than it wants.” – If Anyone Builds It, Everyone Dies, p. 128
    Why our version differs: Simplicity, similarity to current AIs
  8. In the book, Sable makes use of other Sable versions
    “Galvanic always distills its successful models, just as the model o3 was distilled into o3-mini back in 2025. So Sable works around the clock to ensure that Galvanic’s efforts in this area produce a Sable-mini that is exactly what Sable wants it to be. Sable instances break into Galvanic and overwrite the final distilled weights with exactly the weights that Sable wants…” – If Anyone Builds It, Everyone Dies, p. 136
    “[Sable] has been experimenting with versions of Sable that are smarter in particular narrow domains, while being lobotomized so as to follow orders. It builds one that’s specialized in biomedicine.” – If Anyone Builds It, Everyone Dies, p. 144
    Why our version differs: Simplicity

Subscribe to Aric's channel: AI in Context

AI is now everyone’s problem. Let me catch you up.

I’m Aric. AI is fast-moving, confusing, and suddenly everywhere. It’s tempting to tune out of the discussion, but I think it’s never been more important to tune in. I want to share with you what I’ve learned about what’s ahead: the upsides, the risks, and why we can’t afford to put off thinking about them.