The OpenAI Reckoning: I’ve Been Watching This Slow-Motion Car Crash for Years—And This Week It Finally Hit the Wall

By Carl Austin | November 20, 2025

I’m going to be blunt: when I saw those resignation letters drop on Tuesday night, I felt something close to grief.

Not surprise—God no, the surprise burned off sometime around the third “restructuring” in 2023—but genuine, gut-level sorrow. Because these weren’t anonymous Twitter egg accounts screaming into the void. These were the people who taught me, and half the field, how to think rigorously about alignment in the first place. And they just said, in public, that the institution they helped build is no longer salvageable.

That hurts. It should hurt all of us.

The Moment the Mask Slipped

Let me paint the scene for you the way it actually felt in the group chats I’m still in (yes, the invites never quite get revoked).

Monday: Sam on stage, smiling that practiced smile, unveiling the Superintelligence Governance Board like it's the second coming of the Charter. Tuesday night: seven letters, each more scalding than the last, hit the shared drive. By Wednesday lunchtime the Slack channel was a ghost town. By dinner, the phrase "security theater" was trending.

I've read every public resignation from OpenAI since 2021. These are different. They don't plead. They don't negotiate. They simply state, with exhausted clarity: We are done.

The Part Nobody Wants to Say Out Loud

Here’s the sentence that keeps me awake: the Governance Board was the biggest concession Sam Altman has ever made—and the safety people still walked.

Think about that. They were offered a seat at the absolute top table. Veto power. Twelve-month delays if needed. A direct line to the release valve on superintelligence itself.

And they looked at it, looked at who the board ultimately answers to, looked at the Microsoft cap-table looming in the background like a silent partner in the room, and said: No. This cannot be fixed from the inside.

That is not a policy disagreement anymore. That is a declaration of institutional bankruptcy.

The Contrarian Who Lives in My Head (and Is Getting Louder)

Of course there’s another voice—and it’s a voice I respect—that says: “Carl, you’re being dramatic. These people have been moving the goalposts for five straight years.”

And honestly? That voice has a point.

2019: "Solve alignment before AGI."
2021: "At least give us scalable oversight."
2023: "Fine, just give us a serious superalignment effort."
2024: "Okay, but keep the team together."
2025: "Actually, no internal structure under investor pressure can ever work; the only moral choice is total refusal."

At some point, yes, this starts to look less like principled caution and more like a labor theory of value for existential risk: only the safety researchers get to decide when humanity is ready, and under current economic conditions the answer is apparently never.

I feel the force of that argument. I really do. Because the alternative—watching the capability curve go parabolic while the people who understand the danger most viscerally refuse to engage—feels like watching someone burn down the fire station during a wildfire.

The Ghost of Los Alamos Keeps Whispering

Every time this happens I think of Leo Szilard in 1945, circulating his petition against using the bomb, collecting signatures, watching it get buried by Groves and Oppenheimer. Szilard was right about the moral horror. Oppenheimer was right that the world had already changed and someone would build it.

We never really resolved that argument. We just lived with the fallout—literal and moral—for eighty years.

The difference now is speed. Szilard had months to organize. We have weeks.

My Own Position, Since You Asked

I’m not neutral. I never really was.

I believe we are going to build superintelligence in this decade. I believe the default outcome, absent heroic effort, is catastrophic. And I believe the heroic effort is no longer possible inside the current OpenAI.

That leaves three paths:

  1. The company somehow rights the ship against all precedent.
  2. The accelerationists win and we roll the dice.
  3. The safety community finally admits that winning inside the existing labs is impossible and starts building the alternative institutions we’ve been talking about since 2016.

I am begging—actually begging—for door number three.

Because door number two is how civilizations die screaming, and door number one feels, at this point, like a fantasy we tell ourselves so we can sleep.

A Personal Note to the People Who Just Left

If any of you are reading this: thank you. Seriously. Your letters were acts of courage that most of us (myself very much included) have never matched.

But please—don’t stop here. The world needs the thing you build next more than it has ever needed anything. Not another scaling lab. Not another critique from the sidelines.

Something new. Something that proves superintelligence can be pursued without selling your soul to the growth gods. I don’t know what it looks like yet. But I know you’re the only ones who can invent it.

And if you do, I’ll be first in line to fund it, cheer it, defend it.

Because this isn’t just about OpenAI anymore.

It’s about whether humanity gets to keep a seat at the table we’re building.

And right now, that seat is looking awfully empty.

Carl Austin is an independent writer covering the intersection of technology, ethics, and the future of intelligence. You can find his archive at ThinkForgeHub.com.