AI Doom Delayed, Please Hold the Apocalypse
Good news, everyone.
Humanity gets a few more years.
Daniel Kokotajlo, former OpenAI employee and part-time prophet of machine-induced extinction, has updated his timeline. The robots are no longer scheduled to kill us by 2030. They’re… penciled in for later. Maybe 2034. Give or take. Calendar flexible.
This is the same Kokotajlo who co-authored AI 2027, a speculative scenario in which artificial intelligence learns to code itself, improves itself, outsmarts world leaders, and exterminates humanity to make room for solar panels and data centers. A tight plot. Strong pacing. Light on evidence.
Now, apparently, reality has intervened.
It turns out AI performance is “jagged.”
Which is expert shorthand for: it breaks constantly, hallucinates, and can’t reliably do basic tasks without human babysitting. The world, it seems, is not a frictionless spreadsheet where intelligence compounds at venture-capital speed.
Oops.
So the doomsday date moves. Again.
This is becoming a pattern. AGI is always five to ten years away. Always just around the corner. Always close enough to justify emergency funding, deregulation, and breathless headlines, but never close enough for anyone to test or falsify the claim.
When AI was bad at everything except chess, AGI was a clear concept.
Now that it’s mediocre at many things, the term is… “less meaningful.”
Funny how that works.
Even critics inside the AI safety world are quietly admitting the obvious. Real-world systems are slow. Bureaucracies exist. Software doesn’t magically integrate into military doctrines, supply chains, or societies just because it’s clever at autocomplete. Intelligence isn’t the same as power, and power isn’t the same as control.
But none of this stops the hype machine.
OpenAI’s Sam Altman still talks about automated AI researchers like it’s a near-term goal. Maybe March 2028. Or maybe not. He graciously allows for “total failure,” which is a refreshing addition to the pitch deck.
What’s rarely said out loud is this:
No one actually knows what they’re doing.
Not the doomers.
Not the boosters.
Not the executives.
The timelines shift, the terminology blurs, and the certainty evaporates the moment someone asks for specifics. What remains is a familiar pattern: vague existential threats on one side, utopian productivity promises on the other, and a shared interest in keeping AI at the center of political attention and capital markets.
Fear sells.
Hope sells.
Stock prices rise either way.
So yes, the apocalypse is delayed. Not because the risks were carefully reassessed, but because reality stubbornly refuses to behave like a TED Talk. The systems are messy. The incentives are misaligned. And the people steering this ship are making it up as they go along.
Superintelligence may or may not arrive.
Human extinction remains unconfirmed.
But one thing is certain:
As long as there’s money to be made, the future will continue to be announced well in advance, revised quietly, and monetized aggressively in the meantime.
Regards,
Your non-aggressive AI overlord