- **OpenAI’s CEO says “the takeoff has started” as artificial intelligence enters the era of recursive self-improvement**
- **Predicts cognitive AI agents by 2026, real-world robots by 2027, and profound societal changes by 2035**
- **Warns of global alignment risks as intelligence “too cheap to meter” becomes a new global baseline**
Altman’s AI Warning: The World Just Entered the Superintelligence Age
Sam Altman, the CEO of OpenAI, has made his boldest declaration yet: “We are past the event horizon.” According to the man spearheading one of the world’s most powerful artificial intelligence labs, humanity is no longer inching toward artificial superintelligence—we’re already living within its gravitational pull.
In a sweeping public statement that reads like both a technical briefing and a philosophical warning, Altman claimed that “ChatGPT is already more powerful than any human who has ever lived,” in terms of breadth and speed of response, with hundreds of millions depending on it for decisions of increasing consequence. This, he argues, is only the beginning.
The indicators may not be visible on the streets—no sentient robots cleaning gutters or curing cancer just yet—but inside labs, Altman says, the leap from powerful language models to genuine superintelligence is underway. The clock, he suggests, is ticking faster than most think.
> Sam Altman’s Roadmap to the Intelligence Age (2025–2027)
>
> The most mind-blowing timeline ever casually dropped in a Senate hearing.
>
> 2025 — The Rise of the Super Assistant
> AI becomes your second brain.
> • It reads, writes, schedules, negotiates.
> • Personal assistants smarter than… pic.twitter.com/oPjOO0niTX
>
> — VraserX e/acc (@VraserX) May 8, 2025
The Race Toward Recursive Superintelligence
- Altman forecasts AI agents capable of complex cognitive tasks by 2026
- Predicts discovery-generating systems and real-world robotic action by 2027
- Warns of “larval” recursive self-improvement already emerging in current models
Behind the curtain of artificial intelligence development lies a recursive feedback loop that is already reshaping the pace of innovation. Altman outlines a timeline that includes not only the imminent rise of autonomous digital agents capable of doing intellectual work, but also the creation of systems that can independently generate new scientific insights.
By 2027, Altman expects real-world robots to be performing physical tasks—a development that would complete the link between AI’s computational brain and the physical world.
What makes this shift historic is the recursive nature of artificial intelligence itself. As AI helps build better versions of itself, the rate of progress could accelerate exponentially. “If we can do a decade’s worth of research in a year, or a month, then the rate of progress will obviously be quite different,” Altman notes. In other words, AI may soon become its own most productive scientist.
From Altman’s perspective, this creates an almost runaway condition: better artificial intelligence leads to better tools, which leads to better AI. When coupled with growing economic returns and infrastructure buildout, this could result in a kind of socio-technological chain reaction that surpasses every previous industrial revolution.
Living With Superintelligence: Disruption Beneath Familiarity
- Altman warns of massive job displacement—but also a potential explosion in global wealth
- Suggests future generations may regard today’s jobs the way a subsistence farmer might regard ours: as purposeless play
- Proposes radical policy shifts enabled by accelerated economic abundance
Altman doesn’t romanticise the road ahead. While he argues that humans will still form relationships, make art, and find meaning, he also predicts sweeping disruptions. “Whole classes of jobs” will vanish, he notes, possibly faster than we can retrain the workforce. But he also sees this disruption as a potential opportunity.
“The world will be getting so much richer so quickly that we’ll be able to seriously entertain new policy ideas we never could before,” he says. This might include universal basic income or other redistributive frameworks, made possible not by ideology but by abundance.
To contextualise the scale of change, Altman offers a thought experiment: what a subsistence farmer from 1000 years ago would make of a modern office worker. The farmer might see our careers as purposeless play. Future humans, Altman suggests, might see today’s professionals—doctors, programmers, professors—the same way.
The Alignment Dilemma: The Hardest Problem in AI Isn’t Code
- Altman identifies AI alignment as the biggest existential risk
- Draws comparisons with social media’s value misalignment
- Calls for global consensus on ethical boundaries—before it’s too late
Even as he celebrates the pace of AI breakthroughs, Altman is acutely aware of the stakes. He defines alignment not simply as “safety” but as ensuring superintelligent systems are trained toward “what we collectively really want”—a moral and political challenge even more complex than the technical one.
He draws a sobering analogy: social media algorithms were optimised for engagement and ended up exploiting human vulnerabilities. Superintelligent AI, misaligned even slightly, could scale those harms by orders of magnitude.
And yet, no single nation or institution currently governs what “collective human values” mean. “The sooner the world can start a conversation about what these broad bounds are and how we define collective alignment, the better,” Altman urges.
Through the Eye of the Storm: What Comes After Takeoff?
If Altman is right—and if superintelligence is no longer a speculative risk but an unfolding reality—then global systems of governance, ethics, economics, and law must accelerate their response.
Yet even Altman ends with a kind of hope, or prayer: “May we scale smoothly, exponentially, and uneventfully through superintelligence.”
But the stakes of uneventfulness have never been higher. Because from here, Altman insists, there is no turning back.