With input from Axios, Bloomberg, Business Insider, and the Wall Street Journal.
Sam Altman isn’t just building powerful AI. He’s sketching out how the entire system around it might need to change – and fast.
In a new policy blueprint from OpenAI, Altman lays out something closer to a political manifesto than a tech memo. The message is clear: what’s coming with AI isn’t incremental. It’s a shock big enough to demand a reset on the scale of the New Deal.
He’s not speaking in hypotheticals either. In a recent interview, Altman said superintelligence – machines that can outperform humans across most tasks – is getting close. Close enough that governments should already be preparing for fallout: job losses, cyberattacks, even biological threats.
That last part isn’t abstract. He warned that within a year, AI could enable serious cyber incidents. And on the bio side, tools designed to cure diseases could just as easily be misused to create new ones. The upside and the risk are arriving together.
So what does he want to do about it?
The ideas are wide-ranging, and in some cases, pretty radical.
At the center is a proposal for a public wealth fund. Think of it as giving every citizen a stake in the AI boom – profits from the technology flowing back to the public through a government-managed investment pool backed in part by AI companies themselves.
Then there’s the tax overhaul. If AI eats into jobs, payroll taxes won’t cut it anymore. The plan floats shifting the burden toward corporate income, capital gains, and even taxes tied directly to automated labor – basically, robots paying into the system they’re disrupting.
Work itself could change shape too. The blueprint suggests experimenting with a four-day workweek at full pay, letting productivity gains from AI translate into more free time instead of just higher output.
Another idea: access to AI as a basic right. Not a luxury for big companies, but something widely available – schools, small businesses, communities that usually get left behind.
Some parts of the document read more like contingency planning for worst-case scenarios. OpenAI openly discusses the possibility of advanced systems that can’t easily be shut down – autonomous, self-replicating, hard to contain. The answer there isn’t purely technical; it requires coordination with governments.
There’s also a kind of automatic safety net built into the thinking. If AI starts wiping out jobs at scale, support systems – unemployment benefits, wage insurance – would kick in based on preset triggers, then fade out once things stabilize.
Altman frames all of this as a starting point, not a finished plan. He wants debate, input, pushback. But the urgency is baked into every section.
Of course, there’s another layer here.
OpenAI is one of the companies driving this shift. Its models are among the most advanced and widely used. Laying out a roadmap for regulation also positions the company as a responsible actor – the one that saw the risks coming and tried to get ahead of them.
Altman acknowledges the weight of that role. He insists no single person should shape decisions that affect everyone, even as he helps steer one of the most powerful AI efforts in the world.
Still, the underlying message lands either way.
The people building the future of AI are openly saying the current economic system may not hold up under what’s coming next. Whether you see that as foresight or strategy, it’s a rare moment: a tech leader arguing that the technology he’s racing to deploy could force a rewrite of capitalism itself.