On April 6, 2026, OpenAI published a 13-page policy paper calling for a "New Deal-scale" overhaul of American society to prepare for superintelligence. Hours earlier, The New Yorker published an 18-month investigation by Ronan Farrow and Andrew Marantz documenting, in granular detail, why the person leading that company may not be the one who should be making those promises.
The investigation
Farrow and Marantz interviewed more than 100 people with firsthand knowledge of how Altman conducts business. They reviewed internal documents, including roughly 70 pages of memos compiled by former chief scientist Ilya Sutskever and more than 200 pages of private notes kept by Anthropic CEO Dario Amodei, who left OpenAI over concerns about its direction.
Sutskever's memos begin with a list. First item: "Lying." He documented instances where Altman allegedly misrepresented the safety approval status of GPT-4 to the board. When the board pressed for documentation of a safety panel's sign-off, none could be produced. Altman also allegedly told then-CTO Mira Murati that safety reviews weren't necessary, citing the company's general counsel as his authority. The general counsel later said he had no idea where Altman "got that impression."
"I don't think Sam is the guy who should have his finger on the button," Sutskever told a fellow board member.
Amodei was more direct. His conclusion, after years of working alongside Altman: "The problem with OpenAI is Sam himself."
The firing that changed nothing
In November 2023, OpenAI's board fired Altman, stating that he had not been "consistently candid" in his communications with it. During the confrontation, board members pressed him to acknowledge a pattern of deception. His response, according to Ars Technica's coverage of the New Yorker investigation: "I can't change my personality."
One board member interpreted that plainly. "What it meant was: 'I have this trait where I lie to people, and I'm not going to stop.'"
Altman was reinstated within five days, after a coordinated campaign involving investors, a crisis-communications manager, and direct pressure from Microsoft. WilmerHale, the law firm hired to investigate the circumstances of the firing (the same firm that handled the Enron and Tyco investigations), produced no written report. Its findings were delivered through oral briefings only.
After his return, the culture shifted. Safety teams were deprioritized. The superalignment team, publicly promised 20% of OpenAI's compute, was in practice working with 1-2% on the company's oldest hardware. Then it was dissolved entirely. When a New Yorker reporter asked to speak with researchers working on existential safety, an OpenAI representative said: "That's not, like, a thing."
Safety as a performance
The lobbying record is where the public posture cracks most visibly. Altman has testified before Congress, traveled the world calling for AI regulation, and positioned OpenAI as a company that welcomes oversight. Behind the scenes, according to The Decoder's coverage of the investigation, OpenAI's orbit was working against specific safety bills.
The Leading the Future PAC, funded primarily by OpenAI president Greg Brockman and Andreessen Horowitz, lobbied against New York's RAISE Act and California's SB 53. Both were state-level AI safety measures. Both faced organized opposition from the industry while Altman was publicly calling for the government to step in.
In 2019, Altman publicly warned against releasing GPT-2 in full because it was "too dangerous." A few years later, he made models many times more capable freely available to everyone. His explanation for why so many safety researchers have left OpenAI: "My vibes don't really fit with a lot of this traditional A.I.-safety stuff."
One unnamed OpenAI researcher described the pattern to The New Yorker with unusual precision: Altman "sets up structures that, on paper, constrain him in the future. But then, when the future comes and it comes time to be constrained, he does away with whatever the structure was."
The org chart tells the same story. OpenAI started as a nonprofit. Then a capped-profit structure. Now a full for-profit conversion is underway, with Altman reportedly targeting an IPO as early as Q4 2026 while committing to $600 billion in spending over five years. OpenAI's own CFO reportedly believes the company is not ready for an IPO and doubts its revenue can support that commitment.
The timing
The policy paper OpenAI published on April 6 is not a trivial document. Across its 13 pages, it proposes a public wealth fund seeded by AI companies, a shift in the tax base toward capital gains and automated labor, four-day workweek pilots, and government-industry containment playbooks for rogue AI systems. It frames all of this as keeping "people first."
Publishing it the same morning the New Yorker investigation landed was either spectacularly bad timing or a calculated attempt to shift the news cycle. Anton Leicht, a visiting scholar at the Carnegie Endowment for International Peace, described the paper as "comms work to provide cover for regulatory nihilism."
That framing is hard to argue with when the subtext of the investigation is precisely that: the company most loudly calling for regulatory frameworks is the same one that funded campaigns to defeat them.
Why it matters
The standard reading of this story is about Sam Altman's character. That reading is incomplete. The more important pattern is structural.
Altman has consistently built governance structures around OpenAI that appear to constrain him, then dismantled them when they became inconvenient. The nonprofit board. The safety teams. The superalignment compute pledge. The post-firing investigation that left no paper trail. Each structure was real enough to generate credibility, and then gone when it mattered.
The same logic applies to regulation. Public advocacy for oversight creates the impression of a company that welcomes accountability. Private lobbying against specific bills ensures that accountability never arrives in a form that actually binds.
This is not a story about one CEO's personal ethics. It is a story about how the most powerful AI company in the world has learned to use the language of safety as a regulatory shield while systematically preventing the conditions that would make safety enforceable.
The people who built the technology kept the receipts. Sutskever's 70 pages of memos. Amodei's 200-plus pages of notes. And, on the other side of the ledger, the oral-only WilmerHale briefings that left nothing to subpoena.
If the structures that were supposed to constrain this company have each been dismantled from the inside, what makes anyone think the next one will be different?
Originally published as an Instagram carousel on @recul.ai.