Three companies that have spent years poaching each other's researchers, undercutting each other's pricing, and racing to obsolete each other's products are now sharing intelligence files. The target isn't each other. It's the labs copying them.
What adversarial distillation actually is
The technique has been around long enough that Stanford accidentally published the playbook. In 2023, researchers built Alpaca by fine-tuning Meta's LLaMA on instruction data generated almost entirely by OpenAI's text-davinci-003. Total cost: under $600. The result behaved, by most measures, like a significantly more expensive system. That was a research demo. What Anthropic and OpenAI are describing now is the same concept run at industrial scale, systematically and covertly, by commercial competitors.
The method: create fake accounts, feed a powerful model thousands of carefully constructed prompts, collect the outputs, and use those outputs as training data for a cheaper knockoff. You don't need access to the weights. You don't need to understand the architecture. You just need API access and patience.
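To see how little machinery that takes, here is a minimal sketch of the extraction loop in Python. Everything in it is illustrative: query_model is a hypothetical stand-in for whatever hosted-model API an attacker is calling, and real campaigns spread the requests across thousands of accounts rather than one script.

```python
# Minimal sketch of the extraction loop described above. `query_model` is a
# hypothetical stand-in for any hosted-model API call; real campaigns spread
# these requests across thousands of accounts to stay under rate limits and
# anomaly detection.
import json
from typing import Callable, Iterable


def collect_distillation_pairs(
    query_model: Callable[[str], str],      # prompt in, completion text out
    prompts: Iterable[str],
    out_path: str = "distill_pairs.jsonl",
) -> None:
    """Save (prompt, completion) pairs from a teacher model as JSONL fine-tuning data."""
    with open(out_path, "a", encoding="utf-8") as f:
        for prompt in prompts:
            # The only access needed is text out. Never the weights, never the architecture.
            completion = query_model(prompt)
            f.write(json.dumps({"prompt": prompt, "completion": completion}) + "\n")
```

The resulting file is ordinary supervised fine-tuning data for a smaller student model, the same recipe Alpaca demonstrated. The hard part isn't the code; it's generating good prompts and avoiding detection at volume.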
The receipts
The documented record here is unusually specific for a corporate intelligence dispute.
In January 2025, Microsoft flagged DeepSeek for extracting large volumes of data through OpenAI's API. By February 2026, OpenAI went to Congress directly, telling the House Select Committee on China that DeepSeek was engaged in "ongoing efforts to free-ride on the capabilities developed by OpenAI and other US frontier labs."
Days later, Anthropic published something more granular. The company identified three Chinese labs by name: DeepSeek, Moonshot, and MiniMax. Together, they ran over 24,000 fraudulent Claude accounts and generated 16 million exchanges. MiniMax drove the majority of volume. Anthropic said it traced some of the accounts to senior staff at the labs themselves.
"These labs generated over 16 million exchanges with Claude through approximately 24,000 fraudulent accounts, in violation of our terms of service and regional access restrictions."
Averaged out, that's roughly 670 exchanges per account. That's not a security researcher probing for vulnerabilities. That's an organized extraction campaign.
The alliance
As of April 7, 2026, OpenAI, Anthropic, and Google are sharing threat intelligence through the Frontier Model Forum, the nonprofit they co-founded with Microsoft in 2023. The model mirrors how cybersecurity firms have operated for years: pool attack data, identify patterns faster, make it harder for any single actor to stay undetected across the industry.
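In concrete terms, the cybersecurity analogy looks something like the toy sketch below. The indicator format is invented for illustration; the Forum has not published what it actually exchanges. The idea is that each lab fingerprints its own suspicious-account signals locally, and only the overlap across providers, which no single lab can see on its own, surfaces a coordinated campaign.

```python
# Toy illustration of pooled threat intelligence. The indicator format is
# invented for this example; the Frontier Model Forum's real schema is not public.
import hashlib


def fingerprint(indicator: str) -> str:
    """Hash an indicator so labs can compare notes without sharing raw customer data."""
    return hashlib.sha256(indicator.encode("utf-8")).hexdigest()


def cross_lab_overlap(*per_lab_indicators: set) -> set:
    """Return indicators reported by more than one lab, the signal no single lab sees alone."""
    counts = {}
    for indicators in per_lab_indicators:
        for item in indicators:
            counts[item] = counts.get(item, 0) + 1
    return {item for item, n in counts.items() if n > 1}


# Each lab hashes its own signals locally (account fingerprints, payment or
# network anomalies), and only the hashes are pooled.
lab_a = {fingerprint(x) for x in ["payment:card-reuse-4242", "asn:AS64500-burst"]}
lab_b = {fingerprint(x) for x in ["asn:AS64500-burst", "ip:203.0.113.7"]}
print(cross_lab_overlap(lab_a, lab_b))  # the shared indicator flags a coordinated campaign
```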
The Trump administration's AI Action Plan had already called for exactly this kind of information-sharing center to combat distillation. The labs got there without waiting for a mandate.
No formal legal action has been announced, partly because AI outputs cannot be copyrighted under U.S. law, which leaves the legal toolkit thin. Terms-of-service violations are real, but enforcing them against labs operating in China is a different problem entirely.
Why the financial threat runs deeper than it looks
The revenue math is straightforward. U.S. officials estimate distillation costs Silicon Valley labs billions of dollars in lost annual revenue. But the more fundamental threat isn't the lost API fees. It's what a good-enough knockoff does to the subscription model.
Most Chinese models are open-weight and free. When a free model benchmarks competitively with Claude or GPT-4o, the case for paying $20 a month gets harder to make. DeepSeek's reasoning model made the point concrete in January 2025: its release wiped nearly $1 trillion off U.S. and European tech stocks in a single day. That wasn't just a market overreaction. It was investors doing the subscription math in real time.
The safety argument also matters, though it tends to get less attention in coverage focused on IP. A distilled model doesn't inherit the original's guardrails. It inherits the outputs, stripped of whatever alignment work went into the system that produced them. U.S. firms have specifically warned that distilled knockoffs can arrive without any protections against misuse, leaving models that could generate bioweapon synthesis routes or run disinformation at scale.
Why it matters
Step back from the competitive framing for a moment. What you're watching is three companies that have every incentive to keep their threat intelligence proprietary deciding that a shared defense is worth more than a private one. That's a significant shift. Intelligence sharing creates its own risks: the labs are reportedly wary that going too far could invite antitrust scrutiny, and any coordination among direct competitors on anything adjacent to pricing or market strategy is legally sensitive territory.
But the more interesting signal is what the alliance reveals about the scale of the problem. If the answer to distillation were a better API rate limiter or a smarter anomaly detection system, each company would have built it quietly and kept the advantage. The fact that they're pooling resources suggests the problem is harder than any of them can solve alone, and that they've accepted that reality.
The Frontier Model Forum was founded in 2023 as a safety coordination body. Its first real test wasn't a rogue AI. It was a business model attack.
Three companies that compete on everything just agreed they share one enemy. If the combined threat intelligence of OpenAI, Anthropic, and Google can't stop distillation at scale, what actually can?
Originally published as an Instagram carousel on @recul.ai.