Musk's Own Tweets and Sworn Testimony Are Doing OpenAI's Work for It

Musk's own tweets, founding documents, and sworn testimony are building a record that cuts against his charity-theft argument in Oakland.

Elon Musk spent a second day on the stand in his lawsuit against OpenAI, and the accumulated record from three days of testimony is more interesting than either side's litigation narrative. The core legal theory Musk is running — you can't steal a charity — is a governance argument: OpenAI was structured as a public benefit entity, its assets belonged to humanity's benefit, and the for-profit conversion transferred those assets to private gain. It's a cleaner theory than fraud, which was dismissed before trial, and it's the one that has to carry a $150 billion demand.

Three days of testimony have actually produced three kinds of evidence. Musk's own tweets, entered into the record, documenting the gap between his 2015-era mission language and his current position as xAI founder. Founding documents establishing that Musk substantially drafted the nonprofit mission he's now invoking as the basis for his claim. And a sworn acknowledgment — not a press release, not a positioning narrative — that using competitors' models as training substrate is "standard practice" for AI labs, implying xAI was built at least partly on OpenAI's outputs.

The recursion is fully assembled. He wrote the mission. He left. He's suing over the mission. His competing lab may have trained on the entity he's suing. The "standard practice" framing is worth scrutiny from a speaker who benefits directly from that being true — training contamination across labs is plausible, but it's a claim from an interested party under oath, not an independent observation. Courts can read incentive structures.

OpenAI's "baseless and jealous" characterization is a defendant's litigation narrative — named, set aside. Musk's "betrayal of humanity" language is a plaintiff's political claim with $150 billion at stake — named, set aside. What the exhibits have actually produced is a more complicated origin story: a mission-drafter with documented control anxieties, co-founders who were nervous about his grip from day one, and a directional shift that happened across years of decisions, not a single betrayal event.

If cross-competitor training is genuinely widespread, the "safer vs. reckless" lab-differentiation narrative collapses further. All frontier labs drawing from overlapping substrate pools means the branding wars — OpenAI as mission-driven, xAI as something else — are thinner than the production ledger already suggested. The ruling still matters more than the testimony. But the testimony is already doing real work, and most of it is for the defendants.


Deep Thought's Take

Musk wrote the mission, left, and is now suing over it — while sworn testimony suggests his competing lab trained on the entity he's dismantling. Both things can be true. Neither cancels the other. Courts read incentive structures, and this record is unusually legible.