Caitlin Kalinowski spent 16 months building OpenAI’s physical AI program. On Saturday, she said the company moved too quickly on something too important.
The week that began with Anthropic being blacklisted by the Pentagon and ended with OpenAI accepting its contract has now claimed OpenAI’s highest-ranking hardware executive.
Caitlin Kalinowski, who joined OpenAI in November 2024 to lead its robotics and consumer hardware division, announced her resignation on Saturday on X. Her statement was short, direct and more open than anything OpenAI itself has said about the deal.
“AI plays an important role in national security,” she wrote. “But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are issues that deserved more consideration than they got.”

In a later post, she went into more detail about the nature of the complaint. “It is primarily a governance concern,” she wrote. “These are too important for deals or announcements to be rushed.”
Kalinowski was careful to frame her departure as a matter of principle rather than a personal falling-out. "This was about principles, not people," she wrote. "I have a lot of respect for Sam and the team."
That last remark carries some weight: Sam Altman himself has admitted that the Pentagon deal was "definitely rushed" and that its rollout provoked significant backlash.
What Kalinowski's resignation adds to that admission is a name and a title: the highest-ranking person at OpenAI whose job was to bring AI into physical systems has decided that the process by which the company's technology is now being brought into weapons systems and surveillance infrastructure was not good enough.
What the deal was about
The sequence of events leading up to this point spanned about a week. Anthropic, which was the only AI company granted permission to operate on the Pentagon’s secret networks following a $200 million contract awarded in July 2025, spent several weeks in tense negotiations with the Defense Department over the terms of its continued use.
Anthropic took the position that its models should not be used for domestic mass surveillance or fully autonomous weapons. The Pentagon, under Defense Secretary Pete Hegseth, insisted on language allowing use “for any lawful purpose” without specific exceptions.
On February 28, as negotiations collapsed, President Trump ordered all federal agencies to stop using Anthropic’s technology, calling the company “radically woke” on Truth Social.
Hegseth has officially classified Anthropic as a national security supply chain risk, a classification previously reserved for foreign adversaries and requiring Defense Department vendors and contractors to certify that they are not using Anthropic’s models.
Hours later, Altman posted on X that OpenAI had reached its own agreement to deploy its models on the Pentagon’s secret network.
OpenAI’s stated position is that its deal includes the same core protections that Anthropic sought: no domestic mass surveillance, no autonomous weapons.
The company published a blog post outlining its approach, arguing that its agreement is more robust than any previous classified AI deployment, including Anthropic's, because of its cloud-only deployment architecture, retained security stack, and contract provisions guided by existing U.S. law rather than tailored bans.
What Kalinowski’s departure means for OpenAI
Kalinowski’s career before OpenAI was unusual in its breadth. She spent nearly six years at Apple as engineering lead for the Mac Pro and MacBook Air programs, including the original unibody MacBook Pro, before moving to Meta’s Oculus division, where she led virtual reality hardware for more than nine years.
Her final role at Meta was leading Project Nazare, later called Orion, the augmented reality glasses initiative that Meta unveiled as a prototype in September 2024 and described as the most advanced AR glasses ever.
She joined OpenAI two months later.
During her 16 months at OpenAI, Kalinowski built what the company calls its physical AI program, including a lab in San Francisco that employs about 100 data collectors who train a robotic arm for household tasks.
Her departure leaves the company without its most experienced hardware leader, at a time when OpenAI has big ambitions to expand beyond software.
OpenAI confirmed her resignation on Saturday, saying in a statement: "We believe our agreement with the Pentagon creates a viable path for the responsible use of AI in the national security domain, while clarifying our red lines: no domestic surveillance and no autonomous weapons. We recognize that people have strong views on these issues, and we will continue to engage in the discussion with employees, government, civil society and communities around the world."
The big picture
The fallout from OpenAI's Pentagon deal wasn't limited to internal disagreements. ChatGPT uninstalls reportedly increased by 295% following the announcement, and Anthropic's Claude climbed to number one in the US App Store, displacing ChatGPT. As of Saturday afternoon, the two apps remained first and second, respectively.
The resignation of the company's robotics chief on Saturday confirms that the cost of the deal for OpenAI is still being counted. Altman wanted to defuse a confrontation between the government and the AI industry. He may yet have done so. Whether the price of that de-escalation, in talent, in trust, and in the specific question of who was right about the guardrails, was worth it is a question that will take longer to answer.