By CEO Sam Altman’s own admission, OpenAI’s deal with the Department of Defense was “definitely rushed,” and “the optics don’t look good.”
After negotiations between Anthropic and the Pentagon fell through on Friday, President Donald Trump directed federal agencies to stop using Anthropic’s technology after a six-month transition period, and Secretary of Defense Pete Hegseth said he was designating the AI company as a supply-chain risk.
Then, OpenAI quickly announced that it had reached a deal of its own for its models to be deployed in classified environments. With Anthropic saying it was drawing red lines around the use of its technology in fully autonomous weapons or mass domestic surveillance, and Altman saying OpenAI had the same red lines, there were some obvious questions: Was OpenAI being honest about its safeguards? Why was it able to reach a deal while Anthropic was not?
So as OpenAI executives defended the agreement on social media, the company also published a blog post outlining its approach.
The post pointed to three areas where it said OpenAI’s models can’t be used: mass domestic surveillance, autonomous weapon systems, and “high-stakes automated decisions (e.g. systems such as ‘social credit’).”
The company said that in contrast to other AI companies that have “reduced or removed their safety guardrails and relied entirely on usage policies as their primary safeguards in national security deployments,” OpenAI’s agreement protects its red lines “through a more expansive, multi-layered approach.”
“We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections,” the blog said. “This is all in addition to the strong existing protections in U.S. law.”
The company added, “We don’t know why Anthropic couldn’t reach this deal, and we hope that they and more labs will consider it.”
After the post was published, Techdirt’s Mike Masnick claimed that the deal “absolutely does allow for domestic surveillance,” because it says the collection of private data will comply with Executive Order 12333 (along with various other laws). Masnick described that order as “how the NSA hides its domestic surveillance by capturing communications by tapping into lines *outside the US* even when it contains information from/on US persons.”
In a LinkedIn post, OpenAI’s head of national security partnerships Katrina Mulligan argued that much of the discussion around the contract language assumes “the only thing standing between Americans and the use of AI for mass domestic surveillance and autonomous weapons is a single usage policy provision in a single contract with the Department of War.”
“That’s not how any of this works,” Mulligan said, adding, “Deployment architecture matters more than contract language […] By limiting our deployment to cloud API, we can ensure that our models can’t be integrated directly into weapons systems, sensors, or other operational hardware.”
Altman also fielded questions about the deal on X, where he admitted it had been rushed and had resulted in significant backlash against OpenAI (to the extent that Anthropic’s Claude overtook OpenAI’s ChatGPT in Apple’s App Store on Saturday). So why do it?
“We really wanted to de-escalate things, and we thought the deal on offer was good,” Altman said. “If we’re right and this does lead to a de-escalation between the [Department of War] and the industry, we’ll look like geniuses, and a company that took on a lot of pain to do things to help the industry. If not, we’ll continue to be characterized as […] rushed and uncareful.”
