@BlackHC
I'm speechless at OpenAI releasing that contract excerpt and acting as if there aren't gaping holes that could be exploited far beyond their stated "red lines." I'm not a lawyer, but this is pretty obvious and common sense. (And to be clear: if Google had signed the same deal, I'd be saying the same thing internally. The issues here are bigger than friendly competition between companies.)

OpenAI's "red lines" are: no mass domestic surveillance, no directing autonomous weapons, and no high-stakes automated decisions. They argue that their cloud-only deployment, safety stack, and cleared OpenAI personnel "in the loop" make violations impossible. They also claim the contract references the relevant laws and policies "as they exist today," so future changes won't weaken the standards. But the actual language they published is still full of obvious escape hatches.

This is why Anthropic refusing to sign makes sense. Reporting on the Anthropic–"DoW"/Pentagon standoff described them saying the proposed contract language was framed as a compromise but paired with "legalese that would allow safeguards to be disregarded at will." You don't need to agree with Anthropic on everything to see what they're reacting to: language that sounds like ethics but cashes out as essentially "subject to whatever the government decides later."

## Autonomous weapons

The problem is that the restriction is conditional: it applies only where law, regulation, or policy "requires human control." If those policy definitions are weak (or later revised), the contract language itself doesn't read like a durable "no autonomous weapons" ban. It reads like "we'll follow whatever the current regime says requires human control." OpenAI says elsewhere that the agreement "locks in" today's standards even if laws and policies change. If that "freeze" clause is real and enforceable, sure. But it's not visible in the excerpt itself, so the excerpt alone doesn't justify the level of confidence they're projecting.

## "High-stakes decisions"

Same loophole. This forbids only decisions that already require human approval under whatever authorities apply. If a decision doesn't formally require approval (or can be reclassified or reshaped so it doesn't), the clause doesn't obviously prohibit automating the step that matters.

## Surveillance

"Directives," "purpose," and "unconstrained" are squishy on purpose: "DoD directives" aren't laws; they're internal policy. That matters because we have real precedents for administrations leaning on aggressive internal legal and policy interpretations as a shield until courts and politics catch up. If you think "secret memos" is alarmist, look at the pattern:

1. Reporting in early 2026 described a previously hidden DHS/ICE legal memo asserting warrantless, forced home entry under certain circumstances, which is exactly the kind of internal-lawyer move that tends to get written, circulated, and only later litigated and retracted.
2. Historically, the Bush-era OLC torture memos are the canonical example of a "legalistic compromise" that later turned out to be a moral and legal disaster. (You don't have to litigate the details to make the point: internal legalese can be used to launder outcomes.)

"Unconstrained" is not a real safeguard. Surveillance can be massive in scale while still "constrained" by selectors, categories, time windows, or a stated "foreign intelligence purpose." And the clause only covers private information, so it leaves out the enormous world of public data that can still be used for profiling, targeting, and "pattern-of-life" analysis at scale.
## Domestic law enforcement

> shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.

This is not a hard prohibition. "Except as permitted" is not a ban; it's a permission for exceptions, and "other applicable law" is an open-ended bucket by design. If you want a concrete, recent example: the Associated Press reported that formal orders extended the Washington, D.C. National Guard deployment through Feb. 28, 2026, to protect federal property and functions and to support federal and D.C. law enforcement. That's exactly the sort of "domestic deployment supporting law enforcement" scenario where this clause stops sounding like a "red line" and starts sounding like legal throat-clearing.

## "Cloud-only / no edge deployment prevents autonomous weapons" rings false

OpenAI's own argument is: cloud-only (no edge devices) means you can't power autonomous weapons. But that's not convincing. You don't need GPT-5.2 running on the missile. You can use a cloud model for high-level decision-making (tasking, prioritization, target recommendation, mission planning) over a satellite link (Starlink or otherwise), while a separate local system handles actual guidance and execution. High latency is totally compatible with "strategic / operational" autonomy while still enabling lethal outcomes; see the sketch at the end of this post.

Once the pattern exists, "additional safety layers" are a policy choice: implementations change, exceptions get made, and today's contract language tends to get "grandfathered" into tomorrow's contract template. So layered safeguards can reduce risk today, but the contract language itself is exactly the kind of "looks strict, bends easily" compromise that becomes precedent. And creating precedent is the real problem here.
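To make the latency point concrete, here is a minimal, deliberately generic sketch of the split described above: a slow "planner" loop updates a shared objective every few seconds (standing in for a high-latency link), while a fast local loop keeps executing between updates. Everything in it (the names, the timings, the toy plan dictionary) is hypothetical illustration, not anything from the contract or any real system; it's just the standard deliberative/reactive layering pattern from robotics.

```python
import threading
import time

# Deliberately generic planner/executor sketch. All names, timings,
# and the toy "plan" are hypothetical; this is the standard
# deliberative/reactive layering pattern, not any real system.

plan = {"objective": "initial tasking"}  # shared high-level plan
lock = threading.Lock()
stop = threading.Event()

def remote_planner():
    # Slow loop: one update every ~3 s, standing in for a
    # high-latency link. It decides *what* to do, not *how*.
    update = 0
    while not stop.is_set():
        time.sleep(3.0)  # simulated round-trip latency
        update += 1
        with lock:
            plan["objective"] = f"revised tasking #{update}"

def local_executor(duration_s=10.0):
    # Fast loop: ~50 Hz, fully local, never blocks on the network.
    # It simply keeps acting on the most recent plan it has.
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        with lock:
            objective = plan["objective"]
        # a real low-level control step would act on `objective` here
        time.sleep(0.02)
    stop.set()

threading.Thread(target=remote_planner, daemon=True).start()
local_executor()
with lock:
    print("final plan after 10 s:", plan["objective"])
```

The structural point is the whole point: seconds of latency on the planning loop don't slow the execution loop at all, which is why "cloud-only" doesn't by itself rule this pattern out.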