
OpenAI's "compromise" with the Pentagon is what Anthropic feared

Published by qimuai · First-hand compilation



Source: https://www.technologyreview.com/2026/03/02/1133850/openais-compromise-with-the-pentagon-is-what-anthropic-feared/

Summary:

OpenAI strikes a deal with the US Department of Defense, reigniting ethical controversy over military AI

As the US military strikes Iran and rushes to roll out a politicized AI strategy, a cooperation agreement between OpenAI and the US Department of Defense has put the ethical dilemma between tech companies and military applications back in the spotlight.

On February 28, OpenAI announced it had reached an agreement allowing the US military to use its technology in classified settings. CEO Sam Altman acknowledged that the negotiations, which began only after the Pentagon publicly rebuked rival Anthropic, were rushed.

In its announcement, OpenAI took pains to draw boundaries, stressing that the agreement bars its technology from being used to develop autonomous weapons or to conduct mass domestic surveillance, and that it did not simply accept the terms Anthropic had refused. On the surface, OpenAI appears to have won both the contract and the moral high ground. On closer inspection, however, the two companies chose very different paths: Anthropic insisted on explicit moral prohibitions and ultimately lost out, while OpenAI opted for a more pragmatic, softer strategy that leans on existing legal frameworks.

OpenAI says its cooperation rests on an assumption that the government will not break the law, and its contract cites a number of relevant laws and policies, including the Fourth Amendment. Legal experts note, however, that the contract does not give OpenAI the kind of free-standing right Anthropic sought: the ability to prohibit otherwise lawful government uses of its technology. Critics argue that existing law alone cannot prevent the emergence of AI-enabled autonomous weapons or surveillance systems, and that relying on an assumption of government lawfulness is more fragile still.

Meanwhile, Anthropic, which held its moral line, has drawn fierce official backlash. On the eve of the US strikes on Tehran, Defense Secretary Pete Hegseth denounced the company's "arrogance and betrayal" on social media and announced it would be classified as a supply chain risk, barring any contractor doing business with the US military from commercial dealings with Anthropic. The move is widely seen as a potential death blow; Anthropic has said it will sue.

The Pentagon has given itself a six-month transition period to replace Claude, Anthropic's model and currently the only one used in classified operations, with models from OpenAI and Elon Musk's xAI. Yet Claude was reportedly still used in strikes on Iran hours after the ban was issued, a sign the swap will be anything but smooth.

The episode underscores how, amid geopolitical tensions, the US military's accelerated AI deployment is pushing tech companies to revisit or abandon ethical red lines they once drew. Whether OpenAI can effectively embed its promised safety guardrails in classified, time-pressured military applications, and whether its law-over-moral-prohibitions stance will win over its own employees, remain serious tests. The fight over the ethical and legal boundaries of military AI is far from over.


Original article:

OpenAI’s “compromise” with the Pentagon is what Anthropic feared
Anthropic pushed for moral boundaries. OpenAI settled for softer legal ones, and now it stands to benefit as the Pentagon rushes out a politicized AI strategy during strikes on Iran.
On February 28, OpenAI announced it had reached a deal that will allow the US military to use its technologies in classified settings. CEO Sam Altman said the negotiations, which the company began pursuing only after the Pentagon’s public reprimand of Anthropic, were “definitely rushed.”
In its announcements, OpenAI took great pains to say that it had not caved to allow the Pentagon to do whatever it wanted with its technology. The company published a blog post explaining that its agreement protected against use for autonomous weapons and mass domestic surveillance, and Altman said the company did not simply accept the same terms that Anthropic refused.
You could read this to say that OpenAI won both the contract and the moral high ground, but reading between the lines and the legalese makes something else clear: Anthropic pursued a moral approach that won it many supporters but failed, while OpenAI pursued a pragmatic and legal approach that is ultimately softer on the Pentagon.
It’s not yet clear if OpenAI can build in the safety precautions it promises as the military rushes out a politicized AI strategy during strikes on Iran, or if the deal will be seen as good enough by employees who wanted the company to take a harder line. Walking that tightrope will be tricky. (OpenAI did not immediately respond to requests for additional information about its agreement.)
But the devil is also in the details. The reason OpenAI was able to make a deal when Anthropic could not was less about boundaries, Altman said, than about approach. “Anthropic seemed more focused on specific prohibitions in the contract, rather than citing applicable laws, which we felt comfortable with,” he wrote.
OpenAI says one basis for its willingness to work with the Pentagon is simply an assumption that the government won’t break the law. The company, which has shared a limited excerpt of its contract, cites a number of laws and policies related to autonomous weapons and surveillance. They are as specific as a 2023 directive from the Pentagon on autonomous weapons (which does not prohibit them but issues guidelines for their design and testing) and as broad as the Fourth Amendment, which has supported protections for Americans against mass surveillance.
However, the published excerpt “does not give OpenAI an Anthropic-style, free-standing right to prohibit otherwise-lawful government use,” wrote Jessica Tillipman, associate dean for government procurement law studies at George Washington University’s law school. It simply states that the Pentagon can’t use OpenAI’s tech to break any of those laws and policies as they’re stated today.
The whole reason Anthropic earned so many supporters in its fight—including some of OpenAI’s own employees—is that they don’t believe these rules are good enough to prevent the creation of AI-enabled autonomous weapons or mass surveillance. And an assumption that federal agencies won’t break the law is little assurance to anyone who remembers that the surveillance practices exposed by Edward Snowden had been deemed legal by internal agencies and were ruled unlawful only after drawn-out battles (not to mention the many surveillance tactics allowed under current law that AI could expand). On this front, we’ve essentially ended up back where we started: allowing the Pentagon to use its AI for any lawful use.
OpenAI could say, as its head of national security partnerships wrote yesterday, that if you believe the government won’t follow the law, then you should also not be confident it would honor the red lines that Anthropic was proposing. But that’s not an argument against setting them. Imperfect enforcement doesn’t make constraints meaningless, and contract terms still shape behavior, oversight, and political consequences.
OpenAI claims a second line of defense. The company says it maintains control over the safety rules governing its models and will not give the military a version of its AI stripped of those safety controls. “We can embed our red lines—no mass surveillance and no directing weapons systems without human involvement—directly into model behavior,” wrote Boaz Barak, an OpenAI employee Altman deputized to speak on the issue, in a post on X.
But the company doesn’t specify how its safety rules for the military differ from its rules for normal users. Enforcement is also never perfect, and it is especially unlikely to be when OpenAI is rolling out these protections in a classified setting for the first time and is expected to do so in just six months.
There’s another question beneath all this: Should it be down to tech companies to prohibit things that are legal but that they find morally objectionable? The government certainly viewed Anthropic’s willingness to play this role as unacceptable. On Friday evening, eight hours before the US launched strikes in Tehran, Defense Secretary Pete Hegseth issued harsh remarks on X. “Anthropic delivered a master class in arrogance and betrayal,” he wrote, and echoed President Trump’s order for the government to cease working with the AI company after Anthropic sought to keep its model Claude from being used for autonomous weapons or mass domestic surveillance. “The Department of War must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose,” Hegseth wrote.
But unless OpenAI’s full contract reveals more, it’s hard not to see the company as sitting on an ideological seesaw, promising that it does have leverage it will proudly use to do what it sees as the right thing while deferring to the law as the main backstop for what the Pentagon can do with its tech.
There are three things to be watching here. One is whether this position will be good enough for OpenAI’s most critical employees. With AI companies spending so heavily on talent, it’s possible that some at OpenAI see in Altman’s justification an unforgivable compromise.
Second, there is the scorched-earth campaign that Hegseth has promised to wage against Anthropic. Going far beyond simply canceling the government’s contract with the company, he announced that it would be classified as a supply chain risk, and that “no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.” There is significant debate about whether this death blow is legally possible, and Anthropic has said it will sue if the threat is pursued. OpenAI has also come out against the move.
Lastly, how will the Pentagon swap out Claude—the only AI model it actively uses in classified operations, including some in Venezuela—while it escalates strikes against Iran? Hegseth granted the agency six months to do so, during which the military will phase in OpenAI’s models as well as those from Elon Musk’s xAI.
But Claude was reportedly used in the strikes on Iran hours after the ban was issued, suggesting that a phase-out will be anything but simple. Even if the months-long feud between Anthropic and the Pentagon is over (which I doubt it is), we are now seeing the Pentagon’s AI acceleration plan put pressure on companies to relinquish lines in the sand they had once drawn, with new tensions in the Middle East as the primary testing ground.
If you have information to share about how this is unfolding, reach out to me via Signal (username: jamesodonnell.22).
