
Anthropic to challenge DoD's supply chain label in court

Posted by qimuai · first-hand translation



Source: https://techcrunch.com/2026/03/05/anthropic-to-challenge-dods-supply-chain-label-in-court/

Summary:

AI company Anthropic to sue the U.S. Defense Department over its supply-chain-risk designation

On Thursday local time, Dario Amodei, co-founder and CEO of AI company Anthropic, announced that the company plans to sue over the U.S. Defense Department's decision to label it a "supply chain risk," a designation he called "legally unsound."

The announcement follows a weeks-long dispute between the two sides: the Pentagon held that it should have unrestricted use of the company's AI systems for "all lawful purposes," while Amodei drew a firm line that Anthropic's AI must not be used for mass surveillance of Americans or for fully autonomous weapons. A supply-chain-risk designation could bar the company from doing business with the Defense Department and its contractors.

In his statement, Amodei stressed that the designation is narrow in scope: it applies only when customers use Anthropic's AI model Claude as a direct part of performing Defense Department contracts, leaving the vast majority of existing customers unaffected. He said the relevant law exists to protect the government's supply chain rather than to punish suppliers, and requires the Secretary of War to use the least restrictive means necessary.

The dispute recently escalated after an internal memo was leaked. Amodei apologized for it, saying the memo was written hastily on "a difficult day," is now out of date, and does not reflect his "careful or considered views." In the memo, he had described rival OpenAI's dealings with the Defense Department as "safety theater." OpenAI has reportedly signed a deal with the Department in Anthropic's place, a move that sparked backlash among OpenAI staff.

Despite the legal challenge, Amodei reiterated that the company's top priority is ensuring that American soldiers and national security experts retain access to critical AI tools during ongoing major combat operations. He revealed that Anthropic is currently supporting some U.S. operations in Iran at "nominal cost" and will continue providing its models until the Defense Department completes its transition.

Legal experts note that Anthropic faces an uphill battle in court, because the relevant law grants the Defense Department broad discretion on national security matters and limits the usual avenues for companies to contest government procurement decisions. Dean Ball, a former White House AI advisor, said courts are generally reluctant to second-guess the government on national security matters, and that overturning the designation carries "a very high bar, but it's not impossible."

Anthropic is likely to file suit in federal court in Washington, D.C.


English source:

Dario Amodei said Thursday that Anthropic plans to challenge the Defense Department’s decision to label the AI firm a supply chain risk in court, a designation he has called “legally unsound.”
The statement comes a few hours after the Department officially designated Anthropic a supply chain risk following a weeks-long dispute over how much control the military should have over AI systems. A supply chain risk designation can bar a company from working with the Pentagon and its contractors. Amodei drew a firm line that Anthropic’s AI should not be used for mass surveillance of Americans or for fully autonomous weapons, but the Pentagon believed it should have unrestricted access for “all lawful purposes.”
In his statement, Amodei said the vast majority of Anthropic’s customers are unaffected by the supply chain risk designation.
“With respect to our customers, it plainly applies only to the use of Claude by customers as a direct part of contracts with the Department of War, not all use of Claude by customers who have such contracts,” he said.
As a preview of what Anthropic will likely argue in court, Amodei said the Department’s letter labeling the firm a supply chain risk is narrow in scope.
“It exists to protect the government rather than to punish a supplier; in fact, the law requires the Secretary of War to use the least restrictive means necessary to accomplish the goal of protecting the supply chain,” Amodei said. “Even for Department of War contractors, the supply chain risk designation doesn’t (and can’t) limit uses of Claude or business relationships with Anthropic if those are unrelated to their specific Department of War contracts.”
Amodei reiterated that Anthropic had been having productive conversations with the Department over the last several days, conversations that some suspect got derailed when an internal memo he sent to staff was leaked. In it, Amodei characterized rival OpenAI’s dealings with the Department of Defense as “safety theater.”
OpenAI has signed a deal to work with the Defense Department in Anthropic’s place, a move that has sparked backlash among OpenAI staff.
Amodei apologized for the leak in his Thursday statement, claiming that the company did not intentionally share the memo or direct anyone else to do so. “It is not in our interest to escalate the situation,” he said.
Amodei said the memo was written within “a few hours” of a series of announcements, including a presidential Truth Social post saying Anthropic would be removed from federal systems, then Defense Secretary Hegseth’s supply chain risk designation, and finally the Pentagon’s deal announcement with OpenAI. He apologized for the tone, calling it “a difficult day for the company” and said the memo didn’t reflect his “careful or considered views.” Written six days ago, he added, it’s now an “out-of-date assessment.”
He finished by saying Anthropic’s top priority is to ensure American soldiers and national security experts maintain access to important tools in the middle of ongoing major combat operations. Anthropic is currently supporting some of the U.S.’s operations in Iran, and Amodei said the company would continue to provide its models to the Defense Department at “nominal cost” for “as long as necessary to make that transition.”
Anthropic could challenge the designation in federal court, likely in Washington, but the law behind the decision makes it harder to contest because it limits the usual ways companies can challenge government procurement decisions and gives the Pentagon broad discretion on national security matters.
Or as Dean Ball — a former Trump-era White House advisor on AI who has spoken out against Hegseth’s treatment of Anthropic — put it: “Courts are pretty reluctant to second-guess the government on what is and is not a national security issue…There’s a very high bar that one needs to clear in order to do that. But it’s not impossible.”
