Anthropic Openly Defies the Pentagon. Trump Fires Back.

Source: https://aibusiness.com/ai-ethics/anthropic-defies-pentagon-sparking-an-ai-safety-debate
Summary:
The U.S. government and AI company Anthropic have publicly split, escalating the fight over safety and sovereignty in defense AI
Updated February 27, 2026
A public confrontation over AI safety, national sovereignty, and vendor control is erupting between the U.S. government and a frontier AI company. Just ahead of a deadline set by the Pentagon, President Trump announced on social media that the federal government would immediately end all work with AI model provider Anthropic, accusing the company of refusing to relax its AI safety guardrails.
The flashpoint was a statement made the previous day by Anthropic CEO Dario Amodei, who said he could not "in good conscience" accede to the government's request to make the company's AI policy more "flexible." While Amodei believes AI has potential for national defense, he argued that, if applied today to mass domestic surveillance or fully autonomous weapons, the technology could "undermine, rather than defend, democratic values."
In a related move, Anthropic recently revised its Responsible Scaling Policy, shifting its emphasis toward transparency rather than an absolute guarantee that the models it releases are not harmful to society. That adjustment, set against its refusal to yield to the government, underscores the pressure AI vendors face to stay competitive while upholding safety commitments.
Industry observers note that far more is at stake than a $200 million defense contract. Kashyap Kompella, CEO of RPA2AI Research, argues the core conflict is "a negotiation over sovereignty and control": governments assert authority over lawful military applications, while AI vendors try to retain some degree of normative governance over their systems after sale.
The episode has triggered ripple effects, with the industry watching whether other major AI vendors such as OpenAI and Google will follow Anthropic's lead. R "Ray" Wang, CEO of Constellation Research, noted that the other companies appear to have accepted the government's terms, raising questions about where the boundaries of cooperation lie. He also warned that Anthropic's hard line could cost it customers, because "when you buy software, you don't want the vendor pushing its ethics on you."
Professor Michael Bennett of the University of Illinois Chicago observed that the Trump administration aims to ensure the U.S. wins the AI race, and that Anthropic may be seen as standing in the way of that goal. Conversely, if the administration backs down against a startup, it could shape how other vendors respond to government pressure.
Vendors also face internal risks: caving to government demands could trigger an exodus of dissenting employees. Bennett noted that Anthropic's most valuable asset is not its model weights but the engineers who build and train the models, which may be a major reason the CEO is standing firm.
Despite the lost contract and the possibility of being classified as a supply chain risk, Anthropic's public standoff with the government marks a new phase in the evolving relationship between AI vendors and governments, and its complex dynamics will shape the future of AI in defense.
Original English text:
The back and forth between Anthropic and the U.S. government highlights broader tensions over AI safety, sovereignty and vendor control in defense applications.
Editor's note: This story was updated Feb. 27, 2026
Just before a deadline set by the Pentagon asking for Anthropic to relax some of its safety guardrails, President Donald Trump announced the government would stop working with the AI model provider.
"I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology," President Trump said in a post on Truth Social. "We don't need it, we don't want it, and will not do business with them again!"
Government agencies that use Anthropic products will have a six-month phase out period, according to the post.
Yesterday, Anthropic's CEO Dario Amodei said in a statement that it cannot "in good conscience accede to their request" to make its AI policy more flexible.
While he believes AI could help defend the U.S., Amodei stated that right now, "we believe AI can undermine, rather than defend, democratic values," especially for mass domestic surveillance and to power fully autonomous weapons.
As Anthropic draws a line in the sand with the Pentagon, AI vendors and enterprises are watching the situation closely, triggering a broader discussion about who has the authority to define safe use of AI and what safe AI looks like.
On Tuesday, Anthropic revamped its "Responsible Scaling Policy," in a move that allows it to focus more on transparency and less on ensuring the models it releases are not harmful to society.
The contrast between Anthropic's decision to scale back its RSP while refusing to bow to the government shows the pressure AI vendors are under to remain competitive while ensuring their models are not harmful to society.
Anthropic is not the only vendor feeling a rise in political tension. OpenAI and Google employees are also pushing their employers to follow Anthropic's lead.
"Frontier AI companies are no longer neutral infrastructure providers; they are strategic actors whose models have dual-use military relevance," said Kashyap Kompella, CEO and founder of RPA2AI Research. "Like the chip vendors, we are witnessing the normalization of AI vendors as geopolitical stakeholders. The question is not whether AI will be used in defense contexts; it already is, but who sets the terms of that use?"
For R "Ray" Wang, CEO of Constellation Research, the push-pull between Anthropic and the government raises questions about government contracts with other tech vendors.
"What is Google, xAI and OpenAI doing differently that allows them to be fine, whereas Anthropic is not OK," Wang said, given the AI vendor's concerns around human surveillance and autonomous weapons. "The other companies were fine with it, but Anthropic wasn't?"
But he also believes Anthropic's decision could affect its bottom line with customers being turned off by a vendor forcing them to use systems in a certain way. The Pentagon, in particular, needs flexibility, Wang said.
"They need systems that are unencumbered," he said. "When you buy software from someone, you don't want them to push their ethics on you."
Other enterprises might not want to get in between Anthropic and the government, Wang continued.
"AI is going to be different for every government and every individual," he said. "The software should be set and designed so you can apply your values to the software, not be dictated by the software vendor what values are going to be given to you."
Anthropic's tangle with the government signals how quickly the relationship between AI vendors and the government is evolving — as well as the complexity of these dynamics.
On the surface, Anthropic will lose its $200 million contract with the Pentagon and some opportunities if the Pentagon classifies it as a supply chain risk, but underneath, what's at stake is much more nuanced, Kompella said.
"What lies beneath is a negotiation over sovereignty and control," he said. "Governments assume authority over lawful military application. AI vendors are attempting to retain some degree of normative governance over their systems post-sale."
Michael Bennett, associate vice chancellor for data science and AI strategy at the University of Illinois Chicago, pointed out that the Trump administration aims to ensure the U.S. will win the AI race, which means Anthropic could be standing in the way of those goals. On the other hand, if the administration backs down against Anthropic, a startup, it might affect how other vendors respond to government pressure.
Vendors, too, run the risk of following through on government demands and potentially triggering an exodus of employees who disagree with the decision.
"[Anthropic understands] that its most valuable asset is not the weights of Claude but the programmers who are working with it, who are training it," Bennett said. "It's probably a big part of the reason the CEO is standing firm because he knows that the employees who work for Anthropic expect that."
Nevertheless, the latest decision by President Trump will trouble the U.S. intelligence community, Bennett said.
"Claude is widely considered to be more useful than most alternatives," he said. "Removing it from defense intelligence and other federal government workflows would almost certainly be disruptive."
Article link: https://www.qimuai.cn/?post=3446