
Will the Pentagon's Anthropic controversy scare startups away from defense work?

Published by qimuai · 10 reads · first-hand translation



Source: https://techcrunch.com/2026/03/08/will-the-pentagons-anthropic-controversy-scare-startups-away-from-defense-work/

Summary:

Cooperation between the US AI industry and government agencies has recently sparked a series of controversies. After Anthropic's negotiations with the Pentagon broke down, the Trump administration designated the company a supply-chain risk, and Anthropic said it would fight the designation in court. OpenAI, meanwhile, quickly announced its own deal with the Department of Defense, prompting a strong user backlash: ChatGPT uninstalls surged while rival Anthropic's Claude app climbed to the top of the App Store charts. At least one OpenAI executive resigned over concerns that the deal was struck without adequate safety guardrails.

On its latest podcast, TechCrunch discussed what these events mean for startups. The hosts noted that the controversy around both companies centers on whether, and how, their technology might be used in lethal operations; that life-and-death ethical dimension has drawn far more scrutiny than a typical government contract would. Unlike companies such as General Motors, which have quietly done defense work for years, OpenAI and Anthropic sit permanently in the public spotlight because of their widely used consumer products.

Although both companies have publicly insisted on limits to military uses of their AI, Anthropic has pushed back far harder on changes to its contract terms. The episode exposes something unusual: a US government agency attempting to unilaterally modify an existing contract. That should be a warning to any startup considering government work; in contrast to the traditionally slow and stable contracting process, the risk of sudden term changes is rising in the current political environment.

Notably, the dispute also has a personal dimension. Anthropic's chief executive and the Department of Defense's chief technology officer reportedly have a strained relationship, which has deepened the negotiating stalemate. The situation is still evolving, but it already makes clear that AI companies entering the national-security arena face ethical scrutiny, public pressure, and political risk all at once.


English source:

In just over a week, negotiations over the Pentagon’s use of Anthropic’s Claude technology fell through, the Trump administration designated Anthropic a supply-chain risk, and the AI company said it would fight that designation in court.
OpenAI, meanwhile, quickly announced a deal of its own, prompting backlash that saw users uninstalling ChatGPT and pushing Anthropic’s Claude to the top of the App Store charts. And at least one OpenAI executive has quit over concerns that the announcement was rushed without appropriate guardrails in place.
On the latest episode of TechCrunch’s Equity podcast, Kirsten Korosec, Sean O’Kane, and I discussed what this means for other startups seeking to work with the federal government, especially the Pentagon, as Kirsten wondered, “Are we going to see a changing of the tune a little bit?”
Sean pointed out that this is an unusual situation in a number of ways, in part because OpenAI and Anthropic make products that “no one can shut up about.” And crucially, this is a dispute over “how their technologies are being used or not being used to kill people,” so it’s naturally going to draw more scrutiny.
Still, Kirsten argued, this is a situation that should “give any startup pause.”
Read a preview of our conversation, edited for length and clarity, below.
Kirsten: I’m wondering if other startups are starting to look at what’s happened with the federal government, specifically the Pentagon and Anthropic, that debate and wrestling match, and [take] pause about whether they want to be going after federal dollars. Are we going to see a changing of the tune a little bit?
Sean: I wonder about that, too. I think no, to some extent, in the near term, if only because when you really try to think about all the different companies, whether they’re startups or even more established Fortune 500s that do work with the government and in particular with the Department of Defense or the Pentagon, [for] a lot of them, that work flies under the radar.
General Motors makes defense vehicles for the Army and has done [that] for a very long time and has worked on all electric versions of those vehicles and autonomous versions. There’s stuff like that that goes on all the time and it just never really hits the zeitgeist. I think the problem that OpenAI and Anthropic ran into within the last week is like, these are companies that make products that a ton of people use — and also more importantly, [that] no one can shut up about.
So there’s just such a spotlight on them, that naturally highlights their involvement to a level that I think most of the other companies that are contracting with the federal government — and, in particular, any of the war-fighting elements of the federal government — don’t necessarily have to deal with.
The only caveat I’ll add to that is a lot of the heat around this discussion between Anthropic and OpenAI and the Pentagon is very specifically about how their technologies are being used or not being used to kill people, or in parts of the missions that are killing people. It’s not just the attention that’s on them and the familiarity we have with their brands, there is an extra element there that I feel is more abstract when you’re thinking about General Motors as a defense contractor or whatever.
I don’t think we’re going to see, like, Applied Intuition or any of these other companies that have been framing themselves as dual use back off much, just because I don’t see the spotlight on it and there’s just not the sort of shared understanding of what that impact might be.
Anthony: This story is so unique and specific to these companies and personalities in a lot of ways. I mean, there have been a lot of really interesting thought pieces about: What is the role of technology in government? [Of] AI in government? And I think those are all good and worthwhile questions to ask and explore.
I think also, though, that this is a very curious lens through which to examine some of those things because Anthropic and OpenAI are not actually that different in a lot of ways or the stances they’re taking. It’s not like one company is saying, “Hey, I don’t want to work with the government” and one is saying, “Yes, I do.” Or one is saying, “You can do whatever you want.” and [the other is] saying, “No, I want to have restrictions.” Both of them, at least publicly, are saying, “We want restrictions on how our AI gets used.” It just seems like Anthropic is digging in their heels a lot more about: You cannot change the terms in this way.
And then on top of that, there also just seems to be a personality layer, where the CEO of Anthropic and Emil Michael — who a lot of TechCrunch readers might remember from his Uber days, and is now [chief technology officer for the Department of Defense] — apparently just really don’t like each other. Reportedly.
Sean: Yes, there’s a very big “girls are fighting” element here that we should not overlook.
Kirsten: Yeah, a little bit. There is, but the implications are a little bit stronger than that. Again, to pull back a little bit, what we’re talking about here is the Pentagon and Anthropic coming into a dispute in which Anthropic appears to have lost, although I should say they are still very much being used by the military. They are considered a crucial technology, but OpenAI has kind of stepped in, and this is evolving and will likely change by the time this episode comes out.
The blowback has been interesting for OpenAI, where we’ve seen a lot of uninstalls of ChatGPT — I think they surged 295% after OpenAI locked in the deal with the Department of Defense.
To me, all of this is noise to the really critical and dangerous thing, which is that the Pentagon was seeking to change existing terms on an existing contract. And that is really important and should give any startup pause because the political machine that’s happening right now, particularly with the DoD, appears to be different. This isn’t normal. Contracts take forever to get baked in at the government level and the fact that they’re seeking to change those terms is a problem.
