
Silicon Valley spooks the AI safety advocates.

Posted by qimuai · Reads: 6 · First-hand compilation


Source: https://techcrunch.com/2025/10/17/silicon-valley-spooks-the-ai-safety-advocates/

Summary:

Attacks by Silicon Valley executives on AI safety advocates have recently sent shockwaves through the industry. White House AI and Crypto Czar David Sacks and OpenAI Chief Strategy Officer Jason Kwon publicly questioned the motives of certain safety organizations, alleging that interest groups are pulling strings behind the scenes. Several safety researchers have since asked to speak anonymously for fear of retaliation.

At the heart of the dispute is Silicon Valley's deepening divide between developing AI responsibly and pursuing commercial scale. On social media, Sacks accused Anthropic of leveraging safety concerns to push legislation that erects regulatory barriers against smaller competitors. Notably, Anthropic was the only major backer of SB 53, California's newly passed AI safety law, which requires large AI companies to meet safety reporting obligations.

Meanwhile, OpenAI issued subpoenas to seven nonprofits, including Encode, demanding their communications related to Musk and Zuckerberg. Chief Strategy Officer Kwon said the move was meant to determine whether critics were engaged in "coordination," while OpenAI's own Joshua Achiam publicly pushed back, saying the move "doesn't seem great."

Industry observers note that the episode reflects a deeper contest between safety regulation and commercial expansion. Heading into 2026, the AI safety movement is gaining real momentum, and the pushback from Silicon Valley's giants may itself be evidence that the advocacy is having an effect. With AI now deeply entwined with the US economy, how to balance innovation against safety constraints will be a defining question for the industry.

Full article (English source):

Silicon Valley leaders including White House AI & Crypto Czar David Sacks and OpenAI Chief Strategy Officer Jason Kwon caused a stir online this week for their comments about groups promoting AI safety. In separate instances, they alleged that certain advocates of AI safety are not as virtuous as they appear, and are either acting in the interest of themselves or billionaire puppet masters behind the scenes.

AI safety groups that spoke with TechCrunch say the allegations from Sacks and OpenAI are Silicon Valley’s latest attempt to intimidate its critics, but certainly not the first. In 2024, some venture capital firms spread rumors that a California AI safety bill, SB 1047, would send startup founders to jail. The Brookings Institution labeled the rumor as one of many “misrepresentations” about the bill, but Governor Gavin Newsom ultimately vetoed it anyway.

Whether or not Sacks and OpenAI intended to intimidate critics, their actions have sufficiently scared several AI safety advocates. Many nonprofit leaders that TechCrunch reached out to in the last week asked to speak on the condition of anonymity to spare their groups from retaliation.

The controversy underscores Silicon Valley’s growing tension between building AI responsibly and building it to be a massive consumer product — a theme my colleagues Kirsten Korosec, Anthony Ha, and I unpack on this week’s Equity podcast. We also dive into a new AI safety law passed in California to regulate chatbots, and OpenAI’s approach to erotica in ChatGPT.

On Tuesday, Sacks wrote a post on X alleging that Anthropic — which has raised concerns over AI’s ability to contribute to unemployment, cyberattacks, and catastrophic harms to society — is simply fearmongering to get laws passed that will benefit itself and drown out smaller startups in paperwork. Anthropic was the only major AI lab to endorse California’s Senate Bill 53 (SB 53), a bill that sets safety reporting requirements for large AI companies, which was signed into law last month.

Sacks was responding to a viral essay from Anthropic co-founder Jack Clark about his fears regarding AI. Clark delivered the essay as a speech at the Curve AI safety conference in Berkeley weeks earlier. Sitting in the audience, it certainly felt like a genuine account of a technologist’s reservations about his products, but Sacks didn’t see it that way.

Sacks said Anthropic is running a “sophisticated regulatory capture strategy,” though it’s worth noting that a truly sophisticated strategy probably wouldn’t involve making an enemy out of the federal government. In a follow-up post on X, Sacks noted that Anthropic has positioned “itself consistently as a foe of the Trump administration.”
Also this week, OpenAI’s chief strategy officer, Jason Kwon, wrote a post on X explaining why the company was sending subpoenas to AI safety nonprofits, such as Encode, a nonprofit that advocates for responsible AI policy. (A subpoena is a legal order demanding documents or testimony.) Kwon said that after Elon Musk sued OpenAI — over concerns that the ChatGPT-maker has veered away from its nonprofit mission — OpenAI found it suspicious how several organizations also raised opposition to its restructuring. Encode filed an amicus brief in support of Musk’s lawsuit, and other nonprofits spoke out publicly against OpenAI’s restructuring.

“This raised transparency questions about who was funding them and whether there was any coordination,” said Kwon.

NBC News reported this week that OpenAI sent broad subpoenas to Encode and six other nonprofits that criticized the company, asking for their communications related to two of OpenAI’s biggest opponents, Musk and Meta CEO Mark Zuckerberg. OpenAI also asked Encode for communications related to its support of SB 53.

One prominent AI safety leader told TechCrunch that there’s a growing split between OpenAI’s government affairs team and its research organization. While OpenAI’s safety researchers frequently publish reports disclosing the risks of AI systems, OpenAI’s policy unit lobbied against SB 53, saying it would rather have uniform rules at the federal level.

OpenAI’s head of mission alignment, Joshua Achiam, spoke out about his company sending subpoenas to nonprofits in a post on X this week.

“At what is possibly a risk to my whole career I will say: this doesn’t seem great,” said Achiam.

Brendan Steinhauser, CEO of the AI safety nonprofit Alliance for Secure AI (which has not been subpoenaed by OpenAI), told TechCrunch that OpenAI seems convinced its critics are part of a Musk-led conspiracy. However, he argues this is not the case, and that much of the AI safety community is quite critical of xAI’s safety practices, or lack thereof.

“On OpenAI’s part, this is meant to silence critics, to intimidate them, and to dissuade other nonprofits from doing the same,” said Steinhauser. “For Sacks, I think he’s concerned that [the AI safety] movement is growing and people want to hold these companies accountable.”

Sriram Krishnan, the White House’s senior policy advisor for AI and a former a16z general partner, chimed in on the conversation this week with a social media post of his own, calling AI safety advocates out of touch. He urged AI safety organizations to talk to “people in the real world using, selling, adopting AI in their homes and organizations.”

A recent Pew study found that roughly half of Americans are more concerned than excited about AI, but it’s unclear what worries them exactly. Another recent study went into more detail and found that American voters care more about job losses and deepfakes than catastrophic risks caused by AI, which the AI safety movement is largely focused on.

Addressing these safety concerns could come at the expense of the AI industry’s rapid growth — a trade-off that worries many in Silicon Valley. With AI investment propping up much of America’s economy, the fear of over-regulation is understandable.

But after years of unregulated AI progress, the AI safety movement appears to be gaining real momentum heading into 2026. Silicon Valley’s attempts to fight back against safety-focused groups may be a sign that they’re working.
