
California's landmark AI transparency bill, SB 53, has officially taken effect.

Published by qimuai · First-hand compilation



Source: https://www.theverge.com/ai-artificial-intelligence/787918/sb-53-the-landmark-ai-transparency-bill-is-now-law-in-california

Summary:

After months of intense debate, California Governor Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act (Senate Bill 53) on Monday, bringing the landmark AI regulation into force. The law requires large AI companies to establish safety transparency frameworks, though the final version strikes a delicate balance between regulatory force and industry demands.

Compared with last year's version, which was vetoed over concerns it would stifle innovation, the new law makes significant adjustments: instead of requiring every AI developer to run risk tests, it focuses on large companies, compelling them to publish their safety protocol frameworks and to disclose any updates within 30 days. The law also creates a public channel for reporting safety incidents and provides legal protections for whistleblowers who expose significant risks.

Notably, the law drops contested provisions such as third-party evaluations, and its safety frameworks largely defer to voluntary industry standards, which industry observers regard as guidelines with little in the way of penalties. This design addresses the concerns about the innovation climate that Newsom cited when vetoing last year's bill, and it also explains why Anthropic came around to publicly supporting the bill after weeks of negotiations, while companies such as Meta and OpenAI continue to push back through political action committees and lobbying.

As a global technology hub, California's move will serve as an important template for AI regulation across the United States. Under the law, the California Department of Technology will recommend annual updates to the statute based on technological developments and international standards, signaling that AI regulation is entering a phase of continuous, routine evolution.

Full translation:

Senate Bill 53, California's landmark AI transparency bill, is now officially law after dividing AI companies and making headlines for months.

This hotly debated legislation establishes safety reporting requirements for large AI companies. On Monday local time, Governor Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act, authored by State Senator Scott Wiener. It is the second draft of such a bill: last year Newsom vetoed the first version, SB 1047, on the grounds that it was too strict and could stifle AI innovation in the state. That version would have required all AI developers, especially makers of models with training costs of $100 million or more, to test for specific risks. After the veto, Newsom tasked AI researchers with drafting an alternative, which was published as a 52-page report and formed the basis of SB 53.

Some of the researchers' recommendations made it into SB 53, such as requiring large AI companies to disclose their safety and security processes, establishing protections for employee whistleblowers, and sharing information directly with the public for transparency. But provisions such as third-party evaluations did not make it into the final version.

According to the announcement, the law requires large AI developers to "publicly publish a framework on their website describing how the company has incorporated national standards, international standards, and industry-consensus best practices into its frontier AI framework." Any update to a safety and security protocol must be published, along with the reasoning for it, within 30 days. It is worth noting, however, that this part is not necessarily a win for AI whistleblowers and proponents of regulation: many AI companies that lobby against regulation advocate voluntary frameworks and best practices, which are closer to guidelines than binding rules and carry few, if any, penalties.

The law does create a two-way reporting mechanism: both AI companies and members of the public can "report potential critical safety incidents to California's Office of Emergency Services," and the law "protects whistleblowers who disclose significant health and safety risks posed by frontier models, and creates a civil penalty for noncompliance, enforceable by the Attorney General's office." The announcement also noted that the California Department of Technology will recommend updates to the law each year "based on multistakeholder input, technological developments, and international standards."

Although most AI companies initially opposed the bill, publicly or privately, claiming it would drive companies out of California, the industry was divided over SB 53. The companies knew the stakes: with nearly 40 million residents and a handful of AI hubs, California has outsized influence on the AI industry and how it will be regulated.

After weeks of negotiations over the bill's wording, Anthropic ultimately endorsed SB 53 publicly. Meta, by contrast, launched a state-level super PAC in August to help shape AI legislation in California, and OpenAI lobbied against such legislation that same month, with its chief global affairs officer, Chris Lehane, writing to Newsom that "California's leadership in technology regulation is most effective when it complements effective global and federal safety ecosystems."

Lehane suggested that AI companies should be able to satisfy California's requirements by signing onto federal or global agreements instead, writing: "In order to make California a leader in global, national and state-level AI policy, we encourage the state to consider frontier model developers compliant with its state requirements when they sign onto a parallel regulatory framework like the [EU Code of Practice] or enter into a safety-oriented agreement with a relevant US federal government agency."


English source:

SB 53, the landmark AI transparency bill, is now law in California
The hotly debated legislation establishes safety reporting requirements for large AI companies.
Senate Bill 53, the landmark AI transparency bill that has divided AI companies and made headlines for months, is now officially law in California.
On Monday, California Gov. Gavin Newsom signed the “Transparency in Frontier Artificial Intelligence Act,” which was authored by Sen. Scott Wiener (D-CA). It’s the second draft of such a bill, as Newsom vetoed the first version — SB 1047 — last year due to concerns it was too strict and could stifle AI innovation in the state. It would have required all AI developers, especially makers of models with training costs of $100 million or more, to test for specific risks. After the veto, Newsom tasked AI researchers with coming up with an alternative, which was published in the form of a 52-page report — and formed the basis of SB 53.
Some of the researchers’ recommendations made it into SB 53, like requiring large AI companies to reveal their safety and security processes, allowing for whistleblower protections for employees at AI companies, and sharing information directly with the public for transparency purposes. But some aspects didn’t make it into the bill — like third-party evaluations.
As part of the bill, large AI developers will need to “publicly publish a framework on [their] website describing how the company has incorporated national standards, international standards, and industry-consensus best practices into its frontier AI framework,” per a release. Any large AI developer that makes an update to its safety and security protocol will also need to publish the update, and its reasoning for it, within 30 days. But it’s worth noting this part isn’t necessarily a win for AI whistleblowers and proponents of regulation. Many AI companies that lobby against regulation propose voluntary frameworks and best practices — which can be seen as guidelines rather than rules, with few, if any, penalties attached.
The bill does create a new way for both AI companies and members of the public to “report potential critical safety incidents to California’s Office of Emergency Services,” per the release, and “protects whistleblowers who disclose significant health and safety risks posed by frontier models, and creates a civil penalty for noncompliance, enforceable by the Attorney General’s office.” The release also said that the California Department of Technology would recommend updates to the law every year “based on multistakeholder input, technological developments, and international standards.”
AI companies were divided on SB 53, though most were initially either publicly or privately against the bill, saying it would drive companies out of California. They knew the stakes: With nearly 40 million residents of California and a handful of AI hubs, the state has outsized influence on the AI industry and how it will be regulated.
SB 53 had been publicly endorsed by Anthropic after weeks of negotiations on the bill’s wording, but Meta in August launched a state-level super PAC to help shape AI legislation in California. And OpenAI had lobbied against such legislation in August, with its chief global affairs officer, Chris Lehane, writing to Newsom that “California’s leadership in technology regulation is most effective when it complements effective global and federal safety ecosystems.”
Lehane suggested that AI companies should be able to get around California state requirements by signing onto federal or global agreements instead, writing, “In order to make California a leader in global, national and state-level AI policy, we encourage the state to consider frontier model developers compliant with its state requirements when they sign onto a parallel regulatory framework like the [EU Code of Practice] or enter into a safety-oriented agreement with a relevant US federal government agency.”