Anthropic endorses SB 53
Source: https://www.anthropic.com/news/anthropic-is-endorsing-sb-53
Summary:
Anthropic backs California's SB 53 AI bill, seeking to balance innovation and safety
[California] — As artificial intelligence advances at a rapid pace, how to effectively regulate this disruptive technology has become a global concern. Frontier AI company Anthropic has publicly announced its full support for California Senate Bill 53 (SB 53), which would establish a "trust but verify" regulatory framework for the state's most powerful AI systems. The move signals a proactive stance by a technology company on AI governance and offers the industry a model for responsible development.
Anthropic says that although it believes frontier AI safety is best addressed through unified federal regulation rather than a patchwork of state rules, Washington's slow progress toward consensus and the rapid evolution of AI make state-level legislation especially important. SB 53 draws on the lessons of California's earlier attempt at AI legislation (SB 1047) and aims to protect public safety in a more workable way.
Governor Newsom previously convened the Joint California Policy Working Group, a panel of academics and industry experts, to make recommendations on AI governance. The working group endorsed a "trust but verify" approach, and SB 53 puts that principle into practice. Unlike last year's effort, which leaned toward prescribing specific technical standards, SB 53 emphasizes disclosure, using transparency to achieve effective oversight of powerful AI systems.
The core requirements of SB 53 include:
- Develop and publish safety frameworks: companies building powerful AI systems must disclose how they manage, assess, and mitigate catastrophic risks (those that could cause mass casualties or substantial monetary damages).
- Release transparency reports: before deploying powerful new models, companies must publish public transparency reports summarizing their catastrophic risk assessments and the steps taken in response.
- Report safety incidents promptly: critical safety incidents must be reported to the state within 15 days, and summaries of assessments of potential catastrophic risk from internally deployed models may be disclosed confidentially as needed.
- Strengthen whistleblower protections: clear protections for those who report violations of the bill's requirements or substantial dangers to public health and safety.
- Ensure public accountability: companies are legally accountable for the commitments in their safety frameworks, and violations carry monetary penalties.
Anthropic notes that many of these requirements are already standard practice at Anthropic and other frontier AI companies such as Google DeepMind, OpenAI, and Microsoft. The legislation would turn these industry best practices into legal standards, holding all covered models to the same safety bar. Notably, the bill focuses on large companies developing the most powerful AI systems while exempting startups and smaller firms, avoiding unnecessary regulatory burden.
Anthropic believes SB 53's transparency requirements will have a lasting impact on frontier AI safety. They help create a level playing field, ensuring that even amid intense competition, AI developers continue to meet their disclosure obligations rather than lowering safety standards to get ahead.
Looking ahead, Anthropic says that while SB 53 lays a solid regulatory foundation, there is still room for improvement in several areas: defining more precisely which AI systems should be regulated (coverage is currently based on training compute in FLOPS); requiring developers to provide more detail about their tests, evaluations, and mitigations; and allowing the regulatory framework to adapt as AI technology evolves.
Anthropic commends Senator Scott Wiener and Governor Newsom for their leadership on responsible AI governance. The company stresses that the question is not whether AI governance is needed, but whether we build it thoughtfully today or react to it tomorrow; SB 53 offers a solid path toward the former. Anthropic urges California to pass the bill and looks forward to working with policymakers in Washington and around the world to develop comprehensive approaches that protect the public interest while maintaining America's leadership in AI.
Original English text:
Anthropic is endorsing SB 53
Anthropic is endorsing SB 53, the California bill that governs powerful AI systems built by frontier AI developers like Anthropic. We’ve long advocated for thoughtful AI regulation and our support for this bill comes after careful consideration of the lessons learned from California's previous attempt at AI regulation (SB 1047). While we believe that frontier AI safety is best addressed at the federal level instead of a patchwork of state regulations, powerful AI advancements won’t wait for consensus in Washington.
Governor Newsom assembled the Joint California Policy Working Group—a group of academics and industry experts—to provide recommendations on AI governance. The working group endorsed an approach of ‘trust but verify’, and Senator Scott Wiener’s SB 53 implements this principle through disclosure requirements rather than the prescriptive technical mandates that plagued last year's efforts.
What SB 53 achieves
SB 53 would require large companies developing the most powerful AI systems to:
- Develop and publish safety frameworks, which describe how they manage, assess, and mitigate catastrophic risks—risks that could foreseeably and materially contribute to a mass casualty incident or substantial monetary damages.
- Release public transparency reports summarizing their catastrophic risk assessments and the steps taken to fulfill their respective frameworks before deploying powerful new models.
- Report critical safety incidents to the state within 15 days, and even confidentially disclose summaries of any assessments of the potential for catastrophic risk from the use of internally-deployed models.
- Provide clear whistleblower protections that cover violations of these requirements as well as specific and substantial dangers to public health/safety from catastrophic risk.
- Be publicly accountable for the commitments made in their frameworks or face monetary penalties.
These requirements would formalize practices that Anthropic and many other frontier AI companies already follow. At Anthropic, we publish our Responsible Scaling Policy, detailing how we evaluate and mitigate risks as our models become more capable. We release comprehensive system cards that document model capabilities and limitations. Other frontier labs (Google DeepMind, OpenAI, Microsoft) have adopted similar approaches while vigorously competing at the frontier. Now all covered models will be legally held to this standard. The bill also appropriately focuses on large companies developing the most powerful AI systems, while providing exemptions for startups and smaller companies that are less likely to develop powerful models and should not bear unnecessary regulatory burdens.
SB 53’s transparency requirements will have an important impact on frontier AI safety. Without it, labs with increasingly powerful models could face growing incentives to dial back their own safety and disclosure programs in order to compete. But with SB 53, developers can compete while ensuring they remain transparent about AI capabilities that pose risks to public safety, creating a level playing field where disclosure is mandatory, not optional.
Looking ahead
SB 53 provides a strong regulatory foundation, but we can and should build upon this progress in the following areas, and we look forward to working with policymakers to do so:
- The bill currently decides which AI systems to regulate based on how much computing power (FLOPS) was used to train them. The current threshold (10^26 FLOPS) is an acceptable starting point, but there’s always a risk that some powerful models may not be covered (a rough illustration of this threshold follows this list).
- Similarly, developers should be required to provide greater detail about the tests, evaluations, and mitigations they undertake. When we share our safety research, document our red team testing, and explain our deployment decisions—as we have done alongside industry players via the Frontier Model Forum—it strengthens rather than weakens our work.
- Lastly, regulations need to evolve as AI technology advances. Regulators should have the ability to update rules as needed to keep up with new developments and maintain the right balance between safety and innovation.
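To give a concrete sense of the 10^26 FLOPS coverage threshold mentioned above, here is a minimal, purely illustrative Python sketch. It assumes the common rule of thumb that dense-transformer training compute is roughly 6 × parameters × training tokens; the threshold value comes from the bill, but the model sizes, token counts, and helper names (estimated_training_flops, is_covered) are hypothetical and not part of SB 53 or Anthropic's practices.

```python
# Illustrative sketch only: checking a training run against a compute threshold
# like 10^26 FLOPs. Uses the common approximation that dense-transformer training
# compute is about 6 * parameters * tokens. Example numbers are hypothetical.

THRESHOLD_FLOPS = 1e26  # coverage threshold stated in SB 53


def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training-compute estimate for a dense transformer: ~6 * N * D."""
    return 6.0 * n_params * n_tokens


def is_covered(n_params: float, n_tokens: float) -> bool:
    """True if the estimated training compute meets or exceeds the threshold."""
    return estimated_training_flops(n_params, n_tokens) >= THRESHOLD_FLOPS


if __name__ == "__main__":
    # Hypothetical 1-trillion-parameter model on 20 trillion tokens:
    # 6 * 1e12 * 2e13 = 1.2e26 FLOPs, above the 1e26 threshold.
    print(is_covered(1e12, 2e13))    # True
    # Hypothetical 70-billion-parameter model on 15 trillion tokens:
    # 6 * 7e10 * 1.5e13 = 6.3e24 FLOPs, below the threshold.
    print(is_covered(7e10, 1.5e13))  # False
```

As the bullet above notes, any fixed compute cutoff is only a proxy: a model trained with less compute but stronger algorithms could still be powerful yet fall below the line, which is why the bill's threshold is described as a starting point rather than a final answer.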
We commend Senator Wiener and Governor Newsom for their leadership on responsible AI governance. The question isn't whether we need AI governance—it's whether we'll develop it thoughtfully today or reactively tomorrow. SB 53 offers a solid path toward the former. We encourage California to pass it, and we look forward to working with policymakers in Washington and around the world to develop comprehensive approaches that protect public interests while maintaining America's AI leadership.