Disrupting Malicious Uses of AI: October 2025
Source: https://openai.com/global-affairs/disrupting-malicious-uses-of-ai-october-2025
Summary:
Tech giant keeps up the fight against AI abuse, safeguarding digital security and the public interest
Beijing, [publication date] — A technology company committed to ensuring that artificial general intelligence (AGI) benefits all of humanity has released its latest report detailing its progress in combating malicious uses of AI. Since launching its public threat reporting in February 2024, the company has identified and disrupted more than 40 networks that violated its usage policies, with the aim of keeping AI within a framework of democratic principles and common-sense rules and protecting the public from real harms.
According to the report, the disrupted activity spans a wide range of abuses, including attempts by authoritarian regimes to use AI to control populations or coerce other states, as well as scams, malicious cyber activity, and covert influence operations. The company stressed that its core mission is to ensure AGI benefits all of humanity, advanced by deploying innovations that help people solve difficult problems and by building "democratic AI" grounded in common-sense rules that guard against real harms.
The report further notes that over the past quarter the company has shared case studies demonstrating its ability to detect and disrupt malicious use of its models. Notably, the threat actors observed so far have mainly bolted AI onto their existing playbooks to move faster, rather than gaining novel offensive capability from the models. When activity violates its policies, the company bans the accounts involved and, where appropriate, shares insights with partners.
Together with ongoing policy enforcement and close collaboration with industry peers, the report aims to raise public awareness of AI abuse and to keep improving protections for everyday users. The company said it remains committed to maintaining a healthy AI ecosystem and ensuring this frontier technology genuinely serves human well-being.
Original English text:
Our mission is to ensure that artificial general intelligence benefits all of
humanity. We advance this mission by deploying innovations that help people
solve difficult problems and by building democratic AI grounded in common-sense
rules that protect people from real harms.
Since we began our public threat reporting in February 2024
[/index/disrupting-malicious-uses-of-ai-by-state-affiliated-threat-actors/],
we’ve disrupted and reported over 40 networks that violated our usage policies.
This includes preventing uses of AI by authoritarian regimes to control
populations or coerce other states, as well as abuses like scams, malicious
cyber activity, and covert influence operations.
In this update, we share case studies from the past quarter and how we’re
detecting and disrupting malicious use of our models. We continue to see threat
actors bolt AI onto old playbooks to move faster, not gain novel offensive
capability from our models. When activity violates our policies, we ban accounts
and, where appropriate, share insights with partners. Our public reporting,
policy enforcement, and collaboration with peers aim to raise awareness of abuse
while improving protections for everyday users.
- Read the full report
[https://cdn.openai.com/threat-intelligence-reports/7d662b68-952f-4dfd-a2f2-fe55b041cc4a/disrupting-malicious-uses-of-ai-october-2025.pdf]