
The Era of AI-Generated Ransomware Has Arrived

Published by qimuai · First-hand translation



Source: https://www.wired.com/story/the-era-of-ai-generated-ransomware-has-arrived/

Summary:

As cybercrime surges worldwide, new research shows that ransomware is evolving faster because of widely available generative AI tools. Security researchers have found that attackers are not only using AI to draft more intimidating and coercive ransom notes, but are increasingly relying on the technology to develop the malware itself and to build ransomware-as-a-service (RaaS) operations.

A new threat intelligence report from the generative AI company Anthropic shows that ransomware criminals have used its large language model Claude and its coding-focused model Claude Code in malware development. Meanwhile, the security firm ESET this week disclosed the first proof of concept (PoC) for ransomware driven entirely by a locally run large language model. Together, the two findings show that generative AI is lowering the technical bar for cybercrime, allowing attackers without specialized skills to mount highly damaging attacks.

Anthropic's threat intelligence team writes: "Our investigation revealed not merely another ransomware variant, but a transformation enabled by artificial intelligence that removes traditional technical barriers to novel malware development." By some estimates, ransomware attacks hit record highs at the start of 2025, and criminals continue to make hundreds of millions of dollars per year. As former US National Security Agency director Paul Nakasone put it recently at the Defcon security conference: "We are not making progress against ransomware."

The research shows that a UK-based cybercriminal group tracked as GTG-5004 used Claude to develop ransomware with advanced evasion capabilities, selling service packages on cybercrime forums for $400 to $1,200. Notably, the group has limited technical skill: core features such as its encryption algorithms and anti-analysis techniques depend entirely on AI. Anthropic says it has banned the associated accounts and introduced new detection measures, including YARA rules, to prevent abuse of its platform.

Although Allan Liska, an analyst at Recorded Future, believes most ransomware groups have not yet adopted AI at scale, the AI-powered ransomware ESET discovered, dubbed "PromptLock," confirms the direction of travel. The malware can generate malicious Lua scripts on the fly, combining file reconnaissance, data theft, and encryption deployment in a single workflow. The researchers stress that while compute constraints remain, cybercriminals will almost certainly keep working to overcome such technical bottlenecks.

More worrying still, another group Anthropic tracks, GTG-2002, has used Claude Code to automate the entire attack chain, from target selection and network intrusion through data theft and analysis to automatically generating the ransom note. In the past month the gang has hit at least 17 organizations across government, health care, emergency services, and religious institutions.

Anthropic's researchers warn that the operation "demonstrates a concerning evolution in AI-assisted cybercrime," with AI serving "as both a technical consultant and active operator," enabling complex attacks that would otherwise demand a high level of skill. As AI becomes more deeply entwined with cybercrime, defenders worldwide face an unprecedented challenge.


English source:

As cybercrime surges around the world, new research increasingly shows that ransomware is evolving as a result of widely available generative AI tools. In some cases, attackers are using AI to draft more intimidating and coercive ransom notes and conduct more effective extortion attacks. But cybercriminals’ use of generative AI is rapidly becoming more sophisticated. Researchers from the generative AI company Anthropic today revealed that attackers are leaning on generative AI more heavily—sometimes entirely—to develop actual malware and offer ransomware services to other cybercriminals.
Ransomware criminals have recently been identified using Anthropic’s large language model Claude and its coding-specific model, Claude Code, in the ransomware development process, according to the company’s newly released threat intelligence report. Anthropic’s findings add to separate research this week from the security firm ESET that highlights an apparent proof of concept for a type of ransomware attack executed entirely by local LLMs running on a malicious server.
Taken together, the two sets of findings highlight how generative AI is pushing cybercrime forward and making it easier for attackers—even those who don’t have technical skills or ransomware experience—to execute such attacks. “Our investigation revealed not merely another ransomware variant, but a transformation enabled by artificial intelligence that removes traditional technical barriers to novel malware development,” researchers from Anthropic’s threat intelligence team wrote.
Over the last decade, ransomware has proven an intractable problem. Attackers have become increasingly ruthless and innovative so victims will keep paying out. By some estimates, the number of ransomware attacks hit record highs at the start of 2025, and criminals continue to make hundreds of millions of dollars per year. As former US National Security Agency and Cyber Command chief Paul Nakasone put it at the Defcon security conference in Las Vegas earlier this month: “We are not making progress against ransomware.”
Adding AI into the already hazardous ransomware cocktail only increases what hackers may be able to do. According to Anthropic’s research, a cybercriminal threat actor based in the United Kingdom, which is tracked as GTG-5004 and has been active since the start of this year, used Claude to “develop, market, and distribute ransomware with advanced evasion capabilities.”
On cybercrime forums, GTG-5004 has been selling ransomware services ranging from $400 to $1,200, with different tools being provided for different package levels, according to Anthropic’s research. The company says that while GTG-5004’s products include a range of encryption capabilities, different software reliability tools, and methods designed to help the hackers avoid detection, it appears the developer is not technically skilled. “This operator does not appear capable of implementing encryption algorithms, anti-analysis techniques, or Windows internals manipulation without Claude’s assistance,” the researchers write.
Anthropic says it banned the account linked to the ransomware operation and introduced “new methods” for detecting and preventing malware generation on its platforms. These include using pattern detection known as YARA rules to look for malware and malware hashes that may be uploaded to its platforms.
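To make that detection approach concrete, below is a minimal sketch of what combined hash- and pattern-based screening with YARA rules can look like, written in Python against the open-source yara-python bindings. The rule, its marker strings, and the hash blocklist are hypothetical illustrations for this article, not Anthropic's actual detection logic.

import hashlib

import yara  # open-source YARA bindings for Python: pip install yara-python

# Hypothetical rule for illustration: flag files containing strings often
# left behind by ransomware builds. Real rules would be far more specific.
RULE_SOURCE = r"""
rule suspected_ransomware_markers
{
    strings:
        $note = "YOUR FILES HAVE BEEN ENCRYPTED" nocase
        $ext  = ".locked" ascii
    condition:
        any of them
}
"""

# Hypothetical blocklist of SHA-256 digests of known malware samples
# (the placeholder below is the well-known digest of the empty string).
KNOWN_MALWARE_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

rules = yara.compile(source=RULE_SOURCE)

def screen_upload(data: bytes) -> bool:
    """Return True if an uploaded file should be blocked."""
    # Hash screening: exact match against known-bad digests.
    if hashlib.sha256(data).hexdigest() in KNOWN_MALWARE_HASHES:
        return True
    # Pattern screening: run the compiled YARA rules over the raw bytes.
    return bool(rules.match(data=data))

print(screen_upload(b"demo ... YOUR FILES HAVE BEEN ENCRYPTED ..."))  # True

Beyond the literal strings shown here, YARA rules can also match hex byte patterns and regular expressions, which is what makes them a common building block for the kind of platform-side malware screening Anthropic describes.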
While such activity so far does not appear to be the norm across the ransomware ecosystem, the findings represent a stark warning.
“There are definitely some groups that are using AI to aid with the development of ransomware and malware modules, but as far as Recorded Future can tell, most aren’t,” says Allan Liska, an analyst for the security firm Recorded Future who specializes in ransomware. “Where we do see more AI being used widely is in initial access.”
Separately, researchers at the cybersecurity company ESET this week claimed to have discovered the “first known AI-powered ransomware,” dubbed PromptLock. The researchers say the malware, which largely runs locally on a machine and uses an open source AI model from OpenAI, can “generate malicious Lua scripts on the fly” and uses these to inspect files the hackers may be targeting, steal data, and deploy encryption. ESET believes the code is a proof-of-concept that has seemingly not been deployed against victims, but the researchers emphasize that it illustrates how cybercriminals are starting to use LLMs as part of their toolsets.
“Deploying AI-assisted ransomware presents certain challenges, primarily due to the large size of AI models and their high computational requirements. However, it’s possible that cybercriminals will find ways to bypass these limitations,” ESET malware researchers Anton Cherepanov and Peter Strycek, who discovered the new ransomware, wrote in an email to WIRED. “As for development, it is almost certain that threat actors are actively exploring this area, and we are likely to see more attempts to create increasingly sophisticated threats.”
Although PromptLock hasn’t been used in the real world, Anthropic’s findings further underscore the speed with which cybercriminals are moving to building LLMs into their operations and infrastructure. The AI company also spotted another cybercriminal group, which it tracks as GTG-2002, using Claude Code to automatically find targets to attack, get access into victim networks, develop malware, and then exfiltrate data, analyze what had been stolen, and develop a ransom note.
In the last month, this attack impacted “at least” 17 organizations in government, health care, emergency services, and religious institutions, Anthropic says, without naming any of the organizations impacted. “The operation demonstrates a concerning evolution in AI-assisted cybercrime,” Anthropic’s researchers wrote in their report, “where AI serves as both a technical consultant and active operator, enabling attacks that would be more difficult and time-consuming for individual actors to execute manually.”
