Defense official reveals how AI chatbots could be used for targeting decisions

Summary:
According to a Defense Department official with knowledge of the matter, the US military is exploring the use of generative AI chatbots in its target-selection process. Such a system could analyze and prioritize lists of potential targets and recommend to human operators which to strike first, with final decisions still subject to human review.
The development marks a new "interpretive layer" added to the targeting workflow on top of the military's existing big data initiative Maven. Maven has long relied on older AI techniques, notably computer vision, to process vast quantities of reconnaissance data; introducing generative AI lets operators pull up analysis through natural-language interaction, accelerating the targeting process.
To date, the Pentagon has approved only a handful of generative AI models for classified settings. Beyond Anthropic's Claude, which was approved earlier, OpenAI and xAI have both recently reached agreements with the Pentagon allowing their ChatGPT and Grok models to be deployed in classified environments. The military stresses, however, that these systems are assistive tools only, with humans retaining final decision-making authority.
The technology's military applications are drawing wide scrutiny. A recent US strike on a girls school in Iran killed more than a hundred children, and multiple news outlets have suggested the operation may have involved AI-assisted targeting. Although the Pentagon says the incident remains under investigation, and a preliminary report attributed it partly to outdated targeting data, the disclosure has deepened public concern over the ethical risks of autonomous weapons systems.
Notably, the "black box" character of generative AI makes its outputs easier to access but harder to verify. While the military acknowledges the system can sharply cut target-processing time, it has not said whether mandatory human review would offset that speed gain. As AI spreads rapidly through the military domain, balancing technological innovation against accountability has become a pressing global question.
Though the US military's big data initiative Maven has sped up the planning of strikes for years, the comments suggest generative AI is now adding a new interpretative layer to such deliberations.
The US military might use generative AI systems to rank lists of targets and make recommendations about which to strike first, which would then be vetted by humans, according to a Defense official with knowledge of the matter. The disclosure about how the military may use AI chatbots comes as the Pentagon faces scrutiny over a strike on an Iranian school, which it is still investigating.
A list of possible targets might be fed into a generative AI system that the Pentagon is fielding for classified settings. Then, said the official, who requested to speak on background with MIT Technology Review to discuss sensitive topics, humans might ask the system to analyze the information and rank which targets are a priority, while accounting for factors like where aircraft are currently located. Humans would then be responsible for checking and evaluating the results and recommendations. OpenAI’s ChatGPT and xAI’s Grok could, in theory, be the models used for this type of scenario in the future, as both companies recently reached agreements for their models to be used by the Pentagon in classified settings.
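The workflow the official describes — a list goes in, the model proposes an ordering based on stated factors, and a human approves or rejects each recommendation — can be sketched generically. Everything below (the data shape, the prompt wording, the review gate, the stand-in model) is a hypothetical illustration of a human-in-the-loop ranking pattern, not any actual Pentagon system:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    # Hypothetical fields; a real system would carry far richer metadata.
    name: str
    aircraft_distance_km: float

def build_ranking_prompt(candidates: list[Candidate]) -> str:
    """Assemble a natural-language request asking a model to propose an
    ordering only -- the prompt never delegates the strike decision."""
    lines = [f"- {c.name}: {c.aircraft_distance_km} km from nearest aircraft"
             for c in candidates]
    return ("Rank these candidates by priority, weighing aircraft proximity. "
            "Return names only, one per line:\n" + "\n".join(lines))

def review(ranked_names: list[str], approve) -> list[str]:
    """Human-in-the-loop gate: only items an operator explicitly approves
    survive; the model's ordering is purely advisory."""
    return [n for n in ranked_names if approve(n)]

def stub_model(candidates: list[Candidate]) -> list[str]:
    # Stand-in for a chat-model call; a real deployment would send
    # build_ranking_prompt(...) to an LLM API and parse the reply.
    return [c.name for c in sorted(candidates,
                                   key=lambda c: c.aircraft_distance_km)]

candidates = [Candidate("B", 120.0), Candidate("A", 40.0)]
prompt = build_ranking_prompt(candidates)
ranked = stub_model(candidates)                # ["A", "B"]
approved = review(ranked, lambda n: n == "A")  # ["A"]
```

The point of the structure is that the model only ever produces a suggestion; the `review` gate is where accountability sits, which is also where the unanswered question about added review time (discussed below) arises.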
The official described this as an example use case of how things might work, but would not confirm or deny whether it represents how AI systems are currently being used.
Other outlets have reported that Anthropic’s Claude has been integrated into existing military AI systems and used in operations in Iran and Venezuela, but the official’s comments add insight into the specific role chatbots may play, particularly in accelerating the search for targets. They also shed light on how the military is deploying two different AI technologies, each with distinct limitations.
Since at least 2017, the US military has been working on a "big data" initiative called Maven. It uses older types of AI, particularly computer vision, to analyze the oceans of data and imagery collected by the Pentagon. Maven might take thousands of hours of aerial drone footage, for example, and algorithmically identify targets. A 2024 report from Georgetown showed soldiers using the system to select targets and vet them, which sped up the process to get approval for these targets. Soldiers interacted with Maven through an interface with a battlefield map and dashboard, which might highlight potential targets in one color and friendly forces in another.
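The Maven-style pipeline described here — a vision model scans footage frame by frame and the dashboard color-codes what it finds — can be illustrated with a minimal sketch. The detector stub, labels, and color scheme below are all invented for illustration and reflect nothing about Maven's actual implementation:

```python
# Invented display scheme: potential targets in one color, friendlies in another.
COLORS = {"potential_target": "red", "friendly": "blue"}

def toy_detector(frame):
    # Stand-in for a computer-vision model; a real one would ingest pixels.
    # Here a "frame" is just a list of pre-labeled objects.
    return [(obj["label"], obj["pos"]) for obj in frame]

def scan_footage(frames):
    """Run the detector over every frame and tag each detection with the
    color the dashboard would use to draw it on the battlefield map."""
    detections = []
    for t, frame in enumerate(frames):
        for label, pos in toy_detector(frame):
            detections.append({"time": t, "label": label, "pos": pos,
                               "color": COLORS.get(label, "gray")})
    return detections

frames = [[{"label": "potential_target", "pos": (3, 7)}],
          [{"label": "friendly", "pos": (1, 2)}]]
hits = scan_footage(frames)
```

The contrast with the chatbot layer is visible even in this toy: every detection keeps its frame time and map position, so a human can trace any highlighted item back to the imagery it came from.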
Now, the official's comments suggest that generative AI is being added as a conversational, chatbot layer—one which the military would use to more quickly find and analyze the data as it makes decisions like which targets to prioritize.
Generative AI systems, like those that underpin ChatGPT, Claude, and Grok, are a fundamentally different technology than the AI that has primarily powered Maven. Built on large language models, their use in war is much more recent and less battle-tested. And while the old interface of Maven forced users to directly inspect and interpret data on the map, the outputs given by generative AI models are easier to access but harder to verify.
The use of generative AI for such decisions is reducing the time required in the targeting process, the official added, but did not provide detail when asked how much additional speed is possible if humans are required to spend time double checking a model’s outputs.
The use of military AI systems is under increased public scrutiny following the recent strike on a girls school in Iran in which more than one hundred children died. Multiple news outlets have reported the strike was from a US missile, though the Pentagon has said it is still under investigation. And while the Washington Post has reported that Claude and Maven have been involved in targeting decisions in Iran, there is no evidence yet to explain what role generative AI systems played, if any. The New York Times reported on Wednesday that a preliminary investigation found outdated targeting data to be partly responsible for the strike.
The Pentagon has been ramping up its use of AI across operations in recent months. It started offering non-classified use of generative AI models, like for analyzing contracts or writing presentations, to millions of service members back in December through an effort called GenAI.mil. But only those few generative AI models have been approved by the Pentagon for classified use.
The first was Anthropic’s Claude, which in addition to its use in Iran was reportedly used in the operations to capture Venezuelan leader Nicolas Maduro in January. But following recent disagreements between the Pentagon and Anthropic over whether Anthropic could restrict the military’s use of its AI, the Defense Department designated it a supply chain risk, and President Trump demanded on social media that the government stop using its AI products within six months. Anthropic is fighting the designation in court.
OpenAI announced an agreement on February 28 for the military to use its technologies in classified settings. Elon Musk's company xAI has also reached a deal for the Pentagon to use its model Grok in such settings. OpenAI has said its agreement with the Pentagon came with limitations, though the practical effectiveness of those limitations is not clear.
If you have information about the military’s use of AI, you can share it securely via Signal (username jamesodonnell.22).