
How AI is turning the Iran conflict into theater

Posted by qimuai · Reads: 4 · First-hand translation



Source: https://www.technologyreview.com/2026/03/09/1134063/how-ai-is-turning-the-iran-conflict-into-theater/

Summary:

AI is turning the Iran conflict into an information "theater." Dozens of AI-powered open-source-intelligence dashboards have appeared recently, combining satellite imagery, ship tracking, and other open data with chat functions, news feeds, and links to prediction markets, while claiming to deliver "the truth faster than traditional media." An investigation finds, however, that as these tools reshape how wars are observed, they also introduce new information risks.

Most of the dashboards were assembled quickly with AI tools, and one even drew the attention of a founder of Palantir. Through prediction markets they invite users to bet on military events, such as who Iran's next supreme leader will be or where the US will strike next, turning war-watching into gambling-tinged entertainment. Developers claim AI can break the intelligence monopoly and democratize information, but experts counter that these uncurated data streams lack professional analysis and historical context, and instead create an illusion of being on top of events.

Worse still, the same AI technology is fueling disinformation. The Financial Times found multiple fabricated satellite images circulating during the Iran conflict; such plausible-looking fake evidence further erodes the public's ability to judge critical information. Digital investigations expert Craig Silverman warns that when floods of data are interwoven with betting mechanics and fake content, the result is not a clearer picture of the war but an information maelstrom that makes the truth harder to discern.

This AI-fueled wartime information spectacle shows how, without professional verification and ethical guardrails, technological empowerment can distort how information flows. When war becomes entertainment to be bet on in real time, and fake images scroll alongside real data, what the public receives may be less a democratization of intelligence than a modern theater of war re-edited by algorithms.


English source:

How AI is turning the Iran conflict into theater
AI-enabled dashboards, combined with prediction markets and fake imagery, are reshaping how war is observed.
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
“Anyone wanna host a get together in SF and pull this up on a 100 inch TV?”
The author of that post on X was referring to an online intelligence dashboard following the US-Israel strikes against Iran in real time. Built by two people from the venture capital firm Andreessen Horowitz, it combines open-source data like satellite imagery and ship tracking with a chat function, news feeds, and links to prediction markets, where people can bet on things like who Iran’s next “supreme leader” will be (the recent selection of Mojtaba Khamenei left some bettors with a payout).
I’ve reviewed over a dozen other dashboards like this in the last week. Many were apparently “vibe-coded” in a couple of days with the help of AI tools, including one that got the attention of a founder of the intelligence giant Palantir, the platform through which the US military is accessing AI models like Claude during the war. Some were built before the conflict in Iran, but nearly all of them are being advertised by their creators as a way to beat the slow and ineffective media by getting straight to the truth of what’s happening on the ground. “Just learned more in 30 seconds watching this map than reading or watching any major news network,” one commenter wrote on LinkedIn, responding to a visualization of Iran’s airspace being shut down before the strikes.
Much of the spotlight on AI and the Iran conflict has rightfully been on the role that models like Claude might be playing in helping the US military make decisions about where to strike. But these intelligence dashboards and the ecosystem surrounding them reflect a new role that AI is playing in wartime: mediating information, often for the worse.
There’s a confluence of factors at play. AI coding tools mean people don’t need much technical skill to assemble open-source intelligence anymore, and chatbots can offer fast, if dubious, analysis of it. The rise in fake content leaves observers of the war wanting the sort of raw, accurate analysis normally accessible only to intelligence agencies. Demand for these dashboards is also driven by real-time prediction markets that promise financial rewards to anyone sufficiently informed. And the fact that the US military is using Anthropic’s Claude in the conflict (despite its designation as a supply chain risk) has signaled to observers that AI is the intelligence tool the pros use. Together, these trends are creating a new kind of AI-enabled wartime circus that can distort the flow of information as much as it clarifies it.
As a journalist, I believe these sorts of intelligence tools have a lot of promise. While many of us know that real-time data on shipping routes or power outages exist, it’s a powerful thing to actually see it all assembled in one place (though using it to watch a war unfold while you munch on popcorn and place bets turns the war into perverse entertainment). But there are real reasons to think that these sorts of raw data feeds are not as informative as they may feel.
Craig Silverman, a digital investigations expert who teaches investigative techniques, has been keeping a log of these dashboards (he’s up to 20). “The concern,” he says, “is there’s an illusion of being on top of things and being in control, where all you’re really doing is just pulling in a ton of signals and not necessarily understanding what you’re seeing, or being able to pull out true insights from it.”
One problem has to do with the quality of the information. Many dashboards feature “intel feeds” with AI-generated summaries of complex, ever-changing news events. These can introduce inaccuracies. By design, the data is not especially curated. Instead, the feeds just display everything at once, with a map of strike locations in Iran next to the prices of obscure cryptocurrencies.
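The "display everything at once" design is easy to sketch. The toy aggregator below (all feed names and items are hypothetical, invented for illustration) interleaves heterogeneous sources purely by timestamp, which is roughly why a strike map can end up scrolling next to a crypto ticker: nothing in the pipeline ranks, filters, or verifies.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class FeedItem:
    source: str       # hypothetical feed names, e.g. "strike-map", "crypto-ticker"
    headline: str
    timestamp: datetime

def merge_feeds(*feeds):
    """Interleave heterogeneous feeds by timestamp alone: no curation,
    no source weighting, no fact-checking. Everything scrolls together."""
    items = [item for feed in feeds for item in feed]
    return sorted(items, key=lambda item: item.timestamp, reverse=True)

# Invented sample items, purely for illustration.
strike_map = [FeedItem("strike-map", "Unverified report of strike near Natanz",
                       datetime(2026, 3, 8, 14, 2, tzinfo=timezone.utc))]
crypto = [FeedItem("crypto-ticker", "OBSCURECOIN +12%",
                   datetime(2026, 3, 8, 14, 3, tzinfo=timezone.utc))]

for item in merge_feeds(strike_map, crypto):
    print(f"[{item.source}] {item.headline}")
```

The point of the sketch is what is missing: an intelligence agency's pipeline would attach analysts, provenance checks, and historical context between ingestion and display, and none of that fits in a timestamp sort.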
Intelligence agencies, on the other hand, pair data feeds with people who can offer expertise and historical context. They also, of course, have access to proprietary information that doesn’t show up on the open web.
The implicit promise from the people building and selling this sort of information pipeline about the Iran conflict is that AI can be a great democratizing force. There’s a secret feed of information that only the elites have had access to, the thinking goes, but now AI can bring it to everyone to do with what they wish, whether that’s simply to be more informed or to make bets on nuclear strikes. But an abundance of information, which AI is undeniably good at assembling, does not come with the accuracy or context required for real understanding. Intelligence agencies do this in-house; good journalism does the same work for the rest of us.
It is, by the way, hard to overstate the connection this all has with betting markets. The dashboard created by the pair at Andreessen Horowitz has a scrolling list of bets being made on the prediction platform Kalshi (which Andreessen Horowitz has invested in). Other dashboards link to Polymarket, offering bets on whether the US will strike Iraq or when Iran’s internet will return.
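For readers unfamiliar with how such markets are read: on a binary market of the kind Kalshi and Polymarket run, a YES contract pays a fixed amount (say $1) if the event occurs, and its trading price is conventionally interpreted as the market's implied probability of that event. A minimal sketch of that conversion (the function name is mine, not an API of either platform):

```python
def implied_probability(yes_price_cents: float) -> float:
    """A YES contract that pays 100 cents if the event occurs and trades
    at `yes_price_cents` is commonly read as the market assigning the
    event a probability of price/100."""
    if not 0 <= yes_price_cents <= 100:
        raise ValueError("price must be in cents, between 0 and 100")
    return yes_price_cents / 100.0

# A contract trading at 37 cents implies a ~37% chance.
print(implied_probability(37))
```

This reading ignores fees, spreads, and thin liquidity, which is part of why betting odds on a dashboard can look more authoritative than they are.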
AI has also long made it cheaper and easier to spread fake content, and that problem is on full display during the Iran conflict: last week the Financial Times found a slew of AI-generated satellite imagery spreading online.
“The emergence of manipulated or outright fake satellite imagery is really concerning,” Silverman says. The average person tends to see such imagery as very trustworthy. The spread of such fakes could erode confidence in one of the most important pieces of evidence used to show what’s actually happening in the war.
The result is an ocean of AI-enabled content—dashboards, betting markets, photos both real and fake—that makes this war harder, not easier, to comprehend.
Deep Dive
Artificial intelligence
A “QuitGPT” campaign is urging people to cancel their ChatGPT subscriptions
Backlash against ICE is fueling a broader movement against AI companies’ ties to President Trump.
Moltbook was peak AI theater
The viral social network for bots reveals more about our own current mania for AI than it does about the future of agents.
Meet the new biologists treating LLMs like aliens
By studying large language models as if they were living things instead of computer programs, scientists are discovering some of their secrets for the first time.
Yann LeCun’s new venture is a contrarian bet against large language models
In an exclusive interview, the AI pioneer shares his plans for his new Paris-based company, AMI Labs.
