《恐怖谷》:人工智能时代的伊朗战争、预测市场伦理,以及派拉蒙如何击败网飞

内容总结:
本周,美国与伊朗的紧张局势持续升级,人工智能行业与虚假信息战意外成为冲突的核心焦点。在美军与以色列对伊朗发动协同打击后,社交平台X(原推特)上迅速涌现大量虚假信息,包括AI生成的图像、游戏画面冒充真实战况等,这些内容传播极广且缺乏有效核查,凸显了平台内容审核机制在突发新闻中的失灵。与此同时,人工智能公司与美国国防部的合作引发巨大争议:OpenAI虽与国防部达成协议,却因舆论压力迅速改口;Anthropic则因试图在合同中限制AI用于监控及自主武器而遭军方公开批评。这场博弈不仅牵涉技术伦理,也影响着科技公司的人才争夺与公共形象。
另一方面,预测市场平台在重大事件中暴露出的伦理与监管问题日益尖锐。随着中东局势动荡,Polymarket等平台上出现了针对伊朗领导人命运的高额投注,金额高达数千万美元,被批评者指责为“将人命视作赌注”。同时,多家科技公司内部员工涉嫌利用内幕信息在预测市场交易,例如OpenAI已因此开除员工,但平台自身的监管措施仍显薄弱。值得注意的是,特朗普家族及其关联资本正深度布局预测市场领域,这引发了公众对政策可能被投机利益裹挟的担忧。
传媒行业亦迎来震荡。派拉蒙以高达1100亿美元的价格击败Netflix,成功竞购华纳兄弟,这意味着传媒大亨拉里·埃里森及其家族将掌控包括CNN、HBO、DC漫画等在内的庞大媒体与IP帝国。业内普遍担忧,这笔交易可能导致旗下新闻机构面临裁员与编辑立场转向,进一步加剧媒体行业的整合与不确定性。
展望未来,地缘冲突可能影响美国国内政治走向,甚至被用作选举筹码;开源AI模型的快速发展可能对头部AI公司构成威胁;而预测市场的监管缺失恐将催生更为极端的投机平台。在技术、信息与资本交织的当下,这些动态正悄然塑造着冲突、商业与社会的未来图景。
中文翻译:
本周,团队深入探讨了虚假信息与人工智能行业博弈如何迅速成为美伊持续冲突的核心议题。他们同时剖析了Polymarket和Kalshi等预测市场为何日益面临内幕交易指控与伦理质疑。此外,派拉蒙为何能在竞购华纳兄弟的角逐中击败网飞?节目中,主持人佐伊·希弗、布莱恩·巴雷特与莉亚·费格还分享了他们对未来的预测。
本期提及文章:
- 《美以袭击伊朗后,X平台深陷虚假信息泥潭》
- 《断网之下:记者如何在伊朗开展报道》
- 《遭美国军方列为"供应链风险",Anthropic强势回应》
- 《特朗普政府前高官对预测市场展开追责》
- 《若派拉蒙收购华纳兄弟,拉里与大卫·埃里森将掌控的庞大版图》
您可以在Bluesky平台关注布莱恩·巴雷特(@brbarrett)、佐伊·希弗(@zoeschiffer)和莉亚·费格(@leahfeiger)。欢迎来信至uncannyvalley@wired.com。
收听方式
您可通过本页音频播放器收听本期节目,若想免费订阅获取每期更新,请参考以下方式:
使用iPhone或iPad的用户可打开"播客"应用,或直接点击此链接。您也可下载Overcast、Pocket Casts等应用,搜索"uncanny valley"订阅。节目亦在Spotify同步更新。
文字记录
注:本文为自动生成的文字稿,可能存在误差。
布莱恩·巴雷特:大家好,我是布莱恩。过去几周,佐伊、莉亚和我非常荣幸担任节目新任主持人,我们期待听到您的反馈。若您喜欢本期节目,请在您常用的播客平台为我们留下评分,这将帮助节目触达更多听众。如有任何问题或建议,欢迎随时发送邮件至uncannyvalley@WIRED.com。感谢收听,节目现在开始。
今天能和莉亚·费格在纽约进行线下录制,我特别兴奋。佐伊,你还在屏幕那端呢。
佐伊·希弗:是啊。布莱恩,要不要说说你来纽约的原因?
布莱恩·巴雷特:当然要隆重宣布。昨晚莉亚在"头版新闻奖"颁奖礼上荣获"年度记者"称号。
佐伊·希弗:没错,她拿到了实体奖杯。
布莱恩·巴雷特:不仅如此——佐伊你当时不在现场,但我亲眼见证了——我们总编辑凯蒂·德拉蒙德亲自为莉亚致辞,播放了回顾她去年成就的视频,莉亚也发表了动人的获奖感言。
佐伊·希弗:太棒了。
莉亚·费格:真的很荣幸。需要说明的是,这个奖项属于整个WIRED团队。我在获奖视频中完全没有提及个人,因为这份荣誉属于WIRED,属于我们所有人。
布莱恩·巴雷特:但功劳首先归于莉亚,这是你应得的。
佐伊·希弗:欢迎收听WIRED《恐怖谷》播客。我是商业与产业板块总监佐伊·希弗。
布莱恩·巴雷特:我是执行主编布莱恩·巴雷特。
莉亚·费格:我是政治新闻高级编辑莉亚·费格。
佐伊·希弗:本周我们将聚焦中东持续冲突,尤其是人工智能行业如何深度介入美国国防部体系。同时探讨预测市场的现状,以及派拉蒙与华纳兄弟可能达成的历史性并购。
莉亚·费格:首先关注伊朗局势。自上周六美以发动协同军事打击以来,事件持续发酵。伊朗已对美军基地及海湾多国目标实施反击,局势升级速度惊人。
资料音频:伊朗最高领袖阿亚图拉·阿里·哈梅内伊在今日美以联合袭击中身亡。
资料音频:伊朗官员称周六空袭击中一所小学,造成逾160人死亡,其中多为儿童。
资料音频:美国驻中东各使馆正通知公民就地避难——
莉亚·费格:虽然周末我们都在跟进报道,但虚假信息迅速成为冲突核心仍令我震惊。WIRED核查了X平台上数百条帖文,其中部分获得数千万浏览量,散布关于袭击地点与规模的误导性信息。我们的同事大卫·吉尔伯特报道了若干典型案例,其形式五花八门:包括AI生成图像、伪装成真实影像的游戏画面、国家间张冠李戴等。在我看来,这既是虚假信息泛滥的体现,也源于局势本身的混乱。
布莱恩·巴雷特:确实。相比虚假信息本身,更令我讶异的是平台缺乏整治紧迫性——虽然这或许也不意外。每次重大事件发生,几乎都会出现游戏画面冒充实况的套路——
莉亚·费格:简直成了固定剧本。
布莱恩·巴雷特:没错。X平台裁撤了大部分公共安全团队,仅通过"社区注释"功能对部分帖文添加标注。但等到注释生效,帖文早已获得400万次浏览。况且注释位于帖文下方,用户早已接收错误信息,这种机制根本无法阻止虚假内容传播。
佐伊·希弗:这是多年产品与政策决策积累的恶果。当平台对记者群体充满敌意,裁撤事实核查与内容审核团队,依赖在突发新闻中严重失效的"社区注释",又通过流量激励催生脱离现实的煽动性言论时,现状便不可避免。我们应当持续报道此类现象,但不必再为此感到惊讶。
莉亚·费格:完全正确。WIRED关于伊朗记者、活动人士及普通民众如何突破封锁获取并传递信息的报道令人震撼。这让我思考:虚假信息的受众究竟是谁?是为了制造混乱、混淆视听吗?无论如何,X平台已成信息沼泽。在连网络接入都极度受限(当前伊朗网络连通率仅4%)的情况下,要核实数据与事实本就困难重重。
布莱恩·巴雷特:我认为部分动机源于经济利益:许多虚假信息来自付费蓝V账号,这些账号可通过内容变现。虽然"追逐点击量"的说法常被用于指责媒体,但在此情境下确实适用。你的观点很关键:记者正在撤离X平台,但政客仍活跃于此。已有数起知名政客将虚假信息当作事实评论的案例,这直接影响舆论走向。在当下这场未经国会授权的战争中,舆论关乎冲突是否会升级为更大灾难。
莉亚·费格:国防部长皮特·海格塞斯的近期言论暗示冲突不会很快结束。据CNN报道,中东战事已造成超千人死亡,其中包括多名美军人员。而WIRED关注的贸易、石油、数据中心等核心议题都将受此波及。伊朗问题显然将持续占据舆论中心。
布莱恩·巴雷特:我们的气候专栏作家莫莉·塔夫特曾撰文指出油价气价飙升现象——霍尔木兹海峡虽未正式关闭,但伊朗军方警告已使其实际停运。这产生连锁反应:化肥价格飞涨(中东供应全球大量化肥),而当下正值美国春耕施肥季。
莉亚·费格:影响层面之多令人咋舌。CNN国际团队每分钟都在更新惊人细节:不仅是军事打击,还有黎巴嫩出现18小时交通拥堵等具体困境。当旅行博主开始讨论"今夏能否前往欧洲",当"第三次世界大战"成为社交热词,甚至引发《欲望都市》台词改编潮时,可见危机已快速渗入公共语境。
佐伊·希弗:必须探讨人工智能维度。冲突爆发前夕,美国国防部刚与顶尖AI公司达成(又可能撕毁)协议。上周五,OpenAI与国防部签约之际,Anthropic正因技术使用限制条款(包括禁止监控美国公民、禁止开发全自主武器)与该部门僵持。周六伊朗遇袭当天,萨姆·奥尔特曼在X平台发起问答,承认与五角大楼的协议仓促,观感不佳,但辩称此举旨在缓和AI行业与军方的紧张关系。
莉亚·费格:上周的讨论如今显得极具预见性。袭击前我们曾说"情况可能很糟",现在局势更恶劣。奥尔特曼的问答很有意思:Anthropic再次占据道德高地,在人才争夺战中塑造理性AI公司形象;而OpenAI显得笨拙,缺乏稳固价值观。这对吸引顶尖研究人员至关重要。
布莱恩·巴雷特:公众认知很有趣:Anthropic被贴上"觉醒AI"标签,但其技术已大量用于初期打击,且合同设有六个月的逐步退出期,目前仍在使用。我们或许高估了其道德立场,他们只是要求"杀人时需有人类按下按钮"。
佐伊·希弗:Anthropic能签约得益于其模型搭载于亚马逊安全服务器(已获FedRAMP联邦合规认证),从而获准接入机密系统。OpenAI内部人士则反驳称,公司从小型实验室时期的全面禁止军事用途,逐步演变为讨论"是否允许五角大楼用ChatGPT汇总邮件",政策调整必然引发"背叛造福全人类初衷"的批评。
莉亚·费格:谷歌和OpenAI员工曾联署要求限制与军方合作。这类事件会影响企业招聘与人才留存吗?
佐伊·希弗:绝对会。许多来自学术界的理想主义研究者坚决反对军事应用,厌恶尖端研究沦为杀人工具。虽然所有前沿AI实验室都在争取政府合同,但大批人才对此深感厌恶。
莉亚·费格:他们会因此离职吗?
佐伊·希弗:已有人从OpenAI转投Anthropic。
布莱恩·巴雷特:这个市场很特殊:顶尖人才积累的财富已实现财务自由,他们完全有资本为理念跳槽。
佐伊·希弗:需要强调:这些人离职后仍能获得数百万年薪,选择空间极大。
布莱恩·巴雷特:回到奥尔特曼的言论:OpenAI从一开始就宣称其技术"比核武器更强大,将不可逆地改变社会",这样的表态注定会引来政府与军方的关注。就像宣称在自家车库造核弹的人,必然会接到国防部长皮特·海格塞斯的电话。
佐伊·希弗:必须说清楚,全自主武器系统早已存在,例如乌克兰战场上由更简单模型(而非大语言模型)驱动的小型无人机。对于更先进却更不可靠的系统,人们本应有所保留,但五角大楼如今要求承包商全权放行,并将其作为获得政府合同的前提。
莉亚·费格:五角大楼的反应堪称疯狂,他们视AI技术为国有财产。前特朗普政府官员埃米尔·迈克尔甚至在X平台咆哮——
佐伊·希弗:埃米尔,你该关闭领英的"动态可见"功能。
布莱恩·巴雷特:给收听节目的埃米尔提个醒。这位与科技界关系密切的官员正领导针对Anthropic的斗争,其公开宣战姿态值得警惕——这不仅是向Anthropic传递信号,更是警告任何质疑政策者。
布莱恩·巴雷特:聊聊预测市场吧。各位最近查看Kalshi或Polymarket账户了吗?
莉亚·费格:我还没押注《幸存者》冠军呢。
布莱恩·巴雷特:预测市场已渗透生活各领域——当然我指广义的"我们"。围绕伊朗战争的投注尤为活跃:Polymarket上"伊朗政权是否在6月30日前垮台"的赌注总额已达700万美元。
莉亚·费格:这令人不安。虽然股市运作也类似,但将人命如此游戏化实在过分。
布莱恩·巴雷特:本周争议焦点是Kalshi对5400万美元"伊朗最高领袖是否失势"赌局的裁决——该领袖被导弹炸死,但平台禁止死亡赌注,最终以"技术性失势"含糊结算。用户"MAGAMyMan"借此获利55.3万美元。无论是否涉及内幕交易,这都令人作呕。
莉亚·费格:"令人作呕"这个词很准确,这是将生命价值极端游戏化的丑陋视角。
佐伊·希弗:更轻量级的消息是:预测市场出现大量疑似企业内部数据相关的赌局,如GPT-5发布时间、奥尔特曼早年是否被罢免等。OpenAI上周以"利用机密信息参与外部预测市场"为由开除员工,但此类事件在各科技公司屡见不鲜。著名的"谷歌鲸鱼"匿名账户就通过谷歌相关事件获利超百万美元。
布莱恩·巴雷特:而且是多次操作。Kalshi最近处罚两名内幕交易者(一名竞选公职者、一名YouTube关联者),但这类象征性处罚收效甚微。
佐伊·希弗:目前只能指望平台与企业自查,因为特朗普政府对预测市场的态度比拜登政府宽松得多。
莉亚·费格:特朗普家族与预测市场的关联令人想起其加密货币投资。Truth Social平台计划推出"Truth Predict"预测市场,小唐纳德·特朗普已是Kalshi和Polymarket顾问,其风投公司投资了Polymarket。他们虽未直接参与伊朗局势赌博,却从中获利,并视此为扩张良机——而制定政策者正是其父亲。
布莱恩·巴雷特:内幕交易隐患在此更可怕:虽无证据表明有人利用全球事件交易,但既得利益者完全可能这么做。仅"政策可能为快速牟利制定"的观感就极具破坏性。
佐伊·希弗:插播一条商业新闻:派拉蒙即将以1100亿美元收购华纳兄弟,并承诺若监管未通过将支付70亿美元终止费。拉里和大卫·埃里森的媒体版图将进一步扩张。
莉亚·费格:他们已掌控CBS,此举将获得CNN。别忘了拉里·埃里森在TikTok持有大量股份。媒体正加速集中到特朗普盟友手中。
布莱恩·巴雷特:CBS和CNN记者们惶惶不安:巴里·韦斯会成为新老板吗?埃里森会开除所有批评过特朗普的CNN主播吗?两家机构采编团队高度重合,并购必然导致裁员。
佐伊·希弗:别忘了巴里·韦斯已在CBS主导了一场意识形态清洗,《纽约客》的一篇报道称之为"CBS的去复兴党化"。他们试图让这些新闻机构"去左倾化",结果却是大量员工失业,而这种趋势只会愈演愈烈。
莉亚·费格:派拉蒙本非首选买家,网飞原本几乎敲定交易。但沙特资金背景、埃里森家族支付能力等问题使交易悬而未决。
布莱恩·巴雷特:网飞退出或有几重原因:派拉蒙出价溢价过高;网飞向来注重投资回报;特朗普曾向网飞CEO施压;而且即便谈成,交易也可能卡在特朗普政府的监管审批环节。作为影迷,我对此感到难过。
莉亚·费格:所有话题最终都绕回特朗普:他的施压、家族涉足市场、无处不在的烙印正重塑美国商业与媒体格局。
布莱恩·巴雷特:列举埃里森家族将掌控的部分IP:CBS、CNN、HBO、DC漫画、哈利波特、星际迷航、乐一通,还有几十个你母亲爱看的有线频道。
莉亚·费格:他们可能要"去觉醒化"《星际迷航》。至于哈利波特,这个系列本就令人失望。
布莱恩·巴雷特:这让人想起《查理和巧克力工厂》的维露卡·索尔特——顺便说,这部电影版权也归华纳兄弟。
莉亚·费格:这下我们得支付百万版权费了。
布莱恩·巴雷特:广告之后的新环节"未来预言",我们将分享科技等领域预测。
莉亚·费格:本周推出"未来预言"环节,邀请大家预测近期趋势。谁先开始?
佐伊·希弗:我认为开源模型可能威胁OpenAI和Anthropic。当前的核心问题是"缩放定律"是否持续有效?虽然GPT-5未达预期,但模型仍在快速进化。然而,通过蒸馏,也就是用前沿模型的海量输出来训练新模型,已经能够复现类似Claude的能力。若能以极低成本通过DeepSeek等渠道获得类Claude的水平,开源模型将在几年内构成生存威胁。
布莱恩·巴雷特:Meta曾力推开源模型Llama以压制对手,但现已放弃。区别何在?
佐伊·希弗:关键在于是自主开发还是蒸馏现有模型——后者在算力成本与迭代速度上优势显著。
布莱恩·巴雷特:我的预测是:将出现允许对暴力犯罪甚至亲身实施犯罪下注的极端预测市场。因其注册于偏远离岸地区,监管将形同虚设。
莉亚·费格:这题材够拍电影了。
布莱恩·巴雷特:这就是我的剧本灵感。
莉亚·费格:我的预测比较沉重:伊朗冲突时间线模糊,而历史表明战时领导人更易连任。当前共和党中期选举形势不利,但战争状态(尤其若美军伤亡增加)可能成为维持权力的借口。
布莱恩·巴雷特:尽管战争目前在美国不得人心,但政府可能借此宣布紧急状态,推行压制反对票的措施。
莉亚·费格:战时政府将完全改变政治生态。从农民化肥需求到联邦政府运作,共和党必然会利用这一切。
布莱恩·巴雷特:结束前说点轻松的吧?
莉亚·费格:布莱恩能来纽约真好。
布莱恩·巴雷特:莉亚获奖也值得庆祝。本期节目到此结束,提及文章已附于简介。文字稿可在WIRED官网查阅讨论。《恐怖谷》由Kaleidoscope Content制作,本期由阿德里亚娜·塔皮亚制作,阿马尔·拉尔在Macro Sound完成混音,事实核查由马特·贾尔斯负责,纽约录音工程师普兰·班迪,数字制作高级经理金伯利·蔡,执行制片人凯特·奥斯本,全球编辑总监凯蒂·德拉蒙德。
英文来源:
This week, the team dives into why disinformation and the AI industry battles have quickly positioned themselves at the center of the ongoing conflict between the US and Iran. They also discuss how prediction markets like Polymarket and Kalshi are increasingly facing insider trading accusations and ethical questions. Also, how did Paramount beat Netflix in its bid for Warner Bros? Plus: Hosts Zoë Schiffer, Brian Barrett, and Leah Feiger share their predictions for the future.
Articles mentioned in this episode:
- X Is Drowning in Disinformation Following US and Israeli Attack on Iran
- How Journalists Are Reporting From Iran With No Internet
- Anthropic Hits Back After US Military Labels It a ‘Supply Chain Risk’
- A Former Top Trump Official Is Going After Prediction Markets
- Everything Larry and David Ellison Will Control If Paramount Buys Warner Bros.
You can follow Brian Barrett on Bluesky at @brbarrett, Zoë Schiffer on Bluesky at @zoeschiffer, and Leah Feiger on Bluesky at @leahfeiger. Write to us at uncannyvalley@wired.com.
How to Listen
You can always listen to this week's podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here's how:
If you're on an iPhone or iPad, open the app called Podcasts, or just tap this link. You can also download an app like Overcast or Pocket Casts and search for “uncanny valley.” We’re on Spotify too.
Transcript
Note: This is an automated transcript, which may contain errors.
Brian Barrett: Hey, it's Brian. Zoë, Leah, and I have really enjoyed being your new hosts these past few weeks, and we want to hear from you. If you like the show and have a minute, please leave us a review in the podcast or app of your choice. It really helps us reach more people. And for any questions and comments, you can always reach us at uncannyvalley@WIRED.com. Thank you for listening—on to the show.
I'm so excited that I am in New York doing this in-person with Leah Feiger. Zoë, you're still on the screen.
Zoë Schiffer: I know I am. Brian, should we brag about why you're in New York?
Brian Barrett: Yes, we absolutely should. Leah was honored last night at the Front Page Awards as Journalist of the Year.
Zoë Schiffer: Correct. She got a literal physical award.
Brian Barrett: She did. Not only that, Zoë, you don't know this, but I do because I was there.
Zoë Schiffer: Wow.
Brian Barrett: She got introduced by our editor in chief, Katie Drummond. She made a video about her achievements last year, and she gave a lovely speech.
Zoë Schiffer: I love this.
Leah Feiger: It was really nice. To be clear, to me, this is an award for all of WIRED.com. And in my video, I did not mention myself once because this is about WIRED and all about WIRED.
Brian Barrett: It's about Leah, and it should be.
Zoë Schiffer: Welcome to WIRED's Uncanny Valley. I'm Zoë Schiffer, director of business and industry.
Brian Barrett: I'm Brian Barrett, executive editor.
Leah Feiger: And I'm Leah Feiger, senior politics editor.
Zoë Schiffer: This week, we're diving into the ongoing conflict in the Middle East, particularly as the AI industry has been entrenching itself with the Department of Defense. We'll also discuss what's going on with prediction markets and what we make of the potential Paramount and Warner Bros' historic merger.
Leah Feiger: Let's jump right into what's going on with Iran. It has been nonstop since the US and Israel began a coordinated military strike on Iran on Saturday. Iran has responded with their own attacks on US bases and countries across the Gulf. Things have escalated really, really quickly.
Archival audio: Iran Supreme Leader, Ayatollah Ali Khamenei, has been killed in today's joint attack by the US and Israel.
Archival audio: Iranian officials say airstrikes hit an elementary school Saturday, killing more than 160 people, mostly children.
Archival audio: US embassies across the region are now telling Americans to shelter and place—
Leah Feiger: I know we were all working this weekend on this, but I was sort of stunned how quickly disinformation became the center of this conflict. WIRED reviewed hundreds of posts on X, some of which racked up millions and millions of views, that promote misleading claims about the locations and scale of the attacks. Our colleague, David Gilbert, reported on some of these very specific examples, and the range was wild. They included AI-generated images to video game scenes being passed off as real footage, to countries getting mistaken for each other. To me, it's a combination of—obviously, there's a lot of disinformation out there, but it's also because this is just chaos.
Brian Barrett: Yeah. I think to me, the disinformation itself is maybe less surprising than the lack of urgency around fixing it or doing something about it—which, I guess I shouldn't be surprised at that either. But I feel like every time anything happens, you get sort of—it's almost the same lineup of the video game footage and—
Leah Feiger: No, the blog writes itself.
Brian Barrett: Yeah, it really does, as does the part of—and also X got rid of most of its public safety team. They've got community notes in there that they append to some of these, but by the time a community note gets on there, it's already been viewed 4 million times. And also, it's below the post anyway, so you've already seen it, and doesn't really seem to stop them from getting distributed.
Zoë Schiffer: This is the culmination of years of product and policy decisions. It's what happened when you make the platform hostile to journalists. You get rid of most of your fact-checking team and content moderators. You rely on community notes, which have proven time and time again that they're really effective for certain things, but during breaking news, they're woefully inadequate. And you pay people for traffic, which incentivizes people to have quick hot takes, whether or not those takes are actually grounded in reality. So yeah, I mean, we should continue to report the story. We should not continue to be surprised by this story.
Leah Feiger: And that's absolutely right. I mean, I'm looking at, there's been some amazing WIRED coverage of how Iranian journalists and activists and just ordinary citizens are trying to get information on the ground and actually get it out then from the country. So it makes me think, who is this disinfo for, right? Is it just to really create more chaos, muddy the waters? Whatever it is, X is an absolute cesspool. It already is so difficult to trust numbers and facts and figures about what's coming out. Who's actually able to get internet access? It's constant.
Brian Barrett: I think one thing it's for, I think a lot of this comes from accounts with blue checks and blue checks can monetize content. So a lot of it is really—I hate this phrase because it's targeted at journalists a lot, but I think in this case, true for the clicks. But your point is right: Iran has 4 percent internet connectivity right now. So there is all this narrative happening around the country that is being: The journalists left X, but the politicians are still there. I think there were a couple of high profile instances of legit politicians free posting, commenting on things that were fake as though they were real. And that shapes public opinion. And public opinion really matters in a time like this. We're in a war that is not authorized by Congress, that threatens to spill out into a much bigger conflagration.
Leah Feiger: I mean, it doesn't really seem like this is ending anytime soon if our defense secretary, Pete Hegseth's recent comments about how this doesn't seem to be ending anytime soon or to be believed. I mean, there's over a thousand people, I believe at this point CNN has been reporting that have been killed during the fighting in the Middle East from strikes. A number of those are US service members, and WIRED's own core interests as well: We're talking about trade. We're talking about oil. We're talking about data centers, like how all of this is going to be swept up. I don't know. I don't really see a world where we're not going to be talking about Iran anytime soon.
Brian Barrett: It's interesting. Molly Taft, who's our great climate writer, they've written about how oil and gas prices spiked, which I think you would assume the Strait of Hormuz is not officially closed, but it's basically closed because the Iranian military has said, "Don't go in there." But it has downstream effects too. Fertilizer prices are going through the roof. They have a story about this on WIRED on Wednesday because the Middle East supplies a huge amount of the world's fertilizer. Now, you may have noticed that it's also just about springtime, which is kind of when US farmers need fertilizer the most.
Leah Feiger: Great.
Brian Barrett: Yeah.
Leah Feiger: This is good. It's just great all around. I mean, there's so many knock-on effects, I have to also shout out CNN's world team has been doing the most unbelievable updates minute to minute, hour by hour. It's just this little corner on the side of my screen, and they're getting really into the nitty-gritty of what this means. It's not just these different strikes. The idea that there's, for example, right now, an 18-hour traffic jam in Lebanon as people are trying to get out. These are very, very specific things. But yeah, Brian, I think about the fertilizer. I think about all of these knock-on effects and just the full, full spiral that the entire world is being pulled into right now. Already this morning on a bunch of different travel accounts, people were starting to talk like, "Can we be in Europe this summer?" And I'm like, "Whoa, this has hit the influencer spaces." People are having a conversation about World War III. They're sharing that Sex in the City, Sarah Jessica Parker, "What do you mean World War III? And that made me think, what about World War Me?" The fact that this has made it into the public lexicon so quickly, I just don't see it ending anytime soon.
Zoë Schiffer: I mean, I think within this conversation, we have to talk about the AI angle because the conflict is happening on the heels of the Department of Defense making, and then potentially breaking deals with top AI companies. So this past Friday, OpenAI struck a deal with the Department of Defense right as Anthropic was going head-to-head with that same department over concerns about how its technology would be used. It wanted a couple of conditions, including a ban on surveillance of American citizens and a ban on using its technology to build fully autonomous weapons. The DOD was not a fan of putting those conditions in the contract. And then on Saturday evening, the day the Iran strikes began, Sam Altman started an Ask Me Anything thread on X saying basically that the deal that he'd cut with the Pentagon was rushed. The optics didn't look great, to say the least. But ultimately he defended the company's decision by explaining that their goal was to deescalate things between the AI industry and obviously, Anthropic and the DOD.
Leah Feiger: I mean, our conversations from last week's episode feel so prescient now. We were like, "This is it, guys. This could be real bad." Days before the strikes. It's so much worse now. It is so much worse. I mean, can we talk about Sam Altman's AMA?
Zoë Schiffer: Yeah. I mean, so it's so interesting because I feel like during our last conversation, we were talking about the optics and the branding. And I feel like yet again, Anthropic has really come out on top. I was thinking of these events in terms of recruiting, which sounds so dumb, but it's like there is such an intense talent war taking place among the major AI labs. And Anthropic, I feel like continues to position itself as the good, the level-headed AI firm. And OpenAI continues to kind of blunder in these moments. And whether or not you believe Sam Altman, it comes out looking a little sloppier and a little less like it has a firm set of values it's following. And I think that's actually really going to matter in terms of who is able to get top research talent to join their labs.
Brian Barrett: The public perception thing is really interesting to me because I feel like—and I'm curious what you guys are seeing too, it feels a little bit off. I think there's this sense that Anthropic is the—if not the woke AI—it's coded blue now, right? But first of all, Anthropic products were used extensively in the initial strikes on around and continue to be, right? It's a six-month phase out.
Leah Feiger: This feels a little bit like we're giving them almost too much credit and maybe they've won this culture perception war here, but in the long term, they were part of this. They said, "Please don't use our robots to kill people without a human clicking the button." That's the conversation here.
Brian Barrett: But I think that conversation too—isn't it more around because they're not ready yet, then it can never do this?
Zoë Schiffer: Yeah. They're basically saying, "Look, it's not fully reliable." But it's interesting because the reason that Anthropic was able to cut these deals in the first place was because their models were being run on Amazon secure servers. Amazon was already Fed ramped. And so, they basically had access to contracts that would allow their models to be run on classified systems, which OpenAI did not have. I also was having conversations with people inside OpenAI who were kind of pushing back at this idea that this moment is a real sea change for the company because as you might remember, OpenAI famously had kind of a blanket ban on military use, and then they started to redefine that policy over time. But people at the company were like, "Look, we were a very small AI lab. We had a blanket ban." But then we started to have these conversations internally of like, is it a blanket ban on all uses or could you allow the Pentagon to use ChatGPT to summarize their emails, but not build autonomous weapons? And they started to need to really define that. But because they'd come out so strong by being like, "We're building AI for the benefit of all of humanity." As soon as they started to tweak their policies, people were really calling them out for being hypocrites, which I think they're getting more and more sensitive to over time.
Leah Feiger: We had talked earlier about how tech workers at Google and OpenAI were circulating tons of letters calling for clearer limits on how their employers were working with the military and DOD after the US's strikes on Iran. When it comes to recruitment, when it comes to actually keeping people in house, do things like this matter?
Zoë Schiffer: Yes, 100 percent.
Leah Feiger: OK. OK.
Zoë Schiffer: I think it's a really good question, but I think that if you talk to researchers—and not everyone feels this way—but there are a ton of researchers, and I think it makes sense because a lot of these people come from academia, they tend to be a little more idealistic. They do not want anything to do with military use. And they really want their company to say firmly, "We're not going to be involved in autonomous weapons." You can no longer join a frontier AI lab and say, "I'm not going to work with the military at all." Because they're all trying to get these government contracts. But I think that there is a pretty large contingent of people who are very disgusted with the idea that their cool, nerdy, cutting edge AI research could be used in—basically could be used to kill people. They just don't want that. And I think that that actually matters.
Leah Feiger: Do you think they'll quit over it though?
Zoë Schiffer: I mean, I think we're already seeing some people quit OpenAI and join Anthropic.
Brian Barrett: Yeah. Yeah. I think that's absolutely true. And I think you have to remember too, it's a weird market in the sense that anywhere you go, if you're there for a decent—let's say you're vested, you're going to have enough money that you are never going to have to work again. So you can bake in more—so you're not really trapped. There's so many places that are going to give you so much money that I feel like that's less of a consideration as well.
Zoë Schiffer: Yeah. I mean, to be clear, I think we should underline that point. We're not talking about people quitting and giving everything up. They've made generational wealth. They can now quit their current job that's paying them millions and millions of dollars and then go to another job that will also pay them millions and millions of dollars. So the world is their oyster.
Brian Barrett: I want to go back to one point really quickly too, Zoë, that you made a little bit ago, which in terms of the messaging, how OpenAI's messaging early got them backed them into this corner where now they're kind of unwinding it. I think messaging has a lot to do with this too in terms of the US military's interest in this stuff. If you go around saying from the very start, "I'm building something that is more powerful than nukes and is going to irrevocably change society." The US government and the US military and other governments and other militaries are going to be interested. I think if you think of—if someone were to say, "Hey, by the way, I'm building a nuclear bomb in my garage," they are also going to get a call from Pete Hegseth, right? So I think they have built it and right or wrong, however strong you think AI is going to be at a certain point, if that's the message you're projecting, it should not be that surprising that it's come to the head like this.
Zoë Schiffer: Yeah. I mean, and it's also worth saying really clearly that we're talking about fully autonomous systems. We already have fully autonomous weapons. They're being deployed in Ukraine. They're little drones and they're not run with LLMs. The models are simpler, but it makes sense that—especially when we're talking about a system that is more advanced and less reliable, there's a little hesitation to just say, "Carte blanche, do whatever you want." But that's in fact what the Pentagon is saying is required for a government contractor at this point.
Leah Feiger: I have to say that watching the Pentagon respond to all of this has been wild, really, really wild. They very much view this technology as theirs. The idea of this ownership, the idea that, no, this was made in America, you made it for you, but it's for us. This is very much for us. And watching that reaction that played out in press conferences on X, I mean, it was really wild to see Emil Michael, for example, just go on rants. Zoë, are you going to say it? Are you going to say—
Zoë Schiffer: Emil Michael, you should turn off your views on LinkedIn because people can see when they look at your phone.
Brian Barrett: Yeah, that's a tip for Emil Michael if you're listening. And then, if you listen to Uncanny Valley, which you should.
Leah Feiger: And this is really the Trump official that's kind of leading the war against Anthropic here. He has these deep ties to the tech world and he's the Pentagon's public enemy number one of anyone who's trying to cross the Pentagon. So the fact that, one, he has not maybe the best op sec in the world? But two, is comfortable making this such a public battle, is very much something to take note because it's not just a message to Anthropic in my mind. It's a message to anyone else who would dare question these policies.
Brian Barrett: I want to go to something top of mind. Have you all checked your Kalshi or Polymarket portfolios lately? How are we doing?
Leah Feiger: Well, I haven't invested yet in our Survivor winners, but soon, soon.
Brian Barrett: No, in terms of things that are surprising but shouldn't be anymore, prediction markets obviously have sort of taken over so much of our lives in so many ways by—I mean the general “our,” not the three of us.
Leah Feiger: The three of us actually text every single morning and go, "What'd you make on Kalshi last night?”
Brian Barrett: Yeah. But surprising, not surprising, there's so much betting going on around the Iran war to continue that thread. Right now, one of the top bets on Polymarket is “Will the Iran regime fall by June 30th?” Total bets around 7 million dollars in that market alone.
Leah Feiger: That's so upsetting, Brian.
Brian Barrett: Yeah.
Leah Feiger: These are people's lives. I don't know. I understand that so much of this has become a gamified version of itself. I understand that the stock market and the way that we do so much of all of this, but this feels extra gamified to me.
Brian Barrett: Well, and in terms of people's lives, I mean, there was a big controversy just earlier this week about how Kalshi settled a bet or resolved a market. There was a 54 million dollar market on the fate of Iran's supreme leader. I believe that they phrased it—Leah, correct me if I'm wrong—they phrased it as, like, he'll be out of power.
Leah Feiger: That was exactly what happened.
Brian Barrett: And then he was blown up by a missile.
Leah Feiger: So technically out of power, but that wasn't the bet.
Brian Barrett: Because you can't bet, and they're like, "We don't allow you to bet on deaths here." So he's out of power, but not. So they are invariably betting on whether people will die, just finding cute ways around it and then having a hard time resolving these markets, so that's a problem. The fact that there was a 54 million dollar bounty out on this guy collectively from betters, including one user called MAGAMyMan, who had won $553,000 on the timing of all this. It is outrageous. And I think whether or not any of this is insider trading, it's grotesque.
Leah Feiger: It's grotesque. That's the word. That's the word that I was looking for, for sure. It's the gamified grotesque, like a very, very narrow look at the value of people's lives.
Zoë Schiffer: On slightly lighter Polymarket and Kalshi news, we are seeing a lot of markets that have to do with what looks like internal company data. So like the launch of GPT5 or Sam Altman being ousted as CEO years ago. And last week, Kate Knibbs reported a story that OpenAI had actually fired an employee for insider trading on prediction market platforms like Polymarket. OpenAI's CEO of applications, Fidji Simo, disclosed this in a note to staff. She said that the employee in question, quote, used confidential OpenAI information in connection with external prediction markets, but we're seeing this at a bunch of other firms. Kate Knibbs reached out to a lot of big tech companies. Very few would actually comment on the record, but data suggests that this is far from the first time this has happened. There's a pretty famous example of the so-called Google Whale, which was a pseudonymous account on Polymarket that made over a million dollars trading on Google related events.
Brian Barrett: And like multiple, right?
Zoë Schiffer: Yes.
Brian Barrett: It's not just they took a random shot. And there's just no real interest. I mean, Kalshi has recently taken action against two people who had—they found two instances of insider trading, one person who was running for office, one person who was tied to a YouTube account—and they suspended their accounts for a couple of years or so. But those are tiny enforcement actions and I think sort of meant to show, "Hey, we're doing something," but really they're not.
Zoë Schiffer: I think we're also like, we're looking to the platforms and the companies to crack down themselves at this point because the Trump administration has taken a much friendlier stance toward prediction markets than the Biden administration did. And why might that be?
Leah Feiger: I mean, look, the Trump family has these ties to the prediction market world that are—it really reminds me of their investments in crypto world too. In so many ways, it's these things that don't necessarily have the government regulations that they could or should have, but Truth Social, the social media platform that is majority owned by Trump and his family. They're planning their own prediction market offering. It's going to be called Truth Predict. But Trump Jr.—Donald Trump Jr. is an advisor to both Kalshi and Polymarket already. His venture capital firm has invested in Polymarket. They are very much in this. The exact same conversations that we were all having last year or even just a couple of months ago about Trump family being in, in crypto world, they are—I wouldn't go out on a limb and say they're personally gamifying everything and that's in Iran, but they're benefiting from it, they're profiting off of it, or at the very least that they're seeing that this isn't a ripe market for expansion and going, "I want in on that." Forget the fact that their family—their father—is the one that's helping make these world decisions.
Brian Barrett: And this is where that insider trading thing gets potentially even scarier, is no indication that these people are insider trading off of big global events yet, but there are people who have a vested interest in doing that, who would know. And even just the perception that policy could be made based off of looking to score a quick buck, that's damaging in and of itself. And I think that's sort of where we're headed.
Zoë Schiffer: Before we go to break, I want to give listeners a little insight into a very different business story. This one is about a transaction that we're keeping a very close eye on, which seems to be very near the finish line. So late last week, Warner Bros. Discovery agreed to be acquired by Paramount Skydance in a $110 billion deal, with Paramount agreeing to pay a $7 billion termination fee if federal regulators don't approve the merger. So Larry and David Ellison are on their way to becoming even bigger media moguls than they already are.
Leah Feiger: I mean, let's spell this out. They already are running CBS. This would give them control of CNN.
Brian Barrett: Well, and let's not forget that Larry Ellison has a huge stake in TikTok. No, it is sort of this ongoing consolidation of media in the hands of Trump allies, which will surely be fine.
Leah Feiger: I'm talking to friends who are reporters at CBS and CNN who are freaking out, you guys. They're looking at this, and forget even the questions of whether Bari Weiss is soon to be their boss, or whether the Ellisons are going to be canning anyone who's ever spoken out against Trump on CNN. Jake Tapper, are you out of here, buddy? People in these newsrooms are also talking a lot about the overlap that already exists between CBS and CNN. These are very similar products. There's an incredible financial team at CBS, and an amazing biz team at CNN that's constantly breaking news and has done amazing reporting over the years. But if you're looking at them side by side, do you need all of this in your portfolio at the exact same time? You've just doubled the number of journalists you have, and the journalists there are flipping out. Everyone is really, really scared about what's to come.
Zoë Schiffer: Right, because we're not even talking about the seemingly ideological purge that Bari Weiss has already kind of overseen at CBS. I'm thinking of that Clare Malone article in The New Yorker, where I think the quote was the "de-Baathification of CBS," which will never leave my head. I think they really are trying to remake these news companies, supposedly, in their minds, to make them less lefty, I guess. But the result is that lots of people are losing their jobs, and the potential for that is only going to spike.
Leah Feiger: It's very scary. And Paramount wasn't even the preferred partner here for the deal, guys. This was supposed to be Netflix. People thought that was a done deal at a certain point. WBD, the parent company of CNN, was questioning the Saudi financing backing part of the deal, and whether the Ellisons could really guarantee they could even put up the billions of dollars for this. There were a lot of questions before we got to this point.
Brian Barrett: I think if you're Netflix and you're looking at it, what seems to have happened is, one, Paramount overpaid by miles and miles—
Leah Feiger: My God.
Brian Barrett: And if you're Netflix, you're relatively careful; they know how to spend their money. But then also, Susan Rice sits on the Netflix board, and Trump had a meeting with the Netflix CEO prior to this and reportedly put pressure on him to remove her. I'm not saying that was the sticking point. I think at a certain point, if you're Netflix, you're saying, "Well, if I do own this, am I going to get past a Trump regulatory approval process? And even if I do, what kind of pressures am I going to see on the other end of it? Is the juice just not worth the squeeze?" Either way, as someone who enjoys seeing movies in a theater, I'm broken up about this no matter what. I think Netflix would have been a tough situation for other reasons. But yeah, I think there was no real path, given how much money the Ellisons have and how much Trump did not want the Netflix deal to happen.
Leah Feiger: The pressure was real. Honestly, we've talked about so many diverse topics this week, and I keep coming back to all of this as just the world of Trump. This is Trump's pressure. This is Trump's family getting in on the markets. It's endless. I can't imagine what the US business landscape, what the US media landscape, looks like without Trump's fingerprints on everything right now.
Brian Barrett: In terms of everything, can I list off some of the IP that the Ellisons will now own?
Leah Feiger: Don't do it. God.
Brian Barrett: This isn't everything, but: CBS and CNN, which we talked about. HBO, DC Comics, Harry Potter, Star Trek, Looney Tunes, and two dozen cable networks that your mom watches.
Leah Feiger: I mean, they're going to de-woke Star Trek. Shout out to my mom. She'll be really, really upset if anything happens to that IP.
Zoë Schiffer: They can have Harry Potter. I felt like that franchise was very disappointing.
Leah Feiger: It's kind of mind-boggling though. It's really giving Veruca Salt, right?
Brian Barrett: Yeah. Willy Wonka, also a Warner Bros. Discovery property.
Leah Feiger: Fantastic. I'm going to have to pay a million dollars now that we've made that reference.
Brian Barrett: Well, Leah, save some of that energy for after the break when we are inaugurating a new segment called Futurecast. We're going to share some of our predictions related to tech and beyond. Stay with us.
Leah Feiger: So guys, this week we're switching it up and doing our first Futurecast segment. This is our time to bring predictions to the table. What do we think is coming down the pipeline next week, next year, next month? Who wants to go first?
Zoë Schiffer: OK. I'm going to think through this in real time, because I had one and then I changed my mind. But I think open models are potentially an existential threat for OpenAI and Anthropic. What we're seeing right now is this question of whether the scaling laws are going to hold: if you throw more compute at these frontier models, are you able to train smarter and smarter models that leap forward in intelligence with every new release? Despite what happened with OpenAI's GPT-5, which was kind of hailed as the coming of AGI and then was pretty disappointing, the models are continuing to advance really rapidly. I think Anthropic's coding models are a really good example of that. But we also know that if you throw enough tokens at a model, you can essentially replicate its behavior and build a frontier model that looks a lot like Claude. We're seeing places around the world do this. I'm not going to name any names, don't come at me. And so their open models are getting really, really advanced really quickly. And if you can essentially access Claude without paying Anthropic, and instead you're paying, I don't know, DeepSeek a nominal sum.
Brian Barrett: Not that we're naming names, definitely not naming names.
Zoë Schiffer: Sorry, that just slipped out. I think people are going to do that. As long as Anthropic is able to keep advancing, that might not be an existential threat yet, but I feel like in the next couple of years, that might change.
Brian Barrett: So can I ask a quick follow-up? Because this was Meta's strategy for a long time, right? They were saying, "We're going to go all in on Llama, we're going to have an open-weight model," and that was going to undercut the businesses of these other folks. I don't mean they were trying to rip off a model; they were building it themselves. But they have since abandoned that. So what's different?
Zoë Schiffer: Yeah, I think that's a really good point. One difference, like you alluded to, is whether you're trying to build the open model yourself versus, say, distilling someone else's frontier model. That's a really big difference in the compute cost you're throwing at it, and in how quickly you can advance and surpass the model you originally trained on. So if you're willing to take a different tack than Meta did, you can move much more rapidly.
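The distillation idea Zoë describes, training a "student" purely on a "teacher's" outputs, never on its weights or data, can be made concrete with a toy sketch. Everything here is hypothetical for illustration: the teacher is a tiny three-weight logistic model standing in for a frontier model, not any lab's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def teacher(x):
    # Black-box "teacher": a fixed logistic model whose weights we pretend not to know.
    w_true = np.array([2.0, -3.0, 0.5])
    return 1.0 / (1.0 + np.exp(-x @ w_true))

# Step 1: query the teacher many times and record its soft outputs
# (the transcript's "throwing enough tokens at a model").
X = rng.normal(size=(5000, 3))
soft_labels = teacher(X)

# Step 2: fit a student by gradient descent on the teacher's soft labels only.
w = np.zeros(3)
lr = 0.5
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    grad = X.T @ (p - soft_labels) / len(X)  # cross-entropy gradient vs. soft targets
    w -= lr * grad

# The student never saw the teacher's parameters, yet closely mimics its behavior
# on inputs it was not trained on.
X_test = rng.normal(size=(1000, 3))
gap = float(np.max(np.abs(teacher(X_test) - 1.0 / (1.0 + np.exp(-X_test @ w)))))
print(f"max disagreement on held-out inputs: {gap:.4f}")
```

The point of the sketch is the asymmetry Zoë and Brian are debating: building the teacher from scratch requires the full training cost, while the student only pays for queries plus a much cheaper imitation fit.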
Leah Feiger: Interesting. That's a good one. I'm going to think about that.
Brian Barrett: And I have a Futurecast.
Leah Feiger: Take it away.
Brian Barrett: Do we use Futurecast as a noun, a verb, or both?
Leah Feiger: I think both. Yeah.
Brian Barrett: OK. Well, I'm going to futurecast something now. There's probably already one out there somewhere, but I think there will be a well-known, highly visible, red-pilled prediction market that will let you bet on violent crimes, and let you bet on yourself committing violent crimes. And no one will regulate it, because it will be on some random island somewhere that's mostly inhabited by gazelles or penguins. And so people are just going to make a ton of money off themselves doing horrible things.
Zoë Schiffer: My God.
Leah Feiger: At the very least, you should get the IP for that.
Brian Barrett: I know.
Leah Feiger: This is a movie.
Brian Barrett: This is also my spec script. This is also my side project.
Leah Feiger: Yikes. Yikes, Brian.
Brian Barrett: Futurecast.
Leah Feiger: Futurecast. OK. Mine is actually kind of a little sad too, I'll be totally honest. I've been thinking a lot this week about Iran. I'm thinking a lot about the timeline here and how chaotic and unclear the ending of all of this is. And obviously, the US has gone into all of this with its partner, Israel, and Benjamin Netanyahu. Bibi wins consistently when he puts the country at war. And throughout history, that pattern holds: if you're a country at war, you're going to win your election. People keep the incumbent. It feels trustworthy, like just staying the course. So I'm looking at this, and I'm looking at the midterms. The stats are bad for Republicans right now.
Brian Barrett: Yeah.
Leah Feiger: They're theoretically going to get crushed in the midterms. Again, I say theoretically; we're many months out and a lot can change. But we're a country at war, regardless of how it started and who started it. Right now only a couple of US service members have died, and each death is a tragedy. I'm waiting for that number to go up. I'm waiting for the US to say, "Now we have to. We have to do it to honor the military. We have to do it to protect US interests abroad," in a very, very scary way. So my futurecast is that this is going to be used to keep Trump in power, to keep Republicans in power for a little bit longer.
Brian Barrett: I think that's probably right. Despite how unpopular this war is in the US right now.
Leah Feiger: Yep.
Brian Barrett: I also think it creates a pretext for declaring a national emergency of one kind or another—
Leah Feiger: Absolutely.
Brian Barrett: That would potentially unlock (illegal, but whatever) efforts to make it harder to vote in areas that would not vote for Trump.
Leah Feiger: But a government at wartime, this is an entirely different situation right now. We have to look at our midterm polls and these close races way more closely. We're going to see folks pitching their experience, their knowledge, their age in some ways: "No, I know how this goes. I am right in there with the president. I know what we need." Brian, you mentioned US farmers needing fertilizer. This all rolls up to the federal government. And I don't know if I see a world where the GOP is not going to take advantage of that.
Brian Barrett: Anyone have anything nice to say, before we sign off?
Leah Feiger: It's so nice having you in New York, Brian.
Brian Barrett: It's great to be in New York and Leah won a big award. OK. That's our show for today. We'll link to all the stories we spoke about today in the show notes. If you have any comments, you can find the episode transcripts at WIRED.com to discuss. Uncanny Valley is produced by Kaleidoscope Content. Adriana Tapia produced this episode. It was mixed by Amar Lal at Macro Sound. It was fact checked by Matt Giles. Pran Bandi is our New York studio engineer. Kimberly Chua is our digital production senior manager. Kate Osborn is our executive producer, and Katie Drummond is WIRED's global editorial director.
Article title: Uncanny Valley: The Iran War in the Age of AI, Prediction Market Ethics, and How Paramount Beat Netflix
Article link: https://www.qimuai.cn/?post=3494