
AI Weekly - Civil War in AI? Anthropic Under Fire, OpenAI Losing Users - March 11, 2026

Published by qimuai · Translated firsthand



Source: https://aiweekly.co/issues/471

Summary:

AI enters the deep waters of geopolitical contest, with industry upheaval and regulatory challenges side by side

The global AI field has been rocked by turmoil recently, with tensions over industry competition, geopolitics, and ethics and regulation erupting all at once - a sign that AI development has moved fully from technical debate into real-world contention.

Corporate moves and geopolitical jockeying

Content safety and the regulatory bind

Security risks and professional misconduct

AI is now far more than a purely technical issue; it is a complex contest entangling great-power competition, corporate survival, ethical boundaries, and security governance. Disagreements within the industry over technical direction and safety norms have gone public, national regulators are accelerating at different paces and with different models, and the social risks of AI misuse keep surfacing as disinformation, cyberattacks, and more. There is no room left for theoretical speculation in this transformation; its real-world impact is unfolding in full.


English source:

Things are pretty serious. AI is now fully a geopolitical issue - and more AI companies are raising $1B seed rounds.
Anthropic got blacklisted. OpenAI became the bad guy as users revolted. LeCun walked away from Meta with a billion dollars and a thesis that everyone else is wrong. Oracle said it out loud: we're firing people to build data centres. None of this is theoretical anymore.
Sponsor
If you’re only using AI to rewrite emails, you’re doing it wrong.
Become AI-proficient in 8 weeks. The AI for Business & Finance Certificate Program teaches practical, everyday AI for nontechnical professionals—and earns you a certificate from Columbia Business School Exec Ed. Starts March 16.
In the News
Online Age-Verification Tools for Child Safety Are Surveilling Adults
CNBC · Mar 8
Half of U.S. states now force every user through AI-powered identity gates to protect children — creating a mass surveillance infrastructure that caused site traffic to collapse in states where it was enforced.

QuitGPT: 2.5M Users Flee ChatGPT; Claude Hits #1 on App Store
TechCrunch · Mar 1
OpenAI's Pentagon deal triggered the largest consumer AI backlash ever — ChatGPT uninstalls surged 295%, Claude's daily signups quadrupled, and Anthropic's app overtook ChatGPT for the first time in the U.S. App Store.
Anthropic Claims Pentagon Feud Could Cost It Billions
TechCrunch · Mar 9
Anthropic filed two federal lawsuits challenging its "supply chain risk" designation — a label never before used against a domestic company — arguing the Pentagon is retaliating for its refusal to allow unrestricted military AI use.
UK Eyes Sweeping Powers to Regulate Tech
Computing.co.uk · Mar 9
The UK government is seeking broad executive authority to regulate online harms through the Children's Wellbeing Bill — and legal experts warn those same powers could be weaponized by future populist governments.
Yann LeCun's AMI Labs Raises $1.03B to Build 'World Models'
TechCrunch · Mar 10
Turing Award winner LeCun left Meta, called LLMs "complete nonsense" as a path to real intelligence, and raised $1B at a $3.5B valuation for a Paris-based startup building AI that understands physical reality — backed by Bezos, Nvidia, Schmidt, and Cuban.
Fake AI Content About the Iran War Is All Over X
CNN · Mar 10
AI-generated fake war footage is racking up tens of millions of views on X while monetized accounts profit from it — and X's own chatbot Grok made it worse by confirming fabricated content as real.
Oracle Plans 20,000–30,000 Layoffs to Fund AI Data Centres
Yahoo Finance / Bloomberg · Mar 5
Oracle is cutting up to 18% of its workforce to free $8–10B for AI infrastructure — the starkest example yet of a company explicitly firing humans to pay for AI, while U.S. banks pull back from financing the expansion.
Kids Would Be Banned from Using Chatbots in Minnesota AI Bills
CBS News · Mar 10
Minnesota's bipartisan AI bills — banning minors from chatbots, blocking surveillance pricing, and restricting AI in health insurance — signal that states are becoming AI's de facto regulators in the absence of federal action.
OpenAI and Google Workers File Amicus Brief for Anthropic
TechCrunch · Mar 9
Over 30 employees from rival firms — including Google's chief scientist Jeff Dean — told a federal court that if the Pentagon can blacklist a company for setting safety boundaries, no AI developer is safe.
Microsoft: Hackers Abusing AI at Every Stage of Cyberattacks
Microsoft Security Blog · Mar 7
Threat actors are now using generative AI across the entire attack lifecycle — one Russian-speaking hacker breached 600+ firewalls in five weeks using AI, a scale previously requiring a full team.
DOJ Lawyer Resigns Before Judicial Scolding for AI-Filled Brief
Bloomberg Law · Mar 10
A 30-year federal prosecutor resigned after filing a brief with AI-fabricated citations — calling it the worst decision of his career and underscoring that even experienced professionals are falling for AI hallucinations.
Anthropic Seeks to Undo 'Supply Chain Risk' Designation
NPR · Mar 9
The first-ever use of a supply chain risk designation against a domestic U.S. company sets a precedent that could reshape how every AI firm negotiates with the military — all because Anthropic drew two red lines on surveillance and autonomous weapons.
