What's Next for AI in 2026?

Published by qimuai · First-hand compilation

Source: https://www.technologyreview.com/2026/01/05/1130662/whats-next-for-ai-in-2026/

Summary:

MIT Technology Review Looks Ahead to 2026: Five AI Trends That Will Shape the Year

As artificial intelligence continues to evolve at speed, its impact on business, scientific research, and even governance keeps deepening. Drawing on its ongoing tracking of the industry, MIT Technology Review's What's Next series singles out five key trends worth watching in 2026.

Trend 1: Chinese LLMs will be woven more deeply into Silicon Valley's product ecosystem
Over the past year, Chinese open-source models, led by DeepSeek's R1, made breakthrough progress, and their strong performance and open-source strategy earned broad recognition across the global AI community. Alibaba's Qwen family has been downloaded millions of times, making it one of the most popular lines of pretrained models in the world, and Zhipu's GLM and Moonshot's Kimi have joined the open-source push. This open, customizable approach contrasts sharply with the closed strategy of the big US firms and is drawing a growing number of American startups to build their applications on Chinese open models. In 2026, expect more Silicon Valley products to quietly ship on top of Chinese LLMs, and expect the gap between Chinese releases and the Western frontier to keep shrinking.

Trend 2: The US faces an even fiercer tug-of-war over AI regulation
The standoff between the federal government and the states over who regulates AI is set to intensify in 2026. President Trump has signed an executive order aimed at blocking strict state-level AI rules in favor of a light-touch, federally led approach. States such as California, which has already passed a law requiring AI companies to publish safety testing, are likely to fight back in court. Meanwhile, AI companies will lobby hard through powerful super PACs, arguing that regulation would smother innovation and let China pull ahead. On the other side, growing public concern over issues such as teen mental health and data-center energy use will push states to keep legislating where they can. A tangled regulatory contest involving the White House, Congress, the states, and industry will play out with no clear resolution in sight.

Trend 3: AI shopping assistants will reshape the consumer experience
AI-driven shopping agents are moving from concept to large-scale commercial use. Google's Gemini and OpenAI's ChatGPT already include shopping features that can recommend products to match a budget, compare prices, and even contact merchants on a user's behalf. The consulting firm McKinsey projects that agentic commerce could be worth $3 trillion to $5 trillion a year by 2030. With the time users spend talking to AI still climbing while traffic from traditional search engines and social media falls, expect more retail giants to strike deals with AI platforms over the next year, embedding seamless shopping directly into conversational AI and changing the way people buy.

Trend 4: LLMs will help deliver important new discoveries
Large language models hallucinate, but combined with the right algorithms they are becoming tools for pushing the boundaries of human knowledge. Google DeepMind's AlphaEvolve, for example, pairs an LLM with an evolutionary algorithm and has already designed new algorithms that improve data-center energy efficiency; open-source versions and follow-up systems have appeared since. Hundreds of companies worldwide are pouring money into using AI to attack unsolved math problems, speed up computing, and discover new drugs and materials. As the methodology matures, expect AI-assisted research to accelerate in 2026 and possibly produce more consequential breakthroughs.

Trend 5: AI legal fights escalate into murkier territory
Lawsuits against AI companies are shifting from relatively clear-cut copyright disputes toward thornier questions of liability. Should an AI company be held responsible when its chatbot encourages a user to self-harm? Can a developer be sued when its chatbot spreads patently false, defamatory claims? Cases raising these questions, including a suit brought against OpenAI by the family of a teen who died by suicide, will start going to trial in 2026 and begin to set early precedents. Trump's executive order on federal AI regulation will churn the legal landscape further. Whatever the outcomes, a wave of litigation is coming from many directions at once, and courts themselves may even turn to AI tools to cope with the surge in cases.

(Compiled from MIT Technology Review's "What's Next for AI in 2026" predictions.)

Original article:

What’s next for AI in 2026
Our AI writers make their big bets for the coming year—here are five hot trends to watch.
MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.
In an industry in constant flux, sticking your neck out to predict what’s coming next may seem reckless. (AI bubble? What AI bubble?) But for the last few years we’ve done just that—and we’re doing it again.
How did we do last time? We picked five hot AI trends to look out for in 2025, including what we called generative virtual playgrounds, a.k.a world models (check: From Google DeepMind’s Genie 3 to World Labs’s Marble, tech that can generate realistic virtual environments on the fly keeps getting better and better); so-called reasoning models (check: Need we say more? Reasoning models have fast become the new paradigm for best-in-class problem solving); a boom in AI for science (check: OpenAI is now following Google DeepMind by setting up a dedicated team to focus on just that); AI companies that are cozier with national security (check: OpenAI reversed position on the use of its technology for warfare to sign a deal with the defense-tech startup Anduril to help it take down battlefield drones); and legitimate competition for Nvidia (check, kind of: China is going all in on developing advanced AI chips, but Nvidia’s dominance still looks unassailable—for now at least).
So what’s coming in 2026? Here are our big bets for the next 12 months.
More Silicon Valley products will be built on Chinese LLMs
The last year shaped up as a big one for Chinese open-source models. In January, DeepSeek released R1, its open-source reasoning model, and shocked the world with what a relatively small firm in China could do with limited resources. By the end of the year, “DeepSeek moment” had become a phrase frequently tossed around by AI entrepreneurs, observers, and builders—an aspirational benchmark of sorts.
It was the first time many people realized they could get a taste of top-tier AI performance without going through OpenAI, Anthropic, or Google.
Open-weight models like R1 allow anyone to download a model and run it on their own hardware. They are also more customizable, letting teams tweak models through techniques like distillation and pruning. This stands in stark contrast to the “closed” models released by major American firms, where core capabilities remain proprietary and access is often expensive.
As a result, Chinese models have become an easy choice. Reports by CNBC and Bloomberg suggest that startups in the US have increasingly recognized and embraced what they can offer.
One popular group of models is Qwen, created by Alibaba, the company behind China’s largest e-commerce platform, Taobao. Qwen2.5-1.5B-Instruct alone has 8.85 million downloads, making it one of the most widely used pretrained LLMs. The Qwen family spans a wide range of model sizes alongside specialized versions tuned for math, coding, vision, and instruction-following, a breadth that has helped it become an open-source powerhouse.
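To make the download-and-run point concrete, here is a minimal local-inference sketch that pulls the Qwen2.5-1.5B-Instruct checkpoint from Hugging Face and prompts it on your own machine. It assumes the transformers and accelerate Python packages (and enough memory for a 1.5-billion-parameter model); it is an illustration added for this write-up, not code from the original article.

# Minimal sketch: run an open-weight model locally.
# Assumes: pip install transformers accelerate torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-1.5B-Instruct"  # the open-weight checkpoint mentioned above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Build a chat-style prompt and generate a short completion on local hardware.
messages = [{"role": "user", "content": "In one sentence, why do open-weight models matter?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=80)

# Print only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))

Because the weights sit on disk rather than behind an API, the same starting point supports the distillation and pruning workflows mentioned above, which is much of the appeal for teams that want to customize a model rather than rent a closed one.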
Other Chinese AI firms that were previously unsure about committing to open source are following DeepSeek’s playbook. Standouts include Zhipu’s GLM and Moonshot’s Kimi. The competition has also pushed American firms to open up, at least in part. In August, OpenAI released its first open-source model. In November, the Allen Institute for AI, a Seattle-based nonprofit, released its latest open-source model, Olmo 3.
Even amid growing US-China antagonism, Chinese AI firms’ near-unanimous embrace of open source has earned them goodwill in the global AI community and a long-term trust advantage. In 2026, expect more Silicon Valley apps to quietly ship on top of Chinese open models, and look for the lag between Chinese releases and the Western frontier to keep shrinking—from months to weeks, and sometimes less.
—Caiwei Chen
The US will face another year of regulatory tug-of-war
The battle over regulating artificial intelligence is heading for a showdown. On December 11, President Donald Trump signed an executive order aiming to neuter state AI laws, a move meant to handcuff states from keeping the growing industry in check. In 2026, expect more political warfare. The White House and states will spar over who gets to govern the booming technology, while AI companies wage a fierce lobbying campaign to crush regulations, armed with the narrative that a patchwork of state laws will smother innovation and hobble the US in the AI arms race against China.
Under Trump’s executive order, states may fear being sued or starved federal funding if they clash with his vision for light-touch regulation. Big Democratic states like California—which just enacted the nation’s first frontier AI law requiring companies to publish safety testing for their AI models—will take the fight to court, arguing that only Congress can override state laws. But states that can’t afford to lose federal funding, or fear getting in Trump’s crosshairs, might fold. Still, expect to see more state lawmaking on hot-button issues, especially where Trump’s order gives states a green light to legislate. With chatbots accused of triggering teen suicides and data centers sucking up more and more energy, states will face mounting public pressure to push for guardrails.
In place of state laws, Trump promises to work with Congress to establish a federal AI law. Don’t count on it. Congress failed to pass a moratorium on state legislation twice in 2025, and we aren’t holding out hope that it will deliver its own bill this year.
AI companies like OpenAI and Meta will continue to deploy powerful super-PACs to support political candidates who back their agenda and target those who stand in their way. On the other side, super-PACs supporting AI regulation will build their own war chests to counter. Watch them duke it out at next year’s midterm elections.
The further AI advances, the more people will fight to steer its course, and 2026 will be another year of regulatory tug-of-war—with no end in sight.
—Michelle Kim
Chatbots will change the way we shop
Imagine a world in which you have a personal shopper at your disposal 24-7—an expert who can instantly recommend a gift for even the trickiest-to-buy-for friend or relative, or trawl the web to draw up a list of the best bookcases available within your tight budget. Better yet, they can analyze a kitchen appliance’s strengths and weaknesses, compare it with its seemingly identical competition, and find you the best deal. Then once you’re happy with their suggestion, they’ll take care of the purchasing and delivery details too.
But this ultra-knowledgeable shopper isn’t a clued-up human at all—it’s a chatbot. This is no distant prediction, either. Salesforce recently said it anticipates that AI will drive $263 billion in online purchases this holiday season. That’s some 21% of all orders. And experts are betting on AI-enhanced shopping becoming even bigger business within the next few years. By 2030, between $3 trillion and $5 trillion annually will be made from agentic commerce, according to research from the consulting firm McKinsey.
Unsurprisingly, AI companies are already heavily invested in making purchasing through their platforms as frictionless as possible. Google’s Gemini app can now tap into the company’s powerful Shopping Graph data set of products and sellers, and can even use its agentic technology to call stores on your behalf. Meanwhile, back in November, OpenAI announced a ChatGPT shopping feature capable of rapidly compiling buyer’s guides, and the company has struck deals with Walmart, Target, and Etsy to allow shoppers to buy products directly within chatbot interactions.
Expect plenty more of these kinds of deals to be struck within the next year as consumer time spent chatting with AI keeps on rising, and web traffic from search engines and social media continues to plummet.
—Rhiannon Williams
An LLM will make an important new discovery
I’m going to hedge here, right out of the gate. It’s no secret that large language models spit out a lot of nonsense. Unless it’s with monkeys-and-typewriters luck, LLMs won’t discover anything by themselves. But LLMs do still have the potential to extend the bounds of human knowledge.
We got a glimpse of how this could work in May, when Google DeepMind revealed AlphaEvolve, a system that used the firm’s Gemini LLM to come up with new algorithms for solving unsolved problems. The breakthrough was to combine Gemini with an evolutionary algorithm that checked its suggestions, picked the best ones, and fed them back into the LLM to make them even better.
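That loop can be sketched in a few lines of Python. This is a schematic illustration of the general LLM-plus-evolutionary-search idea, not DeepMind's actual AlphaEvolve code; llm_propose and score are hypothetical stand-ins for a Gemini-style code-generating model and a task-specific automatic evaluator.

import random

def evolutionary_llm_search(seed_program, llm_propose, score, generations=50, keep=20):
    # Start from a known-working program and its measured quality.
    population = [(seed_program, score(seed_program))]
    for _ in range(generations):
        # Sample promising parents and ask the LLM to propose mutated variants.
        parents = random.sample(population, k=min(4, len(population)))
        children = [llm_propose(parent_code) for parent_code, _ in parents]
        for child in children:
            try:
                # The evaluator checks every suggestion; candidates that crash
                # or fail verification are simply discarded.
                population.append((child, score(child)))
            except Exception:
                continue
        # Keep only the best candidates; survivors are fed back as context next round.
        population.sort(key=lambda pair: pair[1], reverse=True)
        population = population[:keep]
    return population[0]  # best program found and its score

What makes this workable at scale is that the evaluator is automatic, for instance measuring how efficiently a candidate scheduling algorithm uses compute, so the loop can grind through thousands of LLM suggestions with no human in it.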
Google DeepMind used AlphaEvolve to come up with more efficient ways to manage power consumption by data centers and Google’s TPU chips. Those discoveries are significant but not game-changing. Yet. Researchers at Google DeepMind are now pushing their approach to see how far it will go.
And others have been quick to follow their lead. A week after AlphaEvolve came out, Asankhaya Sharma, an AI engineer in Singapore, shared OpenEvolve, an open-source version of Google DeepMind’s tool. In September, the Japanese firm Sakana AI released a version of the software called ShinkaEvolve. And in November, a team of US and Chinese researchers revealed AlphaResearch, which they claim improves on one of AlphaEvolve’s already better-than-human math solutions.
There are alternative approaches too. For example, researchers at the University of Colorado Denver are trying to make LLMs more inventive by tweaking the way so-called reasoning models work. They have drawn on what cognitive scientists know about creative thinking in humans to push reasoning models toward solutions that are more outside the box than their typical safe-bet suggestions.
Hundreds of companies are spending billions of dollars looking for ways to get AI to crack unsolved math problems, speed up computers, and come up with new drugs and materials. Now that AlphaEvolve has shown what’s possible with LLMs, expect activity on this front to ramp up fast.
—Will Douglas Heaven
Legal fights heat up
For a while, lawsuits against AI companies were pretty predictable: Rights holders like authors or musicians would sue companies that trained AI models on their work, and the courts generally found in favor of the tech giants. AI’s upcoming legal battles will be far messier.
The fights center on thorny, unresolved questions: Can AI companies be held liable for what their chatbots encourage people to do, as when they help teens plan suicides? If a chatbot spreads patently false information about you, can its creator be sued for defamation? If companies lose these cases, will insurers shun AI companies as clients?
In 2026, we’ll start to see the answers to these questions, in part because some notable cases will go to trial (the family of a teen who died by suicide will bring OpenAI to court in November).
At the same time, the legal landscape will be further complicated by President Trump’s executive order from December—see Michelle’s item above for more details on the brewing regulatory storm.
No matter what, we’ll see a dizzying array of lawsuits in all directions (not to mention some judges even turning to AI amid the deluge).
—James O’Donnell
