The latest AI news we announced in December

Source: https://blog.google/technology/ai/google-ai-updates-december-2025/
Summary:
In December, Google announced a series of AI advances aimed at bringing frontier intelligence into everyday life. Its newest model, Gemini 3 Flash, is built around fast responses and stronger reasoning; it is rolling out as the default model in the Gemini app and in AI Mode in Search, and is available to developers and enterprise customers worldwide. The Gemini app also gained an AI video verification feature: users can upload a video and quickly check whether it was generated or edited with AI.
To make browsing more efficient, Google Labs introduced Disco, a new browsing experience whose GenTabs feature automatically synthesizes a user's open tabs and chat history into custom interactive interfaces. On the voice side, the Gemini 2.5 Flash Native Audio model was upgraded across platforms for more natural conversation, and the Google Translate app launched a live speech translation beta covering more than 70 languages.
For developers, Google exposed a more powerful Gemini Deep Research agent through the Interactions API and open-sourced the DeepSearchQA evaluation benchmark. On the consumer side, shoppers in the U.S. can use an upgraded virtual try-on tool: a single selfie is enough to generate a personal digital likeness for trying on a vast range of products. In addition, the Gemini 3 Pro and Nano Banana Pro models are now available to premium subscribers in AI Mode in Google Search in nearly 120 countries and territories.
On the content side, YouTube published its 2025 trends report and introduced its first personal Recap, and Google Photos' year-end Recap added privacy controls and creative templates with direct sharing to social platforms. Google's annual Year in Search report shows that, as AI has become widespread, 2025 search behavior shifted toward natural, conversational queries, a sign that technology is gradually adapting to the way people think.
Original article:
The latest AI news we announced in December
For more than 20 years, we’ve invested in machine learning and AI research, tools and infrastructure to build products that make everyday life better for more people. Teams across Google are working on ways to unlock AI’s benefits in fields as wide-ranging as healthcare, crisis response and education. To keep you posted on our progress, we're doing a regular roundup of Google's most recent AI news.
Here’s a look back at some of our AI announcements from December.
December is usually a time for reflection, and looking ahead. That’s why this month we’ve been focused on taking frontier intelligence out of the lab and putting it into your hands in ways that actually matter for your day-to-day. Whether it’s the lightning speed of Gemini 3 Flash helping you tackle tasks in seconds, the new video verification tools in the Gemini app or the simple relief of having GenTabs tame your open tabs, these updates share a single goal: making technology adapt to you, not the other way around. And as we push these boundaries, we’re staying grounded in responsibility — launching new tools to help you verify AI content so you can explore this new frontier with confidence.
We released Gemini 3 Flash, featuring frontier intelligence built for speed. Gemini 3 Flash brings frontier intelligence to virtually every corner of the Google ecosystem, combining the speed of our most advanced models with improved reasoning capabilities to help with everyday tasks, all while keeping costs significantly lower. It's rolling out as the default model in the Gemini app and AI Mode in Search so people everywhere can now experience the incredible reasoning of our frontier model, right in our consumer products. And we’ve scaled this rollout to a global community, including developers building in the API and Antigravity, our new agentic development platform, and enterprise customers on Vertex AI.
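As a rough illustration of the developer availability described above, here is a minimal sketch of calling the model through the Gemini API with the google-genai Python SDK and a Google AI Studio API key. The model id "gemini-3-flash" is an assumption; the post does not give the exact identifier.

```python
# Minimal sketch: calling a Gemini model through the Gemini API (google-genai SDK).
# The model id is assumed; check Google AI Studio for the actual Gemini 3 Flash identifier.
from google import genai

client = genai.Client(api_key="YOUR_AI_STUDIO_API_KEY")

response = client.models.generate_content(
    model="gemini-3-flash",  # assumed identifier for Gemini 3 Flash
    contents="Draft a three-bullet summary of today's project status update.",
)
print(response.text)
```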
We added new AI verification tools for videos in the Gemini app. We’re bringing video verification capabilities directly to the Gemini app. People can now upload videos — up to 100 MB or 90 seconds — and simply ask if the content was generated or edited using Google AI. Gemini uses imperceptible SynthID watermarks to analyze both audio and visual tracks, pinpointing exactly which segments contain AI-generated elements.
We announced a new experiment to improve browsing and manage complex online tasks. We’ve all felt the friction of juggling dozens of tabs to research a topic or plan a trip. Enter Disco, a new browsing experience from Google Labs designed to tame that complexity. Disco features GenTabs, an experiment that proactively synthesizes your open tabs and chat history to build custom, interactive web applications — transforming a scattered browser session into a streamlined tool for getting things done.
We upgraded Gemini audio models for powerful voice interactions. The updated Gemini 2.5 Flash Native Audio is built to handle complex workflows and natural dialogue — meaning smoother conversations, higher accuracy and better responsiveness to instructions. It’s available now in AI Studio, Vertex AI, Gemini Live and, for the first time, Search Live. Plus, a new live speech translation beta in the Google Translate app brings live translation in 70+ languages directly to your headphones, preserving original intonation and pacing to unlock truly global communication.
We released a new Gemini Deep Research agent. We brought a more powerful Gemini Deep Research to developers through the Interactions API. Developers can now embed advanced research capabilities — like navigating complex topics and synthesizing findings — directly into their own applications using a Gemini API key from Google AI Studio. We’ve also open-sourced our new DeepSearchQA benchmark, offering a transparent way to test just how comprehensive and effective research agents can be on web tasks. Plus, we shared how developers are already building mobile-first solutions to address real-world problems, from AI assistants for the visually impaired to tools fostering autonomy for people with cognitive disabilities.
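The post does not show the Interactions API surface or the DeepSearchQA data format, so the sketch below only illustrates, under assumed structures, how a benchmark for research agents is commonly scored: each item pairs a question with a set of expected facts, and an agent's answer is graded by how many of those facts it covers. The BenchmarkItem shape and the agent callable are hypothetical stand-ins, not part of DeepSearchQA.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class BenchmarkItem:
    # Hypothetical item shape; the real DeepSearchQA format is not shown in the post.
    question: str
    expected_facts: list[str]

def coverage_score(answer: str, expected_facts: list[str]) -> float:
    """Fraction of expected facts that appear (case-insensitively) in the answer."""
    answer_lower = answer.lower()
    hits = sum(1 for fact in expected_facts if fact.lower() in answer_lower)
    return hits / len(expected_facts) if expected_facts else 0.0

def evaluate(agent: Callable[[str], str], items: list[BenchmarkItem]) -> float:
    """Average coverage score of a research agent across benchmark items."""
    scores = [coverage_score(agent(item.question), item.expected_facts) for item in items]
    return sum(scores) / len(scores) if scores else 0.0
```

Substring matching is only a crude proxy; an actual benchmark like DeepSearchQA would use more careful grading, but the overall loop (run the agent on each question, score each answer, average) is the same.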
We released a new way for shoppers in the U.S. to use our virtual try-on tool. U.S. shoppers now have a more personalized way to find their next favorite outfit with our updated virtual try-on tool. Instead of needing a full-body photo, you can now upload a simple selfie and Nano Banana will generate a realistic, full-body digital version of you. Once you’ve selected your preferred studio-like image and clothing size, you can instantly see how you’d look in billions of products from our Shopping Graph.
We expanded Gemini 3 Pro and Nano Banana Pro in Search. We brought our most intelligent model, Gemini 3, to AI Mode in Google Search in nearly 120 countries and territories in English. Google AI Pro and Ultra subscribers can visualize complex topics with Gemini 3 Pro by tapping “Thinking with 3 Pro” in the model drop-down in AI Mode. We also brought our generative imagery model, Nano Banana Pro, to AI Mode in more countries in English, starting with Google AI Pro and Ultra subscribers. For those in the U.S., we also expanded access to these Pro models (no subscription required), with higher usage limits for Google AI Pro and Ultra subscribers.
We released the top YouTube trends of 2025 and first-ever personal Recap. YouTube celebrated its 20th birthday by looking back at 2025. MrBeast was the top creator for the sixth year running, while Rosé and Bruno Mars’ track "APT." became the fastest KPop video to hit one billion views. To mark the occasion, YouTube is launching its first-ever Recap so you can see a personalized summary of your year.
We added new ways for you to personalize, create and share your Google Photos Recap. Google Photos Recap has returned to help you celebrate your favorite moments from 2025, now with more features to make the experience truly yours. We’ve added new controls that let you hide specific people or photos, ensuring your trip down memory lane is exactly how you want it. Plus, you can now get creative with exclusive templates in CapCut and easily share your finished masterpiece directly to WhatsApp or your favorite social feeds.
We released Year in Search 2025. 2025 delivered history-making headlines — from the first American Pope to the global obsession with "KPop Demon Hunters" — but the quietest revolution happened right at our fingertips. Thanks to AI, this was the year we saw a massive shift toward natural, conversational questions with a surge in queries like “How do I…” and “What’s the deal with…” as AI helped technology finally catch up to the way we think.