
A new study explores how AI shapes what you can trust online.

Posted by qimuai | First-hand compilation



Source: https://news.microsoft.com/signal/articles/a-new-study-explores-how-ai-shapes-what-you-can-trust-online/

Summary:

Microsoft report: how do we keep online information trustworthy in the age of AI?

As artificial intelligence advances rapidly, the online world faces an unprecedented trust challenge. From social-media videos of babies saying improbably grown-up things to convincing deepfakes of public figures, seeing is no longer believing. This AI-generated misinformation erodes public trust in news, elections and brands, and threatens everyday online interactions.

In response, Microsoft has released the research report "Media Integrity and Authentication: Status, Directions, and Futures," which assesses the limitations of today's digital content authentication techniques and charts a path forward. The report finds that no single technology can eliminate digital deception on its own, but combining methods such as provenance, watermarking and digital fingerprinting can give the public key information, including who created a piece of content, which tools produced it and how it has been modified, to support their own judgment.

Jessica Young, co-chair of the study and director of science and technology policy in Microsoft's Office of the Chief Scientific Officer, notes that people are easily deceived by low-quality or misleading content when they lack information about its origin and history. The report therefore aims to provide a roadmap for delivering high-assurance provenance information that helps the public recognize high-quality content.

The study focuses on two hard problems. The first is "sociotechnical attacks," in which an attacker makes subtle edits so that authentic content is judged fake during validation, or forged content passes as real. The second is how to keep provenance information durable and reliable across environments with very different security levels, from high-security systems to offline devices.

The report acknowledges that a universal solution is extremely difficult to build: media formats vary, security requirements differ and users expect different levels of transparency. Today's authentication techniques must therefore be used in combination, and each has inherent limitations.

Looking ahead, as AI-edited content becomes increasingly common, secure provenance mechanisms become essential. News organizations, government bodies and businesses all have an incentive to certify the authenticity of the content they publish, and reliable provenance information can also keep the public from dismissing legitimately edited content as fake.

Microsoft calls on industry and regulators to continue investing in user research and to improve how provenance information is presented to the public, so that the technology is genuinely useful in practice and does not backfire through misuse or misunderstanding. The study also gives policymakers, technologists and content creators an important reference on what current techniques can do and where to build next.


Full article (English source):

The estimated reading time is 7 min.
A new study explores how AI shapes what you can trust online
By Samantha Kubota
You see it over your social feeds: Videos of adorable babies saying oddly grown-up things, public figures making wildly uncharacteristic statements, nature photos too far-fetched to be true. In the era of AI, seeing isn’t always believing.
Deepfakes threaten trust in news, elections, brands and everyday interactions, leading us to question what’s real. Determining what’s authentic or manipulated is the subject of Microsoft’s “Media Integrity and Authentication: Status, Directions, and Futures” report, published today. The study evaluates today’s authentication methods to better understand their limitations, explore potential ways to strengthen them and help people make informed decisions about the online content they consume.
The authors conclude that no single solution can prevent digital deception on its own. Methods such as provenance, watermarking and digital fingerprinting can offer useful information like who created the content, what tools were used and whether it has been altered.
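To make that concrete, the following minimal Python sketch (our illustration for this article, not the report's or C2PA's implementation; the record fields, helper names and sample bytes are hypothetical) shows the kind of information such methods can carry: a cryptographic fingerprint of the media bytes alongside creator, tool and edit-history fields.
```python
# Hypothetical sketch: a provenance-style record bound to a media fingerprint.
# Real systems (e.g. C2PA content credentials) cryptographically sign this data.
import hashlib
import json
from datetime import datetime, timezone

def fingerprint(media_bytes: bytes) -> str:
    """SHA-256 digest of the raw media bytes, a simple cryptographic fingerprint."""
    return hashlib.sha256(media_bytes).hexdigest()

def provenance_record(media_bytes: bytes, creator: str, tool: str, edits: list[str]) -> str:
    """Assemble an illustrative provenance record as JSON (field names are invented)."""
    record = {
        "asset_fingerprint_sha256": fingerprint(media_bytes),  # binds the record to the exact bytes
        "creator": creator,                                    # who created the content
        "generator_tool": tool,                                # what tool was used
        "edit_history": edits,                                 # whether and how it was altered
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record, indent=2)

if __name__ == "__main__":
    fake_image = b"raw image bytes stand-in"  # placeholder instead of a real photo file
    print(provenance_record(fake_image, creator="News Desk",
                            tool="Example Camera", edits=["crop", "exposure +0.3"]))
```
If even one byte of the media changes, the recomputed fingerprint no longer matches the record, which is why a verifier can tell that content was altered after the record was made.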
People can be deceived by media if they lack information like its origin and history, or if its information is low-quality or misleading. The goal of the report is to provide a roadmap to deliver more high-assurance provenance information the public can rely on, according to Jessica Young, director of science and technology policy in the Office of the Chief Scientific Officer at Microsoft.
Helping people recognize higher-quality content indicators is increasingly important as deepfakes become more disruptive and provenance legislation in various countries, including the U.S., introduces even more ways to help people authenticate content later this year.
Media provenance has been evolving for years, with Microsoft pioneering the technology in 2019 and cofounding the Coalition for Content Provenance and Authenticity (C2PA) in 2021 to standardize media authenticity.
Young, co-chair of the study, explains more about what it all means:
What prompted the study?
“The motivation was two-fold,” Young says. “The first is the recognition of the moment we’re in right now. We know generative AI capabilities are becoming increasingly powerful. It’s becoming more challenging to distinguish between authentic content — like content that was captured by a camera versus sophisticated deepfakes — and as a result, there’s a huge uptick right now in interests and requirements to use those technologies that exist to disclose and verify if content was generated or manipulated by AI.
“The moment has been building, and we have a desire to help ensure that these technologies ultimately drive more benefit than harm, based on how they’re used and understood.”
Young adds that the paper is meant to inform the greater media integrity and authentication ecosystem, including creators, technologists, policymakers and others to understand what is and isn’t possible currently and how we can build on it for the future.
What did the study accomplish, and what did you learn?
The report outlines a path to increase confidence in the authenticity of media. The authors propose a direction they refer to as “high-confidence authentication” to mitigate the weaknesses of various media integrity methods.
Linking C2PA provenance to an imperceptible watermark can bring relatively high confidence about media’s provenance, she says.
She notes the report has a lot of caveats too, such as how provenance from traditional offline devices like cameras, which often lack critical security features, can be less trustworthy because it’s easier to alter.
It isn’t possible to prevent every attack or stop certain platforms from stripping provenance signals, so the challenge, Young says, “is figuring out how to surface the most reliable indicators with strong security built in — and, when necessary, reinforce them with additional methods that allow recovery or support manual digital-forensics work.”
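As a rough sketch of the binding Young describes (a simplified illustration under our own assumptions, not the C2PA specification; the manifest contents, the 64-bit payload and the least-significant-bit scheme are hypothetical), the Python below embeds a digest of a provenance manifest into pixel least-significant bits, so a verifier can later check whether the recovered watermark still matches the manifest.
```python
# Hypothetical sketch: tie a provenance manifest to an imperceptible watermark
# by hiding a short digest of the manifest in pixel least-significant bits.
import hashlib

def manifest_digest_bits(manifest_json: str, n_bits: int = 64) -> list[int]:
    """Truncated SHA-256 digest of the manifest, expanded into individual bits."""
    digest = hashlib.sha256(manifest_json.encode("utf-8")).digest()
    bits = [(byte >> i) & 1 for byte in digest for i in range(8)]
    return bits[:n_bits]

def embed(pixels: list[int], payload: list[int]) -> list[int]:
    """Write each payload bit into the least-significant bit of one pixel value."""
    marked = pixels[:]
    for i, bit in enumerate(payload):
        marked[i] = (marked[i] & ~1) | bit
    return marked

def extract(pixels: list[int], n_bits: int) -> list[int]:
    """Read the embedded payload bits back out of the pixel values."""
    return [p & 1 for p in pixels[:n_bits]]

if __name__ == "__main__":
    manifest = '{"creator": "News Desk", "tool": "Example Camera"}'  # hypothetical manifest
    pixels = list(range(256))                                        # stand-in for image data
    marked = embed(pixels, manifest_digest_bits(manifest))
    # A verifier recomputes the manifest digest and compares it with the watermark.
    assert extract(marked, 64) == manifest_digest_bits(manifest)
    print("watermark matches the provenance manifest")
```
Production watermarks are far more robust than this toy example, but the design intuition is the same: even if visible metadata is stripped, the hidden payload still points back to the provenance record.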
How is this study different from others?
Young says their study investigated two “underexplored” lines of thought for the three methods of verification. They define the first as sociotechnical attacks, where provenance information or the media itself could be manipulated to make authentic content appear synthetic or fake content seem real during the validation process.
“Imagine you see an authentic image of a global sporting event with 80% of the crowd cheering for the home team,” she says. “The away team engages in an online argument claiming, ‘Hey, no, that’s all a fake crowd.’ Someone could make one small, insignificant edit to a person in the corner of the picture and current methods would deem it AI generated — even if the crowd size was real. These methods that are supposed to support authenticity are now reinforcing a fake narrative, instead of the real one.
“So, knowing how different validators work, even through really subtle modifications, you could manipulate the results the public would see to try to deceive them about content,” she says. The second key topic builds on the C2PA’s work to make content credentials more durable, while also addressing reliability. This is where the research is especially novel, Young says. “We looked at how provenance information can be added and maintained across different environments — from high-security systems to less secure, offline devices — and what that means for reliability.”
Why is verifying digital media so difficult?
Authenticating media is complex because there’s not a one-size-fits-all solution, Young says.
“You have different formats that have different limitations or trade-offs for the signals they can contain,” she explains. “Whether it’s images, audio, video — not to mention text, which has a whole different array of challenges — and how strong the solutions can be applied there.”
Young says there are different requirements and opinions about what level of transparency is appropriate as well. In some cases, users might not want any of their personal information included in the digital provenance of a piece of media, while in others, creators or artists might want attribution and to opt-in for having their information included.
“So, you have different requirements or even considerations about what goes into that provenance information,” she says. “And then, similar to the field of security, no solution is foolproof. So, all the methods are complementary, but each has inherent limitations.”
Where do we go from here?
Young says that as AI-made or edited content becomes more commonplace, the use of secure provenance of authentic content is becoming increasingly important. Publishers, public figures, governments and businesses have good reason to certify the authenticity of the content they share. If a news outlet shoots photos of an event, for example, tying secure provenance information to those images can help show their audience the content is reliable.
“Government bodies also have an interest in the public knowing that their formal documents or media are reliable information about public interest matters,” Young says.
She adds that as AI modifications to media become “increasingly common” for legitimate purposes, secure provenance can provide important context to help prevent an average reader or viewer from simply dismissing that content as fake or deceptive.
“For the industry and for regulators, we note how important continued user research in this area is to drive towards more consistent and helpful display of this information to the public — to make sure it’s actually meaningful and useful in practice,” Young says.
“We have a limited set of technologies that can assist us, and we don’t want them to backfire from being misunderstood or improperly used.”
Learn more on the Microsoft Research Blog.
Lead image: Mininyx Doodle/Getty Images
Samantha Kubota reports on everything AI and innovation for Microsoft Signal, with a recent focus on how AI agents are reshaping everyday work, Microsoft’s research breakthroughs and the responsible use of emerging technologies. Prior to Microsoft, Kubota was a journalist at NBC News. Follow her on LinkedIn and X.
