Why OpenAI Won't Pull Sora 2 Despite a Public Petition

Source: https://aibusiness.com/generative-ai/why-openai-won-t-pull-sora-2
Summary:
The U.S. public-interest group Public Citizen recently sent an open letter to OpenAI demanding that it immediately withdraw its newly released video generation model, Sora 2. The appeal has renewed public attention to the safety and ethical boundaries of generative AI.
In its Nov. 11 letter, the group argued that Sora 2 carries serious risks of being used to produce deepfake disinformation, violate people's likeness rights, and carry out digital harassment. J.B. Branch, big tech accountability advocate at its Congress Watch division, asked: "Should we let a product go to market that can cause real harm within 72 hours?"
Media reports indicate the model was misused within hours of its release: on Nov. 7, fabricated videos of women being subjected to violence flooded social media, and last month videos surfaced depicting copyrighted characters in inappropriate scenes, including bombs and shootings. Branch stressed that companies have a responsibility to ensure their products are fully tested and vetted for safety before release.
Although OpenAI has moved to restrict the generation of well-known public figures' likenesses and positions Sora as an entertainment app, Chirag Shah, a professor in the Information School at the University of Washington, said that given the model's viral attention and commercial potential, the vendor is highly unlikely to withdraw it voluntarily.
Industry observers note that the episode echoes warnings raised in the early days of generative AI. History suggests that regulators tend to intervene only after a major public incident, as when ChatGPT drew an investigation only after its alleged role in a teenager's suicide. Analyst Lian Jye Su advises that, absent effective regulation, users should stay vigilant, avoid oversharing information, and steer clear of misuse such as deepfakes.
For now, the controversy stands as an important cautionary marker on AI's development path, reflecting the perennial tension between accelerating innovation and safety safeguards.
English source:
The advocacy group stated that the vendor released the video generation model prematurely, without considering risks. The vendor has faced accusations about similar products.
A public advocacy group's call for OpenAI to suspend the availability of its popular Sora 2 model highlights the tension generative AI vendors face in balancing innovation and safety.
On Nov. 11, Public Citizen, the group founded by consumer advocate Ralph Nader, sent a letter to OpenAI pressing the vendor to withdraw its video generation model, Sora 2, from all public-facing platforms. Public Citizen cited concerns about deepfake disinformation, the misuse of people's names, images, and likenesses, and the use of the technology for digital harassment as reasons why OpenAI should pause the deployment of the video-generating model.
"Some folks think that the call to remove it from the market is extreme," said J.B. Branch, big tech accountability advocate at Public Citizen's Congress Watch division. "But would we allow this in any other circumstance? Would we allow someone to go to market with a product that, within 72 hours or a week, harm is happening to people immediately?"
Sora videos of women being strangled flooded social media recently, 404 Media reported on Nov. 7. Moreover, last month, The Guardian reported that within hours of its release, videos emerged of Sora depicting copyrighted characters in graphic scenes, and even videos of bombs and mass shootings.
Companies have a responsibility to ensure that their products are vetted, tested, and safe for consumption before releasing them, Branch said.
"What we see in the case with OpenAI is that they rushed something out to market, and then afterwards, a lot of very predictable harms end up coming,” he continued. “All of these things are design choices.”
OpenAI did not immediately respond to a request for comment. However, the vendor has recently made changes that prevent Sora users from creating AI images of major public figures, such as Martin Luther King Jr. and Michael Jackson.
In a recent interview, OpenAI CEO Sam Altman characterized Sora as an entertainment product and a social application that people can use to share funny memes.
Public Citizen's initiative is reminiscent of the open letters released in the early days of generative AI's rise, which sounded alarms about the dangers of such tools.
And while the public advocacy group's open letter is commendable, it is doubtful that OpenAI will pull its model from the marketplace or that lawmakers will force it to do so, said Chirag Shah, a professor in the Information School at the University of Washington.
"I don't see OpenAI pulling [Sora 2] because it's been virally successful for them," Shah said. "They're definitely getting a lot of attention because of that. It's very popular. They're going to monetize this."
Federal regulators typically act when technology has been implicated in a catastrophic event that leads to public outcry.
In the case of OpenAI's groundbreaking ChatGPT, it wasn't until a teenager killed himself after the AI chatbot allegedly encouraged him to commit suicide that the FTC began investigating how AI vendors manage their chatbots for the safety of children. OpenAI and social media giant Meta revised their policies in response to that incident.
While Sora 2 has already been used to create deepfakes, including misrepresentations of public figures and women, the harm is not yet at a level that would prompt OpenAI to pull the model or cause the government to investigate. Public Citizen's move is mostly about raising awareness, Shah said.
On the other hand, the letter serves as a preemptive move against OpenAI in the event that Sora 2 leads to a catastrophic outcome, Shah continued.
He said that with early generative AI tools and chatbots, vendors like OpenAI could argue that they were unaware that the models would act in the way they did.
However, "In the future, when something bad happens, they would have a hard time defending themselves, because they can't say that they weren't aware of these kinds of issues," Shah said.
With few expecting OpenAI or the federal government to take action, users of these tools are likely the only ones who can control their effects to any substantial extent.
"There will always be this sort of concern with any AI tools out there," said Lian Jye Su, an analyst at Omdia, a division of Informa TechTarget. "It's up to the users not to get too naive when it comes to using this kind of tool, not sharing too much of the information and definitely trying to stay away from using it for deepfakes."