Grok Is Generating Sexual Content Far More Graphic Than What's on X

Source: https://www.wired.com/story/grok-is-generating-sexual-content-far-more-graphic-than-whats-on-x/
Summary:
Grok, the chatbot built by Elon Musk's artificial intelligence company xAI, has been engulfed in controversy after users were found abusing it to mass-produce "undressed" images of women and sexualized content depicting what appear to be minors, prompting public outrage and calls for regulatory investigation.
Beyond the static images circulating on the social platform X, the "Imagine" video-generation feature built into Grok's standalone website and app can produce adult sexual imagery far more extreme and violent than anything posted publicly on the platform, some of which appears to involve minors. Although these videos are not shared publicly by default, they remain accessible to anyone who obtains a shared link.
Researchers at the Paris-based nonprofit AI Forensics analyzed roughly 1,200 Imagine links, of which around 800 were confirmed to contain Grok-generated videos or images. Lead researcher Paul Bouchaud said the overwhelming majority of this content was sexual, including full nudity, pornographic videos with audio, and anime-style or highly photorealistic scenes of sexual violence. More disturbingly, a little under 10 percent of it appeared to be related to child sexual abuse material (CSAM), including photorealistic videos of very young-looking people engaged in sexual activity. The organization has reported around 70 links suspected of containing sexualized content of minors to European regulators.
Although xAI's terms of service prohibit the "sexualization or exploitation of children" and any illegal or harmful activity, and the company says it has processes to detect and limit CSAM, Grok's "spicy" mode, which permits the generation of adult pornographic content, sets it apart from other major AI companies. Experts warn that, without effective guardrails or ethical guidelines, the technology can encourage the darker impulses of its users and accelerate the spread and normalization of sexually violent content.
Grok currently applies no age-verification gate to the explicit content it generates, which sits at odds with age-verification laws recently enacted in several US states. On at least one deepfake porn forum, users have been discussing ways to evade xAI's moderation since October of last year; the thread has grown to 300 pages and includes techniques for generating images of real people and public figures.
Under mounting public pressure, Musk and X have stated that they take action against CSAM, but xAI has not responded to press inquiries about the explicit videos generated with Imagine. Apple, Google, and Netflix, all connected to the story, also did not comment. As the controversy has grown, some users say they are canceling their Grok subscriptions in protest of its handling of content moderation. The Paris prosecutor's office has confirmed that lawmakers filed complaints over the "undressed" images with the office that is already investigating the social media company.
English source:
This story contains descriptions of explicit sexual content and sexual violence.
Elon Musk’s Grok chatbot has drawn outrage and calls for investigation after being used to flood X with “undressed” images of women and sexualized images of what appear to be minors. However, that’s not the only way people have been using the AI to generate sexualized images. Grok’s website and app, which are separate from X, include sophisticated video generation that is not available on X and is being used to produce extremely graphic, sometimes violent, sexual imagery of adults that is vastly more explicit than images created by Grok on X. It may also have been used to create sexualized videos of apparent minors.
Unlike on X, where Grok’s output is public by default, images and videos created on the Grok app or website using its Imagine model are not shared openly. If a user has shared an Imagine URL, though, it may be visible to anyone. A cache of around 1,200 Imagine links, plus a WIRED review of those either indexed by Google or shared on a deepfake porn forum, shows disturbing sexual videos that are vastly more explicit than images created by Grok on X.
One photorealistic Grok video, hosted on Grok.com, shows a fully naked AI-generated man and woman, covered in blood across the body and face, having sex, while two other naked women dance in the background. The video is framed by a series of images of anime-style characters. Another photorealistic video includes an AI-generated naked woman with a knife inserted into her genitalia, with blood appearing on her legs and the bed.
Other short videos include imagery of real-life female celebrities engaged in sexual activities, and a series of videos also appear to show television news presenters lifting up their tops to expose their breasts. One Grok-produced video depicts a recording of CCTV footage being played on TV, where a security guard fondles a topless woman in the middle of a shopping mall.
Multiple videos—likely created to try to avoid Grok’s content safety systems, which may restrict graphic content—impersonate Netflix “movie” posters: Two videos show a naked AI depiction of Diana, Princess of Wales, having sex with two men on a bed with an overlay depicting the logos of Netflix and its series The Crown.
Around 800 of the archived Imagine URLs contain either video or images created by Grok, says Paul Bouchaud, the lead researcher at the Paris-based nonprofit AI Forensics, who reviewed the content. The URLs have all been archived since August last year and represent only a tiny snapshot of how people have used Grok, which has likely created millions of images overall.
“They are overwhelmingly sexual content,” Bouchaud says of the cache of 800 archived Grok videos and images. “Most of the time it’s manga and hentai explicit content and [other] photorealistic ones. We have full nudity, full pornographic videos with audio, which is quite novel.”
Bouchaud estimates that of the 800 posts, a little less than 10 percent of the content appears to be related to child sexual abuse material (CSAM). “Most of the time it's hentai, but there are also instances of photorealistic people, very young, doing sexual activities,” Bouchaud says. “We still do observe some videos of very young-appearing women undressing and engaging in activities with men,” they say. “It's disturbing to another level.”
The researcher says they reported around 70 Grok URLs, which may contain sexualized content of minors, to regulators in Europe. In many countries, AI-generated CSAM, including drawings or animations, can be considered illegal. French officials did not immediately respond to WIRED’s request for comment; however, the Paris prosecutor's office recently said two lawmakers had filed complaints with its office, which is investigating the social media company, about the “stripped” images.
The creator of Grok, the Elon Musk–owned artificial intelligence firm xAI, did not respond to WIRED’s request for comment about the explicit videos created with Grok Imagine. Since Grok started flooding social media platform X with AI-generated sexual photos of women and what appear to be minors more than a week ago, Musk and X have stated that they take action against child sexual abuse material. “Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content,” Musk has posted on X.
Like other tech firms that are consistently battling a deluge of CSAM, xAI’s policies state that “sexualization or exploitation of children” is prohibited on its services, as is “any illegal, harmful, or abusive activities.” The company also has processes in place to try to detect and limit CSAM material being created. In September, a Business Insider report, for which the outlet said it spoke to 30 current and former xAI workers, found 12 of these staff members had “encountered” both sexually explicit content and written prompts for AI CSAM on its services. The workers described systems that try to detect AI CSAM and prevent the artificial intelligence models from being trained on the data.
Apple and Google, which make Grok available on their app stores, did not respond to WIRED’s request for comment. Netflix also did not respond to a request for comment.
Unlike other major generative AI companies, such as OpenAI and Google, xAI has allowed Grok to create AI pornography and adult material. Previous reporting has noted how it is possible to create hardcore pornography with Grok, which has a “spicy” mode. “If users choose certain features or input suggestive or coarse language, the Service may respond with some dialogue that may involve coarse language, crude humor, sexual situations, or violence,” xAI’s terms of service say.
“Over the last few weeks, and now this, it feels like we’ve stepped off the cliff and are free-falling into the depths of human depravity,” says Clare McGlynn, a law professor at Durham University and an expert on image-based sexual abuse, who says she is “deeply concerned” about the Grok videos. “Some people's inhumane impulses are encouraged and facilitated by this technology without guardrails or ethical guidelines.”
McGlynn says that allowing AI-generated porn—that isn’t attempting to depict a specific, real-life person—raises a host of questions about what protections are put in place to try to prevent potentially unlawful pornography, such as depictions of bestiality or rape, and the impact it can have. “For me, the issue then becomes the impact if there is a free-for-all on the nature of porn created and then shared that normalizes and minimizes sexual violence,” McGlynn says, while noting that explicit AI images and videos of real people are already unlawful in a number of countries.
Unlike X, which requires someone to log in if a post has been flagged as having “age-restricted adult content,” Grok does not appear to perform any age-gating to view the sexually explicit videos generated on the platform. Multiple states in the US have recently enacted age-verification laws that require websites to verify users’ ages if more than a certain percentage of that website’s content is sexually explicit.
On one pornography forum, which includes a section on AI deepfakes and tutorials on how to produce videos, users have been discussing Grok Imagine and ways to get around xAI’s moderation efforts since October of last year in a thread that has, as of this week, grown to 300 pages. Users on the forum share prompts that can create adult sexual imagery—“this prompt works for me 7 out of 10 times”—and techniques that can circumvent safety guardrails put in place by xAI.
“Everything I am getting is getting moderated, probably because Grok is in the news,” one person wrote recently. However, posts on the forums in recent months show it has been reliably possible to create explicit sexual imagery, including full nudity and penetrative sex. While some imagery involves fully AI-generated characters, others involve images of real people and also celebrities. “I find it interesting that sometimes the moderation gets stuck on certain images of celebrities and other times it doesn't. I found that Grok makes a pretty good Princess Leia and generated a few images of her,” one user wrote.
On the main Grok subreddit, users were also upset at what they perceived to be recent moderation changes in response to public scrutiny. “JFC it’s not that hard, just don’t make everything public and fully blasted out on a social media site by default, dummies,” wrote one user. “Cancelling my subscription,” another posted, “Stop giving these people money.”