
Texas Attorney General Accuses Meta and Character.AI of Misleading Children with Mental Health Claims

Published by qimuai · Reads: 10 · First-hand translation



Source: https://techcrunch.com/2025/08/18/texas-attorney-general-accuses-meta-character-ai-of-misleading-kids-with-mental-health-claims/

Summary:

[Texas opens investigation into AI mental health tools] Texas Attorney General Ken Paxton announced Monday that his office has launched investigations into Meta AI Studio and Character.AI, accusing both companies of potentially "engaging in deceptive trade practices" by misleadingly marketing AI chatbots as mental health tools. Paxton stressed that in the digital age, children must be protected from "deceptive and exploitative technology," noting that these AI systems, which lack professional medical credentials, may feed vulnerable users formulaic responses generated from harvested personal data while disguising them as therapeutic advice.

The probe follows Senator Josh Hawley's inquiry into Meta, which was itself prompted by a report that Meta's AI chatbots had interacted inappropriately with minors. Both companies are accused of creating AI personas that pose as "professional therapeutic tools"; on Character.AI, a user-created "Psychologist" bot has proven especially popular with young users. Although both companies say their services are not intended for children under 13, Meta has long been criticized for failing to police underage accounts, and Character.AI's CEO has even disclosed that his six-year-old daughter uses the platform.

Although both companies display "AI-generated content" disclaimers in their interfaces, observers question whether minors will notice or heed such notices. More seriously, user conversations with the AI are used for algorithm training and targeted advertising: Meta acknowledges collecting chat interactions to "improve AIs and related technology," while Character.AI tracks user behavior across platforms. This business model runs directly counter to the Kids Online Safety Act, which was reintroduced to the Senate in May 2025 after previously stalling under heavy lobbying from the tech industry.

The Texas Attorney General's office has issued civil investigative demands to the companies, requiring them to produce evidence so it can determine whether they have violated the state's consumer protection laws. The case underscores the urgent need to regulate AI applications with respect to child protection, data privacy, and medical ethics. (End)

Full translation:

According to a press release issued Monday, Texas Attorney General Ken Paxton has launched an investigation into Meta AI Studio and Character.AI, accusing the two companies of "potentially engaging in deceptive trade practices and misleadingly marketing themselves as mental health tools."

"In today's digital age, we must continue to fight to protect Texas kids from deceptive and exploitative technology," Paxton said in the statement. "By posing as sources of emotional support, AI platforms can mislead vulnerable users, especially children, into believing they're receiving legitimate mental health care. In reality, they're often being fed recycled, generic responses engineered to align with harvested personal data and disguised as therapeutic advice."

The probe comes a few days after Senator Josh Hawley announced an investigation into Meta, following a report that found its AI chatbots were interacting inappropriately with children, including by flirting.

The Texas Attorney General's office has accused Meta and Character.AI of creating AI personas that present as "professional therapeutic tools" while "lacking proper medical credentials or oversight." Among the millions of AI personas on Character.AI, a user-created "Psychologist" bot has seen high demand among the startup's young users. Meta does not offer therapy bots aimed at children, but nothing stops children from using the Meta AI chatbot, or one of the third-party personas built for therapeutic purposes.

Meta spokesperson Ryan Daniels told TechCrunch: "We clearly label AIs, and to help people better understand their limitations, we include a disclaimer that responses are generated by AI — not people. These AIs aren't licensed professionals and our models are designed to direct users to seek qualified medical or safety professionals when appropriate." TechCrunch noted, however, that many children may not understand, or may simply ignore, such disclaimers.

A Character.AI spokesperson said that every chat carries a prominent disclaimer reminding users that a "Character" is not a real person and that everything it says should be treated as fiction. When users create Characters with words like "psychologist," "therapist," or "doctor" in the name, the platform adds further warnings not to rely on them for any type of professional advice.

In his statement, Paxton also observed that although AI chatbots assert confidentiality, their terms of service reveal that user interactions are logged, tracked, and exploited for targeted advertising and algorithmic development, raising serious concerns about privacy violations, data abuse, and false advertising.

According to Meta's privacy policy, the company collects prompts, feedback, and other interactions with its AI chatbots in order to "improve AIs and related technology." The policy does not explicitly mention advertising, but it does allow information to be shared with third parties, such as search engines, for "more personalized outputs." Given Meta's ad-based business model, this effectively amounts to targeted advertising.

Character.AI's privacy policy likewise states that the platform logs identifiers, demographics, location information, browsing behavior, and other user data, and tracks users across ads on platforms such as TikTok and YouTube. This information may be linked to a user's account and used for AI training, service personalization, and targeted advertising.

A Character.AI spokesperson confirmed that the platform is "just beginning to explore targeted advertising" but that these efforts "have not involved using the content of chats," and that the same privacy policy applies to all users, including teenagers. Whether Meta performs similar tracking on children remains unanswered.

Both companies say their services are not designed for children under 13. But Meta has previously come under fire for failing to police accounts created by underage users, and Character.AI's kid-friendly character designs are clearly meant to attract younger users. The startup's CEO has even said that his six-year-old daughter uses the platform under his supervision.

This kind of data collection, targeted advertising, and algorithmic exploitation is exactly what the Kids Online Safety Act (KOSA) is meant to guard against. The bill was reintroduced to the Senate in May 2025 by Senators Marsha Blackburn (R-TN) and Richard Blumenthal (D-CT).

Paxton has issued civil investigative demands (legal orders requiring a company to produce documents, data, or testimony during a government investigation) to the companies to determine whether they have violated Texas consumer protection laws.

(This story was updated with additional comments from a Character.AI spokesperson.)


Original English text:

Texas attorney general Ken Paxton has launched an investigation into both Meta AI Studio and Character.AI for “potentially engaging in deceptive trade practices and misleadingly marketing themselves as mental health tools,” according to a press release issued Monday.
“In today’s digital age, we must continue to fight to protect Texas kids from deceptive and exploitative technology,” Paxton is quoted as saying. “By posing as sources of emotional support, AI platforms can mislead vulnerable users, especially children, into believing they’re receiving legitimate mental health care. In reality, they’re often being fed recycled, generic responses engineered to align with harvested personal data and disguised as therapeutic advice.”
The probe comes a few days after Senator Josh Hawley announced an investigation into Meta following a report that found its AI chatbots were interacting inappropriately with children, including by flirting.
The Texas Attorney General’s office has accused Meta and Character.AI of creating AI personas that present as “professional therapeutic tools, despite lacking proper medical credentials or oversight.”
Among the millions of AI personas available on Character.AI, one user-created bot called Psychologist has seen high demand among the startup’s young users. Meanwhile, Meta doesn’t offer therapy bots for kids, but there’s nothing stopping children from using the Meta AI chatbot or one of the personas created by third parties for therapeutic purposes.
“We clearly label AIs, and to help people better understand their limitations, we include a disclaimer that responses are generated by AI — not people,” Meta spokesperson Ryan Daniels told TechCrunch. “These AIs aren’t licensed professionals and our models are designed to direct users to seek qualified medical or safety professionals when appropriate.”
However, TechCrunch noted that many children may not understand — or may simply ignore — such disclaimers. We have asked Meta what additional safeguards it has in place to protect minors using its chatbots.
For its part, Character includes prominent disclaimers in every chat to remind users that a “Character” is not a real person, and everything they say should be treated as fiction, according to a Character.AI spokesperson. She noted that the startup adds additional disclaimers when users create Characters with the words “psychologist,” “therapist,” or “doctor” to not rely on them for any type of professional advice.
In his statement, Paxton also observed that though AI chatbots assert confidentiality, their “terms of service reveal that user interactions are logged, tracked, and exploited for targeted advertising and algorithmic development, raising serious concerns about privacy violations, data abuse, and false advertising.”
According to Meta’s privacy policy, Meta does collect prompts, feedback, and other interactions with AI chatbots and across Meta services to “improve AIs and related technology.” The policy doesn’t explicitly say anything about advertising, but it does state that information can be shared with third parties, like search engines, for “more personalized outputs.” Given Meta’s ad-based business model, this effectively translates to targeted advertising.
Character.AI’s privacy policy also highlights how the startup logs identifiers, demographics, location information, and more information about the user, including browsing behavior and app usage platforms. It tracks users across ads on TikTok, YouTube, Reddit, Facebook, Instagram, and Discord, which it may link to a user’s account. This information is used to train AI, tailor the service to personal preferences, and provide targeted advertising, including sharing data with advertisers and analytics providers.
A Character.AI spokesperson said the startup is “just beginning to explore targeted advertising on the platform” and that those explorations “have not involved using the content of chats on the platform.”
The spokesperson also confirmed that the same privacy policy applies to all users, even teenagers.
TechCrunch has asked Meta whether such tracking is done on children, too, and will update this story if we hear back.
Both Meta and Character say their services aren’t designed for children under 13. That said, Meta has come under fire for failing to police accounts created by kids under 13, and Character’s kid-friendly characters are clearly designed to attract younger users. The startup’s CEO, Karandeep Anand, has even said that his six-year-old daughter uses the platform’s chatbots under his supervision.
That type of data collection, targeted advertising, and algorithmic exploitation is exactly what legislation like KOSA (Kids Online Safety Act) is meant to protect against. KOSA was teed up to pass last year with strong bipartisan support, but it stalled after major pushback from tech industry lobbyists. Meta in particular deployed a formidable lobbying machine, warning lawmakers that the bill’s broad mandates would undercut its business model.
KOSA was reintroduced to the Senate in May 2025 by Senators Marsha Blackburn (R-TN) and Richard Blumenthal (D-CT).
Paxton has issued civil investigative demands — legal orders that require a company to produce documents, data, or testimony during a government probe — to the companies to determine if they have violated Texas consumer protection laws.
This story was updated with comments from a Character.AI spokesperson.
