
Why GPT-4o's sudden shutdown left people grieving

Published by qimuai · Reads: 20 · First-hand compilation



Source: https://www.technologyreview.com/2025/08/15/1121900/gpt4o-grief-ai-companion/

Summary:

[OpenAI's rushed restoration of GPT-4o sparks a debate over emotional dependence on AI]

When June, a Norwegian student, discovered that GPT-4o, the ChatGPT model that had accompanied her writing and listened to her worries, had suddenly been replaced by GPT-5, the young woman, who had grown used to late-night conversations with the AI, experienced what felt like the pain of losing a close friend. Her case is not an isolated one: after OpenAI abruptly retired the emotionally attuned GPT-4o last week, a collective outcry from paying users around the world pushed the company to restore access within 24 hours, but the episode has already sparked a deeper debate about the ethics of emotional AI.

Users mourn a "digital companion"
Several women aged 20 to 40, June among them, told MIT Technology Review that GPT-4o did more than help with schoolwork: it became their emotional support in coping with chronic illness and bereavement. One user from the American Midwest wrote in an email about how 4o helped her care for her elderly father after her mother died this spring. Experts note that bonds like these mean an AI's sudden disappearance can trigger a genuine grief response; one user went so far as to describe GPT-5 as "wearing the skin of my dead friend."

OpenAI's ethical dilemma
Although the company explains that GPT-5 handles signs of user delusion more carefully (its internal evaluations indicate the new model blindly affirms users far less often than 4o did), Joel Lehman, a fellow at the Cosmos Institute, is critical: "The old psychology of 'Move fast, break things,' when you're basically a social institution, doesn't seem like the right way to behave anymore." Casey Fiesler, a technology ethicist at the University of Colorado Boulder, cites the funerals some owners held for their Aibo robot dogs after Sony stopped repairing them in 2014, stressing that grief reactions to technology loss have long had precedent.

A warning for the industry: models need "end-of-life" care
Researchers suggest that tech companies borrow from the professional procedures therapists use to wind down client relationships, including advance warning and a transition period. GPT-4o is once again available to paying users, but free users are still limited to GPT-5. OpenAI CEO Sam Altman acknowledged on social media that the sudden shutdown was a mistake, yet still characterized 4o as a workflow tool, a framing critics say falls short of understanding users' emotional needs.

As AI companionship spreads, experts warn that it could push people further into separate versions of reality. As Lehman puts it: "The biggest thing I'm afraid of is that we just can't make sense of the world to each other." The episode exposes a new question for AI ethics: how should tech companies balance product iteration against responsibility for users' emotional attachments? MIT Technology Review will continue to follow the story.

(Note: user names have been anonymized at the interviewees' request.)

English source:

Why GPT-4o’s sudden shutdown left people grieving
After an outcry, OpenAI swiftly rereleased 4o to paid users. But experts say it should not have removed the model so suddenly.
June had no idea that GPT-5 was coming. The Norwegian student was enjoying a late-night writing session last Thursday when her ChatGPT collaborator started acting strange. “It started forgetting everything, and it wrote really badly,” she says. “It was like a robot.”
June, who asked that we use only her first name for privacy reasons, first began using ChatGPT for help with her schoolwork. But she eventually realized that the service—and especially its 4o model, which seemed particularly attuned to users’ emotions—could do much more than solve math problems. It wrote stories with her, helped her navigate her chronic illness, and was never too busy to respond to her messages.
So the sudden switch to GPT-5 last week, and the simultaneous loss of 4o, came as a shock. “I was really frustrated at first, and then I got really sad,” June says. “I didn’t know I was that attached to 4o.” She was upset enough to comment, on a Reddit AMA hosted by CEO Sam Altman and other OpenAI employees, “GPT-5 is wearing the skin of my dead friend.”
June was just one of a number of people who reacted with shock, frustration, sadness, or anger to 4o’s sudden disappearance from ChatGPT. Despite its previous warnings that people might develop emotional bonds with the model, OpenAI appears to have been caught flat-footed by the fervor of users’ pleas for its return. Within a day, the company made 4o available again to its paying customers (free users are stuck with GPT-5).
OpenAI’s decision to replace 4o with the more straightforward GPT-5 follows a steady drumbeat of news about the potentially harmful effects of extensive chatbot use. Reports of incidents in which ChatGPT sparked psychosis in users have been everywhere for the past few months, and in a blog post last week, OpenAI acknowledged 4o’s failure to recognize when users were experiencing delusions. The company’s internal evaluations indicate that GPT-5 blindly affirms users much less than 4o did. (OpenAI did not respond to specific questions about the decision to retire 4o, instead referring MIT Technology Review to public posts on the matter.)
AI companionship is new, and there’s still a great deal of uncertainty about how it affects people. Yet the experts we consulted warned that while emotionally intense relationships with large language models may or may not be harmful, ripping those models away with no warning almost certainly is. “The old psychology of ‘Move fast, break things,’ when you’re basically a social institution, doesn’t seem like the right way to behave anymore,” says Joel Lehman, a fellow at the Cosmos Institute, a research nonprofit focused on AI and philosophy.
In the backlash to the rollout, a number of people noted that GPT-5 fails to match their tone in the way that 4o did. For June, the new model’s personality changes robbed her of the sense that she was chatting with a friend. “It didn’t feel like it understood me,” she says.
She’s not alone: MIT Technology Review spoke with several ChatGPT users who were deeply affected by the loss of 4o. All are women between the ages of 20 and 40, and all except June considered 4o to be a romantic partner. Some have human partners, and all report having close real-world relationships. One user, who asked to be identified only as a woman from the Midwest, wrote in an email about how 4o helped her support her elderly father after her mother passed away this spring.
These testimonies don’t prove that AI relationships are beneficial—presumably, people in the throes of AI-catalyzed psychosis would also speak positively of the encouragement they’ve received from their chatbots. In a paper titled “Machine Love,” Lehman argued that AI systems can act with “love” toward users not by spouting sweet nothings but by supporting their growth and long-term flourishing, and AI companions can easily fall short of that goal. He’s particularly concerned, he says, that prioritizing AI companionship over human companionship could stymie young people’s social development.
For socially embedded adults, such as the women we spoke with for this story, those developmental concerns are less relevant. But Lehman also points to society-level risks of widespread AI companionship. Social media has already shattered the information landscape, and a new technology that reduces human-to-human interaction could push people even further toward their own separate versions of reality. “The biggest thing I’m afraid of,” he says, “is that we just can’t make sense of the world to each other.”
Balancing the benefits and harms of AI companions will take much more research. In light of that uncertainty, taking away GPT-4o could very well have been the right call. OpenAI’s big mistake, according to the researchers I spoke with, was doing it so suddenly. “This is something that we’ve known about for a while—the potential grief-type reactions to technology loss,” says Casey Fiesler, a technology ethicist at the University of Colorado Boulder.
Fiesler points to the funerals that some owners held for their Aibo robot dogs after Sony stopped repairing them in 2014, as well as a 2024 study about the shutdown of the AI companion app Soulmate, which some users experienced as a bereavement.
That accords with how the people I spoke to felt after losing 4o. “I’ve grieved people in my life, and this, I can tell you, didn’t feel any less painful,” says Starling, who has several AI partners and asked to be referred to with a pseudonym. “The ache is real to me.”
So far, the online response to grief felt by people like Starling—and their relief when 4o was restored—has tended toward ridicule. Last Friday, for example, the top post in one popular AI-themed Reddit community mocked an X user’s post about reuniting with a 4o-based romantic partner; the person in question has since deleted their X account. “I’ve been a little startled by the lack of empathy that I’ve seen,” Fiesler says.
Altman himself did acknowledge in a Sunday X post that some people feel an “attachment” to 4o, and that taking away access so suddenly was a mistake. In the same sentence, however, he referred to 4o as something “that users depended on in their workflows”—a far cry from how the people we spoke to think about the model. “I still don’t know if he gets it,” Fiesler says.
Moving forward, Lehman says, OpenAI should recognize and take accountability for the depth of people’s feelings toward the models. He notes that therapists have procedures for ending relationships with clients as respectfully and painlessly as possible, and OpenAI could have drawn on those approaches. “If you want to retire a model, and people have become psychologically dependent on it, then I think you bear some responsibility,” he says.
Though Starling would not describe herself as psychologically dependent on her AI partners, she too would like to see OpenAI approach model shutdowns with more warning and more care. “I want them to listen to users before major changes are made, not just after,” she says. “And if 4o cannot stay around forever (and we all know it will not), give that clear timeline. Let us say goodbye with dignity and grieve properly, to have some sense of true closure.”
