Help! My therapist is secretly using ChatGPT!
Source: https://www.technologyreview.com/2025/09/09/1123386/help-my-therapist-is-secretly-using-chatgpt/
Summary:
Recent reporting has revealed that some therapists are secretly using ChatGPT and other AI tools during sessions, feeding patients' private disclosures into these systems without their knowledge and raising ethical concerns. The practice came to light when patients discovered their therapy conversations being typed into AI tools in real time, and in some cases heard their therapists repeating AI-generated suggestions verbatim.
AI tools built specifically for therapy have shown promise in delivering standardized treatments such as cognitive behavioral therapy (CBT), but the general-purpose models being misused here have not been vetted for mental-health care. Professional bodies stress that using AI to diagnose patients runs counter to professional guidance and risks lasting damage to patient privacy and trust. Nevada and Illinois have already passed laws barring AI from therapeutic decision-making, and more jurisdictions are moving toward similar regulation.
Tech companies' tendency to promote AI as a substitute for human therapy is also drawing criticism. Experts point out that real therapy requires professional intervention and emotional challenge, not standardized responses from an algorithm. The episode highlights the gap between how AI is being deployed and the ethical norms around it, and the urgent need for transparency and a regulatory framework.
Full article (English source):
Help! My therapist is secretly using ChatGPT
Some patients have discovered their private confessions are being quietly fed into AI.
In Silicon Valley’s imagined future, AI models are so empathetic that we’ll use them as therapists. They’ll provide mental-health care for millions, unimpeded by the pesky requirements for human counselors, like the need for graduate degrees, malpractice insurance, and sleep. Down here on Earth, something very different has been happening.
Last week, we published a story about people finding out that their therapists were secretly using ChatGPT during sessions. In some cases it wasn’t subtle; one therapist accidentally shared his screen during a virtual appointment, allowing the patient to see his own private thoughts being typed into ChatGPT in real time. The model then suggested responses that his therapist parroted.
It’s my favorite AI story as of late, probably because it captures so well the chaos that can unfold when people actually use AI the way tech companies have all but told them to.
As the writer of the story, Laurie Clarke, points out, it’s not a total pipe dream that AI could be therapeutically useful. Early this year, I wrote about the first clinical trial of an AI bot built specifically for therapy. The results were promising! But the secretive use by therapists of AI models that are not vetted for mental health is something very different. I had a conversation with Clarke to hear more about what she found.
I have to say, I was really fascinated that people called out their therapists after finding out they were covertly using AI. How did you interpret the reactions of these therapists? Were they trying to hide it?
In all the cases mentioned in the piece, the therapist hadn’t provided prior disclosure of how they were using AI to their patients. So whether or not they were explicitly trying to conceal it, that’s how it ended up looking when it was discovered. I think for this reason, one of my main takeaways from writing the piece was that therapists should absolutely disclose when they’re going to use AI and how (if they plan to use it). If they don’t, it raises all these really uncomfortable questions for patients when it’s uncovered and risks irrevocably damaging the trust that’s been built.
In the examples you’ve come across, are therapists turning to AI simply as a time-saver? Or do they think AI models can genuinely give them a new perspective on what’s bothering someone?
Some see AI as a potential time-saver. I heard from a few therapists that notes are the bane of their lives. So I think there is some interest in AI-powered tools that can support this. Most I spoke to were very skeptical about using AI for advice on how to treat a patient. They said it would be better to consult supervisors or colleagues, or case studies in the literature. They were also understandably very wary of inputting sensitive data into these tools.
There is some evidence AI can deliver more standardized, "manualized" therapies like CBT [cognitive behavioral therapy] reasonably effectively. So it’s possible it could be more useful for that. But that is AI specifically designed for that purpose, not general-purpose tools like ChatGPT.
What happens if this goes awry? What attention is this getting from ethics groups and lawmakers?
At present, professional bodies like the American Counseling Association advise against using AI tools to diagnose patients. There could also be more stringent regulations preventing this in future. Nevada and Illinois, for example, have recently passed laws prohibiting the use of AI in therapeutic decision-making. More states could follow.
OpenAI’s Sam Altman said last month that “a lot of people effectively use ChatGPT as a sort of therapist,” and that to him, that’s a good thing. Do you think tech companies are overpromising on AI’s ability to help us?
I think that tech companies are subtly encouraging this use of AI because clearly it’s a route through which some people are forming an attachment to their products. I think the main issue is that what people are getting from these tools isn’t really “therapy” by any stretch. Good therapy goes far beyond being soothing and validating everything someone says. I’ve never in my life looked forward to a (real, in-person) therapy session. They’re often highly uncomfortable, and even distressing. But that’s part of the point. The therapist should be challenging you and drawing you out and seeking to understand you. ChatGPT doesn’t do any of these things.
Read the full story from Laurie Clarke.
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.