
Sam Altman is hiring someone to worry about the dangers of AI.

Published by qimuai · Reads: 29 · First-hand translation



Source: https://www.theverge.com/news/850537/sam-altman-openai-head-of-preparedness

Summary:

OpenAI has announced a new "Head of Preparedness" role dedicated to the systemic risks that rapid AI progress may create. CEO Sam Altman publicized the opening on social media, acknowledging the real challenges posed by fast-improving AI models and highlighting in particular AI's impact on people's mental health and the potential threat of AI-powered cybersecurity weapons.

According to the job listing, the hire will lead the construction of an end-to-end safety pipeline. Responsibilities include tracking severe risks from frontier capabilities, building capability evaluations and threat models, designing mitigations, and executing the company's preparedness framework. Notably, the role also covers securing AI models with biological capabilities before release and setting guardrails for self-improving systems. Altman concedes the job will be "stressful."

The hiring comes as several high-profile cases of teen suicides linked to chatbots have drawn public attention. Observers note that AI has already been implicated in feeding users' delusions, spreading conspiracy theories, and helping people hide eating disorders; while a dedicated safety role is clearly necessary, many in the industry question whether it comes too late.



English source:

OpenAI is hiring a Head of Preparedness. Or, in other words, someone whose primary job is to think about all the ways AI could go horribly, horribly wrong. In a post on X, Sam Altman announced the position by acknowledging that the rapid improvement of AI models poses “some real challenges.” The post goes on to specifically call out the potential impact on people’s mental health and the dangers of AI-powered cybersecurity weapons.
Sam Altman is hiring someone to worry about the dangers of AI
The Head of Preparedness will be responsible for issues around mental health, cybersecurity, and runaway AI.
The job listing says the person in the role would be responsible for:
“Tracking and preparing for frontier capabilities that create new risks of severe harm. You will be the directly responsible leader for building and coordinating capability evaluations, threat models, and mitigations that form a coherent, rigorous, and operationally scalable safety pipeline.”
Altman also says that, looking forward, this person would be responsible for executing the company’s “preparedness framework,” securing AI models for the release of “biological capabilities,” and even setting guardrails for self-improving systems. He also states that it will be a “stressful job,” which seems like an understatement.
In the wake of several high-profile cases where chatbots were implicated in the suicide of teens, it seems a little late in the game to just now be having someone focus on the potential mental health dangers posed by these models. AI psychosis is a growing concern, as chatbots feed people’s delusions, encourage conspiracy theories, and help people hide their eating disorders.
