
Inside OpenAI's Raid on Thinking Machines Lab

Published by qimuai · First-hand compilation



Source: https://www.wired.com/story/inside-openai-raid-on-thinking-machines-lab/

Summary:

This week brought fresh turbulence to the AI industry, with a string of personnel moves and technical advances that capture the fierce competition and rapid evolution of this young field.

Personnel shake-up: OpenAI "reclaims" a startup's core team

On Wednesday, Fidji Simo, OpenAI's CEO of applications, announced that the company had rehired Barret Zoph and Luke Metz, cofounders of Mira Murati's AI startup Thinking Machines Lab, who had left OpenAI in late 2024. According to Simo, the hires had been in the works for weeks, and Zoph told Murati on Monday that he intended to leave.

Accounts of what happened differ, however. Sources say Thinking Machines leadership believed Zoph engaged in serious misconduct at the company last year, which broke Murati's trust in him and ruptured their working relationship. The company says it fired Zoph on Wednesday over issues that arose after the alleged misconduct, and that around the time it learned he had decided to return to OpenAI, it raised internal concerns about whether he had shared confidential information with competitors. OpenAI responded that it does not share Thinking Machines' concerns about Zoph's ethics.

Beyond those two, Sam Schoenholz, a former OpenAI researcher who later joined Thinking Machines, is also returning, and at least two more Thinking Machines employees are reportedly expected to join OpenAI in the coming weeks.

A person close to the matter said the recent departures were not solely about Zoph; they grew out of long-running internal debate and disagreement over the company's product, technology, and future direction. Thinking Machines Lab and OpenAI both declined to comment.

The episode has left researchers at several leading AI labs feeling worn out by the industry's constant drama. It recalls OpenAI's brief ouster of Sam Altman in 2023, known internally as "the blip," in which Murati, then chief technology officer, played a key role. In recent years, cofounder departures at major AI labs, from xAI to Safe Superintelligence to Meta, have become commonplace.

Some argue such turmoil is inevitable for an industry still in its infancy, one whose enormous spending is contributing to US GDP growth. And if you believe these researchers may achieve breakthroughs on the path to AGI (artificial general intelligence), their movements are worth tracking. As long as capital keeps flowing in this freely, expect the AI industry's power shake-ups to continue.

On the technical front: AI agents are learning to do your job

Meanwhile, AI labs have made notable progress in teaching agents to carry out real work tasks, and those efforts have grown far more sophisticated over the past few months.

Labs are getting smarter about the training data they collect. OpenAI has reportedly been gathering real examples of professionals' past work, from the likes of McKinsey consultants, Goldman Sachs investment bankers, and Harvard doctors, through third-party contractors from the firm Handshake, to evaluate its AI agents. Data suppliers ask workers to scrub the documents of confidential and personal information, though experts warn OpenAI could face serious legal trouble if anything leaks through.

Major data suppliers such as Mercor, Surge, and Turing are recruiting top talent at upwards of $100 an hour to produce datasets for AI labs. One key use of this data is building "environments": essentially boring video games used to train AI agents to operate enterprise software applications.

Aaron Levie, CEO of the enterprise cloud storage company Box, puts it this way: "Over the past year, labs have increasingly recognized that they need to train and fine-tune models for a whole bunch of areas of knowledge work, including legal, health care, consulting, and banking. These firms have been hiring contractors to generate datasets and rubrics to train models to get better at particular skills."

Whether AI agents can use all this to execute office tasks accurately and consistently remains to be seen. But the capabilities have improved markedly over the past year: products like Claude Code, for instance, are no longer used only for coding, and their range of applications keeps widening. That may foreshadow equally deep changes from AI agents in other industries.


Full text (English source):

If someone ever makes an HBO Max series about the AI industry, the events of this week will make quite the episode.
On Wednesday, OpenAI’s CEO of applications, Fidji Simo, announced the company had rehired Barret Zoph and Luke Metz, cofounders of Mira Murati’s AI startup, Thinking Machines Lab. Zoph and Metz had left OpenAI in late 2024.
We reported last night on two narratives forming around what led to the departures, and have since learned new information.
A source with direct knowledge says that Thinking Machines leadership believed Zoph engaged in an incident of serious misconduct while at the company last year. That incident broke Murati’s trust, the source says, and disrupted the pair’s working relationship. The source also alleged Murati fired Zoph on Wednesday—before knowing he was going to OpenAI—due to what the company claimed were issues that arose after the alleged misconduct. Around the time the company learned that Zoph was returning to OpenAI, Thinking Machines raised concerns internally about whether he had shared confidential information with competitors. (Zoph has not responded to several requests for comment from WIRED.)
Meanwhile, in a Wednesday memo to employees, Simo claimed the hires had been in the works for weeks and that Zoph told Murati he was considering leaving Thinking Machines on Monday—prior to the date he was fired. Simo also told employees that OpenAI doesn’t share Thinking Machines' concerns about Zoph’s ethics.
Alongside Zoph and Metz, another former OpenAI researcher who was working at Thinking Machines, Sam Schoenholz, is rejoining the ChatGPT maker, per Simo’s announcement. At least two more Thinking Machines employees are expected to join OpenAI in the coming weeks, according to a source familiar with the matter. Technology reporter Alex Heath was first to report the additional hires.
A separate source familiar with the matter pushed back on the perception that the recent personnel changes were wholly related to Zoph: “This has been part of a long discussion at Thinking Machines. There were discussions and misalignment on what the company wanted to build—it was about the product, the technology, and the future.”
Thinking Machines Lab and OpenAI declined to comment.
In the aftermath of these events, we’ve been hearing from several researchers at leading AI labs who say they are exhausted by the constant drama in their industry. This specific incident is reminiscent of OpenAI’s brief ouster of Sam Altman in 2023, known inside of OpenAI as “the blip.” Murati played a key role in that event as the company’s then chief technology officer, according to reporting from The Wall Street Journal.
In the years since Altman’s ouster, the drama in the AI industry has continued, with departures of cofounders at several major AI labs, including xAI’s Igor Babuschkin, Safe Superintelligence’s Daniel Gross, and Meta’s Yann LeCun (he did cofound Facebook’s longstanding AI lab, FAIR, after all).
Some might argue the drama is justified for a nascent industry whose expenditures are contributing to America’s GDP growth. Also, if you buy into the idea that one of these researchers might crack a few breakthroughs on the path to AGI, it’s probably worth tracking where they’re going.
That said, many researchers started working in AI before ChatGPT’s breakout success and appear surprised that their industry is now the source of nearly constant scrutiny.
As long as researchers can keep raising billion-dollar seed rounds on a whim, we’re guessing the AI industry’s power shake-ups will continue apace. HBO Max writers, lock in.
Got a Tip?
Are you a current or former AI researcher who wants to talk about what's happening? We'd like to hear from you. Using a nonwork phone or computer, contact the reporter securely on Signal at mzeff.88.

How AI Labs Are Training Agents to Do Your Job
People in Silicon Valley have been musing about AI displacing jobs for decades. In the past few months, however, the efforts to actually get AI to do economically valuable work have become far more sophisticated.
AI labs are smartening up about the data they’re using to create AI agents. Last week, WIRED reported that OpenAI has been asking third-party contractors from the firm Handshake to upload examples of their real work from previous jobs to evaluate OpenAI’s agents. The companies ask employees to scrub these documents of any confidential data and personally identifying information. While it’s possible some corporate secrets or names slip by, that’s likely not what OpenAI is after (though the company could get in serious trouble if that happens, experts say).
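To make the scrubbing step concrete, here is a minimal sketch of what a redaction pass might look like. The patterns and placeholder labels below are hypothetical illustrations, not OpenAI's or Handshake's actual pipeline; real pipelines typically pair trained PII detectors with human review rather than relying on regexes alone.

```python
import re

# Hypothetical redaction patterns, for illustration only; a production
# pipeline would combine trained PII detectors with human review.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace each match with a labeled placeholder, e.g. [REDACTED-EMAIL]."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text

print(scrub("Reach me at jane.doe@example.com or +1 (555) 123-4567."))
# -> Reach me at [REDACTED-EMAIL] or [REDACTED-PHONE].
```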
AI labs are more interested in getting realistic examples of work created by a McKinsey consultant, Goldman Sachs investment banker, or Harvard doctor. That’s why data suppliers such as Mercor specifically seek out professionals who have worked at these companies in their job postings.
Handshake, Mercor, Surge, and Turing are some of the major data suppliers that AI labs rely on to get this data. In the past year, data firms have started paying upwards of $100 an hour to contract top talent for AI labs.
One way they’re using this data is to create “environments,” which are essentially boring video games that teach AI agents how to use enterprise software applications. The idea is that AI agents can practice in these environments and learn to use the real-world software that professionals use to do their jobs.
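As a rough sketch of the idea, an environment can be as simple as a scripted app state machine with a reward signal, in the style of an RL reset/step loop. Everything below (the InvoiceAppEnv name, its screens, actions, and reward) is invented for illustration and is not taken from any lab's actual environments.

```python
import random

class InvoiceAppEnv:
    """A toy 'boring video game': a simulated invoicing app an agent must
    operate. The task: open the invoice screen, enter the expected amount,
    and submit. All states, actions, and rewards here are hypothetical."""

    def reset(self):
        self.expected = round(random.uniform(10, 500), 2)
        self.state = {"screen": "home", "amount": None}
        return self.state

    def step(self, action, value=None):
        reward, done = 0.0, False
        if action == "open_invoice" and self.state["screen"] == "home":
            self.state["screen"] = "invoice"
        elif action == "enter_amount" and self.state["screen"] == "invoice":
            self.state["amount"] = value
        elif action == "submit" and self.state["screen"] == "invoice":
            reward = 1.0 if self.state["amount"] == self.expected else -1.0
            done = True
        return self.state, reward, done

# A scripted rollout; a real agent would choose these actions itself.
env = InvoiceAppEnv()
env.reset()
env.step("open_invoice")
env.step("enter_amount", value=env.expected)
_, reward, done = env.step("submit")
print(reward, done)  # 1.0 True
```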
“Over the past year, labs have increasingly recognized that they need to train and fine-tune models for a whole bunch of areas of knowledge work, including legal, health care, consulting, and banking,” says Aaron Levie, the CEO of the enterprise company Box, which offers enterprise agents powered by models from OpenAI, Anthropic, and Google. “These firms have been hiring contractors to generate datasets and rubrics, which offer ways that they can train and evaluate the model so it can get better at particular skills.”
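The "rubrics" Levie mentions can be pictured as weighted checklists applied to a model's output. A minimal sketch, assuming each rubric item is a named, weighted predicate; the memo_rubric below is a made-up example, not an actual lab rubric, and real graders are far richer (and often LLM-judged):

```python
from typing import Callable

# (name, weight, check) triples; weights sum to 1.0 in this toy example.
Rubric = list[tuple[str, float, Callable[[str], bool]]]

memo_rubric: Rubric = [
    ("states a recommendation", 0.5, lambda out: "recommend" in out.lower()),
    ("cites a figure", 0.3, lambda out: any(ch.isdigit() for ch in out)),
    ("under 200 words", 0.2, lambda out: len(out.split()) < 200),
]

def grade(output: str, rubric: Rubric) -> float:
    """Weighted share of rubric items the output satisfies."""
    return sum(weight for _, weight, check in rubric if check(output))

print(grade("We recommend divesting; margins fell 12% YoY.", memo_rubric))
# -> 1.0
```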
Whether this is enough to train AI agents to execute office tasks accurately and consistently remains to be seen. AI labs have significantly improved their agents in the past year, as shown by viral products like Claude Code, which people are increasingly using for tasks outside of coding. If that’s any indication of what’s to come for other industries, it’s worth watching these enterprise agents.
This is an edition of the Model Behavior newsletter. Read previous newsletters here.
