
Beware coworkers who churn out AI-generated "workslop."

Published by qimuai · Original translation



Source: https://techcrunch.com/2025/09/27/beware-coworkers-who-produce-ai-generated-workslop/

Summary:

Researchers at the consulting firm BetterUp Labs, together with the Stanford Social Media Lab, recently wrote in the Harvard Business Review to introduce a new term, "workslop": AI-generated content that looks polished but lacks real substance. Such content rarely moves work forward; instead, missing information or lost context tends to add to a team's burden.

The researchers suggest that workslop may help explain why the 95% of organizations that have experimented with AI report no return on the investment. Its insidious harm lies in shifting the burden of work downstream, forcing colleagues to spend time correcting or redoing it. An ongoing survey of 1,150 full-time U.S. employees found that about 40% of respondents had received this kind of low-value content in the past month.

To counter the problem, the researchers advise that workplace leaders lead by example, modeling AI use that is purposeful and deliberate, and set clear norms and guardrails for their teams, so that AI becomes a genuine productivity tool rather than a new burden.

Full translation:

Researchers at the consulting firm BetterUp Labs, in collaboration with the Stanford Social Media Lab, have coined a new term for low-quality, AI-generated content: "workslop." As defined in an article published this week in the Harvard Business Review, workslop is "AI generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task."

BetterUp Labs researchers suggest that workslop could be one explanation for why 95% of organizations that have tried AI report seeing zero return on the investment. Workslop, they write, is often "unhelpful, incomplete, or missing crucial context," which only creates more work for everyone else. "The insidious effect of workslop is that it shifts the burden of the work downstream, requiring the receiver to interpret, correct, or redo the work."

The researchers' ongoing survey of 1,150 full-time, U.S.-based employees found that 40% of respondents said they had received workslop in the past month. To avoid this, the researchers advise that workplace leaders must "model thoughtful AI use that has purpose and intention" and "set clear guardrails for your teams around norms and acceptable use."

English source:

Researchers at consulting firm BetterUp Labs, in collaboration with Stanford Social Media Lab, have coined a new term to describe low-quality, AI-generated work: “workslop.”
As defined in an article published this week in the Harvard Business Review, workslop is “AI generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task.”
BetterUp Labs researchers suggest that workslop could be one explanation for the 95% of organizations that have tried AI but report seeing zero return on that investment. Workslop, they write, can be “unhelpful, incomplete, or missing crucial context,” which just creates more work for everyone else.
“The insidious effect of workslop is that it shifts the burden of the work downstream, requiring the receiver to interpret, correct, or redo the work,” they write.
The researchers also conducted an ongoing survey of 1,150 full-time, U.S.-based employees, with 40% of respondents saying they’d received workslop in the past month.
To avoid this, the researchers say workplace leaders must “model thoughtful AI use that has purpose and intention” and “set clear guardrails for your teams around norms and acceptable use.”



