Can AI Escape the Enshittification Trap?
内容来源:https://www.wired.com/story/can-ai-escape-enshittification-trap/
Summary:
A traveler recently vacationing in Italy used GPT-5 to plan an itinerary and received a recommendation for a Rome restaurant called "Babette." After dining there, the visitor described the meal as one of the most memorable in memory. The AI arrived at its recommendation by synthesizing rave reviews from locals, endorsements from food bloggers, and coverage in the Italian press, along with the restaurant's celebrated blend of traditional Roman and contemporary cooking.
The episode demonstrates AI's potential as an honest information broker, but it also raises a question for the industry: as tech giants pour hundreds of billions of dollars into AI, how can they avoid repeating the degradation that has befallen today's internet platforms? Writer Cory Doctorow's theory of "enshittification" holds that once platforms achieve market dominance, they degrade their services — inserting ads, lowering quality — to maximize profit. The term so precisely captured the state of the digital ecosystem that the American Dialect Society named it the 2023 Word of the Year.
AI is currently in its "good to the users" phase, but pressure to repay massive capital investments could bring disguised monetization. Early signs are visible: OpenAI has announced a shopping deal with Walmart, and the AI search platform Perplexity has begun trialing clearly labeled ads. Doctorow warns that because large language models are "black boxes," covert downgrades to their service quality will be even harder to detect. Although OpenAI's CEO has pledged to balance commercial interests against user experience carefully, GPT-5's own analysis concedes that, absent effective checks, the enshittification framework maps squarely onto AI's trajectory.
With streaming platforms raising prices in unison and a game-engine maker abruptly imposing runtime fees as cautionary precedents, consumers worry they may one day have to pay more just to keep today's level of AI service. This test of technical ethics and commercial conduct now hangs over AI's explosive growth like a sword of Damocles.
Original English text:
I recently vacationed in Italy. As one does these days, I ran my itinerary past GPT-5 for sightseeing suggestions and restaurant recommendations. The bot reported that the top choice for dinner near our hotel in Rome was a short walk down Via Margutta. It turned out to be one of the best meals I can remember. When I got home, I asked the model how it chose that restaurant, which I hesitate to reveal here in case I want a table sometime in the future (Hell, who knows if I’ll even return: It is called Babette. Call ahead for reservations.) The answer was complex and impressive. Among the factors were rave reviews from locals, notices in food blogs and the Italian press, and the restaurant’s celebrated combination of Roman and contemporary cooking. Oh, and the short walk.
Something was required from my end as well: trust. I had to buy into the idea that GPT-5 was an honest broker, picking my restaurant without bias; that the restaurant wasn’t shown to me as sponsored content and wasn’t getting a cut of my check. I could have done deep research on my own to double-check the recommendation (I did look up the website), but the point of using AI is to bypass that friction.
The experience bolstered my confidence in AI results but also made me wonder: As companies like OpenAI get more powerful, and as they try to pay back their investors, will AI be prone to the erosion of value that seems endemic to the tech apps we use today?
Word Play
Writer and tech critic Cory Doctorow calls that erosion “enshittification.” His premise is that platforms like Google, Amazon, Facebook, and TikTok start out aiming to please users, but once the companies vanquish competitors, they intentionally become less useful to reap bigger profits. After WIRED republished Doctorow’s pioneering 2022 essay about the phenomenon, the term entered the vernacular, mainly because people recognized that it was totally on the mark. Enshittification was chosen as the American Dialect Society’s 2023 Word of the Year. The concept has been cited so often that it transcends its profanity, appearing in venues that normally would hold their noses at such a word. Doctorow just published an eponymous book on the subject; the cover image is the emoji for … guess what.
If chatbots and AI agents become enshittified, it could be worse than Google Search becoming less useful, Amazon results getting plagued with ads, and even Facebook showing less social content in favor of anger-generating clickbait.
AI is on a trajectory to be a constant companion, giving one-shot answers to many of our requests. People already rely on it to help interpret current events and get advice on all sorts of buying choices—and even life choices. Because of the massive costs of creating a full-blown AI model, it’s fair to assume that only a few companies will dominate the field. All of them plan to spend hundreds of billions of dollars over the next few years to improve their models and get them into the hands of as many people as possible. Right now, I’d say AI is in what Doctorow calls the “good to the users” stage. But the pressure to make back the massive capital investments will be tremendous—especially for companies whose user base is locked in. Those conditions, as Doctorow writes, allow companies to abuse their users and business customers “to claw back all the value for themselves.”
When one imagines the enshittification of AI, the first thing that comes to mind is advertising. The nightmare is that AI models will make recommendations based on which companies have paid for placement. That’s not happening now, but AI firms are actively exploring the ad space. In a recent interview, OpenAI CEO Sam Altman said, “I believe there probably is some cool ad product we can do that is a net win to the user and a sort of positive to our relationship with the user.” Meanwhile, OpenAI just announced a deal with Walmart so the retailer’s customers can shop inside the ChatGPT app. Can’t imagine a conflict there! The AI search platform Perplexity has a program where sponsored results appear in clearly labeled follow-ups. But, it promises, “these ads will not change our commitment to maintaining a trusted service that provides you with direct, unbiased answers to your questions.”
Will those boundaries hold? Perplexity spokesperson Jesse Dwyer tells me, “For us, the number one guarantee is that we won’t let it.” And at OpenAI’s recent developer day, Altman said that the company is “hyper aware of the need to be very careful” about serving its users rather than serving itself. The Doctorow doctrine doesn’t put much credence in statements like that: “Once a company can enshittify its products, it will face the perennial temptation to enshittify its products,” he writes in his book.
Putting ads in chatbot conversations or in search results is not the only way that AI can become enshittified. Doctorow cites examples where companies, once they dominate a market, change their business model and fees. For instance, in 2023, Unity, the most popular provider of videogame development tools, decided to charge a new “runtime fee.” That misbehavior was so egregious that users revolted and got the fee walked back. But look at what has happened to streaming services like Amazon Prime Video: It used to be an ad-free service. Now it makes you watch commercials before and during the movie. You have to pay to turn them off. Oh, and the price of Amazon Prime keeps rising. So it might be standard big-tech practice to lock users into a service and then charge ever higher fees. It could even be that in order to maintain the same level of intelligence in a chatbot’s results, users one day might have to upgrade to a higher, even more expensive tier—another enshittification trick. Maybe companies that once promised that your chatbot activities would not be used to train future models will change their minds about that—simply because they can get away with it.
Cory Speaks
Doctorow didn’t address AI in his book, so I gave him a call to see whether he thinks the category is destined to travel down defecation row. I expected that he might outline the various ways that AI companies will fall prey to his smelly syndrome. To my surprise, he had a different take. He is not a fan of AI, and he claims the field has not even reached the “good to users” stage I outlined earlier. Nonetheless, he says, it could be that the enshittification process happens anyway. Because it’s so hard to see what goes on inside the “black boxes” of LLMs, he says, “they have an ability to disguise their enshittifying in a way that would allow them to get away with an awful lot.” Most of all, he says, the “terrible economics” of the field demand that the companies can’t afford to wait and will enshittify even before they deliver value. “I think they’ll try every sweaty gambit you can imagine as the economics circle the drain,” he said.
I disagree with Doctorow about the value of AI. Hey, it found Babette for me! But I do fear that the technology might be prone to the enshittification process that he unerringly identified in the current tech giants. And guess what—GPT-5 agrees with me. When I posed the question to the chatbot, it replied, “Doctorow’s ‘enshittification’ framework (platforms start good for users, then shift value to business customers, then extract it for themselves) maps disturbingly well onto AI systems if incentives go unchecked.” GPT-5 then proceeded to lay out a number of methods by which AI companies could degrade their products for profit and power. AI companies might assure us they won’t enshittify. But their own products have already written the blueprint.
This is an edition of Steven Levy’s Backchannel newsletter. Read previous newsletters here.