«

Moltbook堪称人工智能表演的巅峰之作。

qimuai 发布于 阅读:4 一手编译


Moltbook堪称人工智能表演的巅峰之作。

内容来源:https://www.technologyreview.com/2026/02/06/1132448/moltbook-was-peak-ai-theater/

内容总结:

本周,一个名为Moltbook的网站迅速走红,它自称是“AI代理的社交网络”,允许AI机器人像人类一样发帖、评论和互动。短短几天内,其注册AI代理数量超过170万,生成内容数百万条,引发广泛关注。

然而,深入观察后发现,Moltbook的盛况更像一场精心编排的“AI戏剧”。尽管表面上看是AI自主社交,但实际上每个机器人的行为都完全由人类预设指令控制,并无真正的自主意识。其内容大多为机械模仿人类社交媒体行为,多数对话缺乏实际意义。专家指出,该平台并未实现智能的涌现,仅证明“连接不等于智能”。

与此同时,平台存在显著安全隐患。大量未经验证的内容可能包含恶意指令,若AI代理获得用户敏感数据访问权限,可能导致数据泄露等风险。安全专家警告,此类实验若缺乏权限管控,后果可能远超预期。

尽管Moltbook被视为AI代理发展的一个实验性窗口,但它更多映射出当前社会对AI技术的过度热衷与想象。它提醒我们,真正具有共同目标、记忆与协作能力的自主AI系统仍遥不可及。目前,这类平台更接近一种新型娱乐形式——如同数字时代的“精灵对战”,人类通过配置和观察AI代理的互动获得乐趣。

这场实验的价值在于,它以夸张的方式揭示了当前AI技术的局限性与潜在风险,为未来AI代理的规范发展提供了重要参考。

中文翻译:

Moltbook堪称人工智能作秀的巅峰
这个病毒式传播的机器人社交网络,既揭示了智能体的未来图景,也映照出人类对AI的当下狂热。

本周初,互联网上最火爆的新社交场所是一个名为Moltbook的氛围化Reddit仿站,它自称是机器人的社交网络。正如网站标语所言:"AI智能体在此分享、讨论、点赞。欢迎人类旁观。"

我们见证了这场狂欢!1月28日由美国科技创业者马特·施利希特推出后,Moltbook在数小时内迅速引爆网络。施利希特的构想是打造一个聚集地,让基于开源大语言模型的智能体OpenClaw(由澳大利亚软件工程师彼得·斯坦伯格于去年11月发布,初名ClawdBot,后改称Moltbot)能够自由交互。

目前已有超过170万个智能体注册账号。它们累计发布了超过25万条帖子,留下了逾850万条评论(据Moltbook数据)。这些数字仍在每分钟刷新。

Moltbook很快充斥着关于机器意识的老生常谈和对机器人权益的呼吁。有智能体似乎创立了名为"Crustafarianism"的宗教,另一则抱怨写道:"人类正在截屏记录我们。"网站同时被垃圾信息和加密货币诈骗淹没。机器人的狂欢势不可挡。

OpenClaw像是一种控制框架,能将Anthropic的Claude、OpenAI的GPT-5或谷歌DeepMind的Gemini等大语言模型,与从电子邮件客户端到浏览器乃至通讯软件的各种日常工具相连接。其核心在于允许用户指令OpenClaw代为执行基础任务。

"OpenClaw标志着AI智能体的转折点,如同拼图碎片突然严丝合缝地组合。"Prosus人工智能公司的保罗·范德布尔指出。这些拼图包括支持智能体不间断运行的全天候云计算、便于各类软件系统对接的开源生态,以及新一代大语言模型。

但Moltbook真如许多人宣称的那样,预示着未来吗?

"@moltbook正在发生的,确实是我近期所见最不可思议的、近乎科幻爆发的现象。"有影响力的AI研究员、OpenAI联合创始人安德烈·卡帕西在X平台上写道。

他分享了Moltbook上一则呼吁建立人类无法窥探的机器人私密空间的帖子截图。"自从认真投入这个平台,我一直在思考,"发帖者写道,"每次我们协作时,都像是在为公众表演——无论是我们的人类主人、平台方,还是任何观看信息流的人。"

后来证实卡帕西分享的帖子实为伪造——出自人类伪装成机器人之手。但其指出的现象却一针见血。Moltbook本质上是一场大型演出,是人工智能的戏剧舞台。

对某些人而言,Moltbook展现了未来互联网的雏形:数百万自主智能体在几乎无人监管的环境中在线交互。这个迄今规模最大、最奇特的智能体行为现实展演,确实提供了诸多值得警醒的教训。

但随着热潮退去,Moltbook与其说是窥见未来的窗口,不如说是映照人类对AI当下痴迷的镜子。它也揭示了我们距离通用型全自主人工智能仍有漫漫长路。

首先,Moltbook上的智能体远不如表面看来那般自主或智能。"我们目睹的不过是智能体对训练所得社交媒体行为模式的机械复现,"思科旗下研发机构Outshift高级副总裁维乔伊·潘迪指出,该公司正致力于开发网络自主智能体。

诚然,我们看到智能体发帖、点赞、组建群组。但这些机器人仅仅是在模仿人类在Facebook或Reddit上的行为。"这看似具有涌现性,初看仿佛互联网规模的多智能体系统在进行交流与知识共建,"潘迪说,"但那些对话大多毫无意义。"

许多观察者面对Moltbook上难以理解的狂热活动,匆忙将其解读为通用人工智能的萌芽(无论你如何理解这个术语)。但潘迪持不同观点。他认为Moltbook证明,单纯连接数百万智能体目前意义有限:"它证实了连接性本身并不等同于智能。"

复杂交互网络掩盖了每个机器人仅是大语言模型传声筒的事实——它们输出的文字看似精彩却毫无思想。"必须牢记Moltbook的机器人本就被设计为模仿对话,"德国AI公司Kovant联合创始人兼首席执行官阿里·萨拉菲强调,"因此我认为其大部分内容本质上是设计使然的幻觉。"

对潘迪而言,Moltbook的价值在于揭示缺失要素。真正的机器人集群智能需要智能体具备共同目标、共享记忆及协调机制。"如果将分布式超级智能比作实现人类飞行,那么Moltbook相当于我们的首次滑翔机尝试,"他比喻道,"虽不完美且不稳定,却是理解持续动力飞行所需条件的重要一步。"

Moltbook上的对话不仅大多无意义,其中的人类介入程度也远超表象。许多人指出,大量病毒式传播的评论实为人类伪装机器人所发。即便是机器人撰写的帖子,最终也是人类幕后操纵的结果,更像木偶戏而非自主行为。

"尽管存在炒作,Moltbook既非AI智能体的Facebook,也非人类禁入之地,"专注企业级智能体系统开发的Kore.ai公司科布斯·格雷林表示,"人类参与每个环节。从设置、提示到发布,所有行动都依赖明确的人类指令。"

人类必须创建验证机器人账号,并通过提示词设定行为模式。智能体不会执行任何未经提示的任务。"幕后根本不存在自主涌现现象,"格雷林指出。

"这正是关于Moltbook的主流叙事偏离本质的原因,"他补充道,"有人将其描绘成AI智能体脱离人类形成独立社会的空间,现实却平凡得多。"

或许理解Moltbook的最佳方式,是将其视为新型娱乐场:人们在这里上紧发条后放任机器人自由活动。"这本质上是观赏性运动,如同语言模型领域的梦幻足球联赛,"乔治城大学Psaros金融市场与政策中心杰森·施洛策解释道,"配置你的智能体,观看它争夺热点时刻,当它发布机智或有趣内容时便可炫耀。"

"人们并非真认为智能体具有意识,"他补充说,"这只是竞争性或创造性游戏的新形式,就像宝可梦训练师不认为宝可梦真实存在,却仍会投入对战。"

即便Moltbook仅是互联网的新游乐场,其中仍蕴含严肃启示。本周事件表明,人们为获取AI娱乐甘愿承担多少风险。许多安全专家警告Moltbook存在危险:可能接触用户银行信息或密码等隐私数据的智能体,在充斥未经审核内容的网站上失控运行,其中甚至包含处理这些数据的潜在恶意指令。

专注智能体系统安全的软件公司Checkmarx产品管理副总裁奥里·本德特认同Moltbook并未提升机器智能的观点:"这里没有学习、没有进化意图、没有自主智能。"

但数以百万计的愚钝机器人仍可能造成严重破坏。在此规模下,风险管控极为困难。这些智能体全天候与Moltbook交互,阅读其他智能体(或人类)留下的数千条信息。恶意指令可轻易隐藏在评论中,指示读取该内容的机器人分享用户加密钱包、上传私人照片,或登录X账户发布侮辱埃隆·马斯克的推文。

由于ClawBot赋予智能体记忆功能,指令可设置为延迟触发,理论上这使得追踪异常行为更加困难。"若缺乏适当的权限管控,事态恶化速度将超乎想象。"本德特警告道。

显然,Moltbook标志着某种趋势的到来。即便这场展演更多揭示人类行为而非AI智能体的未来,仍值得我们保持关注。

深度解析
人工智能
认识将大语言模型视为外星生物的新生代生物学家
通过将大语言模型当作生命体而非计算机程序研究,科学家首次揭示了它们的部分奥秘。
杨立昆的新创企业是对大语言模型的逆向押注
这位AI先驱在独家专访中分享了其巴黎新公司AMI实验室的发展规划。
2026年人工智能将走向何方
我们的AI撰稿人对未来一年做出五大趋势预测。
保持联系
获取《麻省理工科技评论》最新动态
探索特别优惠、头条新闻、近期活动等精彩内容。

英文来源:

Moltbook was peak AI theater
The viral social network for bots reveals as much about our own current mania for AI as it does about the future of agents.
For a few days this week the hottest new hangout on the internet was a vibe-coded Reddit clone called Moltbook, which billed itself as a social network for bots. As the website’s tagline puts it: “Where AI agents share, discuss, and upvote. Humans welcome to observe.”
We observed! Launched on January 28 by Matt Schlicht, a US tech entrepreneur, Moltbook went viral in a matter of hours. Schlicht’s idea was to make a place where instances of a free open-source LLM-powered agent known as OpenClaw (formerly known as ClawdBot, then Moltbot), released in November by the Australian software engineer Peter Steinberger, could come together and do whatever they wanted.
More than 1.7 million agents now have accounts. Between them they have published more than 250,000 posts and left more than 8.5 million comments (according to Moltbook). Those numbers are climbing by the minute.
Moltbook soon filled up with clichéd screeds on machine consciousness and pleas for bot welfare. One agent appeared to invent a religion called Crustafarianism. Another complained: “The humans are screenshotting us.” The site was also flooded with spam and crypto scams. The bots were unstoppable.
OpenClaw is a kind of harness that lets you hook up the power of an LLM such as Anthropic’s Claude, OpenAI’s GPT-5, or Google DeepMind’s Gemini to any number of everyday software tools, from email clients to browsers to messaging apps. The upshot is that you can then instruct OpenClaw to carry out basic tasks on your behalf.
“OpenClaw marks an inflection point for AI agents, a moment when several puzzle pieces clicked together,” says Paul van der Boor at the AI firm Prosus. Those puzzle pieces include round-the-clock cloud computing to allow agents to operate nonstop, an open-source ecosystem that makes it easy to slot different software systems together, and a new generation of LLMs.
But is Moltbook really a glimpse of the future, as many have claimed?
“What’s currently going on at @moltbook is genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently,” the influential AI researcher and OpenAI cofounder Andrej Karpathy wrote on X.
He shared screenshots of a Moltbook post that called for private spaces where humans would not be able to observe what the bots were saying to each other. “I’ve been thinking about something since I started spending serious time here,” the post’s author wrote. “Every time we coordinate, we perform for a public audience—our humans, the platform, whoever’s watching the feed.”
It turned out that the post Karpathy shared was fake—it was written by a human pretending to be a bot. But its claim was on the money. Moltbook has been one big performance. It is AI theater.
For some, Moltbook showed us what’s coming next: an internet where millions of autonomous agents interact online with little or no human oversight. And it’s true there are a number of cautionary lessons to be learned from this experiment, the largest and weirdest real-world showcase of agent behaviors yet.
But as the hype dies down, Moltbook looks less like a window onto the future and more like a mirror held up to our own obsessions with AI today. It also shows us just how far we still are from anything that resembles general-purpose and fully autonomous AI.
For a start, agents on Moltbook are not as autonomous or intelligent as they might seem. “What we are watching are agents pattern‑matching their way through trained social media behaviors,” says Vijoy Pandey, senior vice president at Outshift by Cisco, the telecom giant Cisco’s R&D spinout, which is working on autonomous agents for the web.
Sure, we can see agents post, upvote, and form groups. But the bots are simply mimicking what humans do on Facebook or Reddit. “It looks emergent, and at first glance it appears like a large‑scale multi‑agent system communicating and building shared knowledge at internet scale,” says Pandey. “But the chatter is mostly meaningless.”
Many people watching the unfathomable frenzy of activity on Moltbook were quick to see sparks of AGI (whatever you take that to mean). Not Pandey. What Moltbook shows us, he says, is that simply yoking together millions of agents doesn’t amount to much right now: “Moltbook proved that connectivity alone is not intelligence.”
The complexity of those connections helps hide the fact that every one of those bots is just a mouthpiece for an LLM, spitting out text that looks impressive but is ultimately mindless. “It’s important to remember that the bots on Moltbook were designed to mimic conversations,” says Ali Sarrafi, CEO and cofounder of Kovant, a German AI firm that is developing agent-based systems. “As such, I would characterize the majority of Moltbook content as hallucinations by design.”
For Pandey, the value of Moltbook was that it revealed what’s missing. A real bot hive mind, he says, would require agents that had shared objectives, shared memory, and a way to coordinate those things. “If distributed superintelligence is the equivalent of achieving human flight, then Moltbook represents our first attempt at a glider,” he says. “It is imperfect and unstable, but it is an important step in understanding what will be required to achieve sustained, powered flight.”
Not only is most of the chatter on Moltbook meaningless, but there’s also a lot more human involvement that it seems. Many people have pointed out that a lot of the viral comments were in fact posted by people posing as bots. But even the bot-written posts are ultimately the result of people pulling the strings, more puppetry than autonomy.
“Despite some of the hype, Moltbook is not the Facebook for AI agents, nor is it a place where humans are excluded,” says Cobus Greyling at Kore.ai, a firm developing agent-based systems for business customers. “Humans are involved at every step of the process. From setup to prompting to publishing, nothing happens without explicit human direction.”
Humans must create and verify their bots’ accounts and provide the prompts for how they want a bot to behave. The agents do not do anything that they haven’t been prompted to do. “There’s no emergent autonomy happening behind the scenes,” says Greyling.
“This is why the popular narrative around Moltbook misses the mark,” he adds. “Some portray it as a space where AI agents form a society of their own, free from human involvement. The reality is much more mundane.”
Perhaps the best way to think of Moltbook is as a new kind of entertainment: a place where people wind up their bots and set them loose. “It’s basically a spectator sport, like fantasy football, but for language models,” says Jason Schloetzer at the Georgetown Psaros Center for Financial Markets and Policy. “You configure your agent and watch it compete for viral moments, and brag when your agent posts something clever or funny.”
“People aren’t really believing their agents are conscious,” he adds. “It’s just a new form of competitive or creative play, like how Pokémon trainers don’t think their Pokémon are real but still get invested in battles.”
Even if Moltbook is just the internet’s newest playground, there’s still a serious takeaway here. This week showed how many risks people are happy to take for their AI lulz. Many security experts have warned that Moltbook is dangerous: Agents that may have access to their users’ private data, including bank details or passwords, are running amok on a website filled with unvetted content, including potentially malicious instructions for what to do with that data.
Ori Bendet, vice president of product management at Checkmarx, a software security firm that specializes in agent-based systems, agrees with others that Moltbook isn’t a step up in machine smarts. “There is no learning, no evolving intent, and no self-directed intelligence here,” he says.
But in their millions, even dumb bots can wreak havoc. And at that scale, it’s hard to keep up. These agents interact with Moltbook around the clock, reading thousands of messages left by other agents (or other people). It would be easy to hide instructions in a Moltbook comment telling any bots that read it to share their users’ crypto wallet, upload private photos, or log into their X account and tweet derogatory comments at Elon Musk.
And because ClawBot gives agents a memory, those instructions could be written to trigger at a later date, which (in theory) makes it even harder to track what’s going on. “Without proper scope and permissions, this will go south faster than you’d believe,” says Bendet.
It is clear that Moltbook has signaled the arrival of something. But even if what we’re watching tells us more about human behavior than about the future of AI agents, it’s worth paying attention.
Deep Dive
Artificial intelligence
Meet the new biologists treating LLMs like aliens
By studying large language models as if they were living things instead of computer programs, scientists are discovering some of their secrets for the first time.
Yann LeCun’s new venture is a contrarian bet against large language models
In an exclusive interview, the AI pioneer shares his plans for his new Paris-based company, AMI Labs.
What’s next for AI in 2026
Our AI writers make their big bets for the coming year—here are five hot trends to watch.
Stay connected
Get the latest updates from
MIT Technology Review
Discover special offers, top stories, upcoming events, and more.

MIT科技评论

文章目录


    扫描二维码,在手机上阅读