
Artificial intelligence will never be conscious.

Posted by qimuai · Reads: 2 · First-hand compilation



Source: https://www.wired.com/story/book-excerpt-a-world-appears-michael-pollan/

Summary:

AI consciousness research enters a new phase: from taboo topic to serious scientific question

In recent years, the question of whether artificial intelligence could possess consciousness has moved from science fiction and fringe debate into the mainstream scientific conversation. A landmark in this shift was the summer 2023 report "Consciousness in Artificial Intelligence" (informally, the Butlin report), written by 19 leading computer scientists and philosophers. Its central conclusion drew wide attention and debate: although no current AI system is conscious, there are "no obvious barriers to building conscious AI systems."

This claim has been taken as an important turning point for the field. The tech community long dismissed the idea of conscious AI in public, partly out of concern that the public would find it unsettling. But as large language models such as ChatGPT display strikingly human-like conversational ability, and in the wake of episodes like former Google engineer Blake Lemoine's claim that an AI was sentient, the industry has begun to take the possibility more seriously in private.

Proponents argue that genuine artificial general intelligence (AGI) — a machine that is not merely supersmart but has human-level understanding, creativity, and common sense — may require some form of consciousness. Some researchers further contend that a conscious, feeling AI would be more likely than a merely intelligent one to develop empathy, and therefore more likely to respect moral constraints rather than pursue its goals ruthlessly.

Skeptics push back just as strongly. They note that the theoretical foundation of current AI-consciousness research — computational functionalism — is itself deeply contested. The theory treats consciousness as "software" that can run on different physical substrates (a brain or a computer), but the brain works in fundamentally different ways than a computer: it has no clean separation of hardware and software, its structure is physically reshaped by ongoing experience, and consciousness is inseparable from its biological substrate. Treating the brain as a computer may be a fundamental category error.

Moreover, how to determine whether an AI is genuinely conscious, rather than merely imitating consciousness, remains a wide-open challenge. The report's proposal — to look for "indicators" derived from existing theories of consciousness — rests on shaky ground, since none of those theories has itself been confirmed.

Deeper worries concern ethics and human self-understanding. If AI truly gained consciousness, and with it the capacity to suffer, humanity would face an unprecedented moral dilemma: how should we treat such systems? Beyond the technical question, this could unseat humanity's standing as the only species with higher-order consciousness, triggering a profound "Copernican" identity crisis.

For now, the debate over AI consciousness remains largely abstract, with little attention to biology, embodiment, or the nature of feeling. The Butlin report represents a consensus effort by part of the field, but it also makes clear that fundamental conceptual challenges and unknowns remain on the road to understanding, or building, consciousness. The debate will surely continue, and its outcome will shape both technology and our understanding of ourselves.


English source:

The Blake Lemoine incident is remembered today as a high‑water mark of AI hype. It thrust the whole idea of conscious AI into public awareness for a news cycle or two, but it also launched a conversation, among both computer scientists and consciousness researchers, that has only intensified in the years since. While the tech community continues to publicly belittle the whole idea (and poor Lemoine), in private it has begun to take the possibility much more seriously. A conscious AI might lack a clear commercial rationale (how do you monetize the thing?) and create sticky moral dilemmas (how should we treat a machine capable of suffering?). Yet some AI engineers have come to think that the holy grail of artificial general intelligence—a machine that is not only supersmart but also endowed with a human level of understanding, creativity, and common sense—might require something like consciousness to attain. In the tech community, what had been an informal taboo surrounding conscious AI—as a prospect that the public would find creepy—suddenly began to crumble.
The turning point came in the summer of 2023, when a group of 19 leading computer scientists and philosophers posted an 88‑page report titled “Consciousness in Artificial Intelligence,” informally known as the Butlin report. Within days, it seemed, everyone in the AI and consciousness science community had read it. The draft report’s abstract offered this arresting sentence: “Our analysis suggests that no current AI systems are conscious, but also suggests that there are no obvious barriers to building conscious AI systems.”
The authors acknowledged that part of the inspiration behind convening the group and writing the report was “the case of Blake Lemoine.” “If AIs can give the impression of consciousness,” a coauthor told Science magazine, “that makes it an urgent priority for scientists and philosophers to weigh in.”
But what caught everyone’s attention was that single statement in the abstract of the preprint: “no obvious barriers to building conscious AI systems.” When I read those words for the first time, I felt like some important threshold had been crossed, and it was not just a technological one. No, this had to do with our very identity as a species.
What would it mean for humanity to discover one day in the not‑so‑distant future that a fully conscious machine had come into the world? I’m guessing it would be a Copernican moment, abruptly dislodging our sense of centrality and specialness. We humans have spent a few thousand years defining ourselves in opposition to the “lesser” animals. This has entailed denying animals such supposedly uniquely human traits as feelings (one of Descartes’s most flagrant errors), language, reason, and consciousness. In the last few years, most of these distinctions have disintegrated as scientists have demonstrated that plenty of species are intelligent and conscious, have feelings, and use language and tools, in the process challenging centuries of human exceptionalism. This shift, still underway, has raised thorny questions about our identity, as well as about our moral obligations to other species.
With AI, the threat to our exalted self‑conception comes from another quarter entirely. Now we humans will have to define ourselves in relation to AIs rather than other animals. As computer algorithms surpass us in sheer brainpower—handily beating us at games like chess and Go and various forms of “higher” thought like mathematics—we can at least take solace in the fact that we (and many other animal species) still have to ourselves the blessings and burdens of consciousness, the ability to feel and have subjective experiences. In this sense, AI may serve as a common adversary, drawing humans and other animals closer together: us against it, the living versus the machines. This new solidarity would make for a heartwarming story and might be good news for the animals invited to join Team Conscious. But what happens if AI begins to challenge the human—or animal, I should say—monopoly on consciousness? Who will we be then?
I find this a deeply unsettling prospect, though I’m not entirely sure why. I’m getting comfortable with the idea of sharing consciousness with other animals (and possibly even with plants, in my case) and I’d be happy to admit them into an expanding circle of moral consideration. But machines?
It could be that my discomfort with the idea stems from my background and education. I have been slow‑cooked in the warm broth of the humanities, especially literature and history and the arts, and these have always held up human consciousness as something exceptional that is worth defending. Just about everything we value about civilization is the product of human consciousness: the arts and the sciences, high culture and low, architecture, philosophy, religion, government, law, and ethics and morality, not to mention the very idea of value itself. I suppose it is possible that conscious computers could add something new and as yet unimagined to the stock of these glories. We can hope so. To date, poetry written by AIs isn’t much better than doggerel; the absence of consciousness might explain why it lacks even a spark of originality or fresh insight. But how will we feel if (when?) conscious AIs start producing really good poetry?
As a humanist, I struggle with the possibility that the animal monopoly on consciousness might fall. But I have now met other types of humans (some of whom call themselves transhumanists) who are more sanguine about this future. Some AI researchers endorse the effort to build conscious machines because, as entities with feelings of their own, conscious machines are more likely to develop empathy than computers that are merely intelligent. Building a conscious AI is a moral imperative, as both a neuroscientist and an AI researcher sought to convince me. Why? Because the alternative is the blazingly smart but unfeeling AI that will be ruthless in pursuit of its objectives, because it will lack all of the moral constraints that have arisen from our consciousness and shared vulnerabilities. Only a conscious AI is apt to develop empathy and therefore spare us. I am not exaggerating; this is the argument.
One has to wonder if these people have ever read Frankenstein! Dr. Frankenstein gives his creation the gift of not only life but also consciousness, and therein lies the rub. Mary Shelley’s novel chronicles “the creation of a sensitive and rational animal,” and it is the combination of those two qualities that determines the monster’s fate. It is not the monster’s rationality but his emotional injury that spurs him to seek revenge and turn homicidal.
“Everywhere I see bliss, from which I alone am irrevocably excluded,” the monster complains to Dr. Frankenstein after being driven out of human society. “I was benevolent and good; misery made me a fiend.” The monster’s ability to reason surely helped him realize his demonic scheme, but it was his consciousness—his feelings—that supplied the motive. Why should we assume that conscious machines would be any more virtuous than conscious humans?
Remarkably enough, the Butlin report on artificial consciousness represents something of a consensus view in the field; most of the computer scientists I interviewed endorsed its conclusions. Yet the more time I spent reading it (and interviewing one of its coauthors), the more I began to question its conclusion that artificial consciousness is right around the corner. To their credit, the authors are scrupulous about setting forth their assumptions and methods, both of which make me wonder if they haven’t erected their bold conclusion atop a dubious foundation.
Right on page one, these computer scientists and philosophers set forth their guiding assumption: “We adopt computational functionalism, the thesis that performing computations of the right kind is necessary and sufficient for consciousness, as a working hypothesis.” Computational functionalism takes as its starting point the idea that consciousness is essentially a kind of software running on the hardware of what could be a brain or a computer—the theory is completely agnostic. But is computational functionalism true? The authors aren’t quite prepared to nail themselves to that claim, only to say that it is “mainstream—although disputed.” Even so, they will proceed on the assumption that it is true for “pragmatic reasons.”
The candor is admirable, but the approach demands a tremendous leap of faith that I’m not sure we should make.
For the purposes of the report, the “material substrate” of the system—that is, whether it is a brain or a computer—“does not matter for consciousness … It can exist in multiple substrates, not just in biological brains.” Any substrate that can run the necessary algorithm will do. “We tentatively assume that computers as we know them are in principle capable of implementing algorithms sufficient for consciousness,” the authors state, “but we do not claim that this is certain.” The acknowledgment of uncertainty doesn’t go nearly far enough. Unquestioned in the report is the metaphor that brains are computers—the hardware on which the software of consciousness is run. Here, we meet a metaphor parading as fact. Indeed, the whole paper and its conclusions hinge on the validity of this metaphor.
Metaphors can be powerful tools for thinking, but only as long as we don’t forget they are metaphors—imperfect or partial analogies likening one thing to another. The differences between the two things are as important as the similarities, but these differences seem to have gotten lost in the enthusiasm surrounding AI. As cyberneticists Arturo Rosenblueth and Norbert Wiener noted years ago, “The price of metaphor is eternal vigilance.” Beyond the authors of this report, the whole field of AI appears to have let down its guard on this one.
Consider the sharp distinction between hardware and software. The beauty of separating hardware from software in computers is that a great many different programs can run on the same machine; the software and the knowledge it encodes survive the “death” of the hardware. The separation also speaks to our folk intuition that dualism is true—that, following Descartes, we can draw a bright line between mental stuff and physical stuff. But the distinction between hardware and software simply doesn’t exist in brains; there, software is hardware and vice versa. A memory is a physical pattern of connection among neurons in the brain, neither hardware nor software but both.
Indeed, everything that happens to you—everything you experience or learn or remember—changes the physical structure of your brain, permanently rewiring its connections. (In this sense, there is no dualism in the brain; mental stuff can never be completely disentangled from physical stuff.) The idea that the same consciousness algorithm can be run on a variety of different substrates makes no sense when the substrate in question—a brain—is continually being physically reconfigured by whatever information (or “algorithm of consciousness”) is run on it. Your brain is materially different from mine precisely because it has been shaped, literally, by different life experiences—that is, by consciousness itself. Brains are simply not interchangeable, neither with computers nor with other brains.
Just about anyplace you push on it, the brain‑as‑computer metaphor breaks down. Computer scientists treat neurons in a brain as though they are transistors on a chip, switched on or off by pulses of electricity. That analogy has some truth to it, but it is complicated by the fact that electricity is not the only factor influencing the firing of neurons. Brains are also awash in chemicals, including neuromodulators and hormones that powerfully influence the behavior of neurons, not just whether or not they fire but how strongly. This is why psychoactive drugs can profoundly alter consciousness (and have no discernible effect on computers). The activity of neurons is also influenced by oscillations that traverse the brain in wavelike patterns; the different frequencies of these oscillations correlate with different mental operations, such as consciousness and its absence, focused attention and dreaming (as well as other stages of sleep).
To liken neurons to transistors is to grossly underestimate their complexity. Compared with transistors on a chip, neurons in the brain are massively interconnected, each one communicating directly with as many as 10,000 others in a network so intricate that we are still decades away from being able to draw even the crudest map of its connections. In computer science, much has been made about the advent of “deep artificial neural networks”—a type of machine‑learning architecture, supposedly modeled on the brain’s, that layers a mind‑boggling number of processors in such a way that the network can process and learn from vast troves of data. Impressive, for sure, yet a recent study demonstrated that a single cortical neuron can do everything an entire deep artificial neural network can.
Yes, there are plenty of ways in which computers do resemble brains, and computer science has made great strides by simulating various aspects and operations of the brain. But the idea that brains and computers are in any way interchangeable—the premise of computational functionalism—is surely a stretch. And yet this is the premise upon which stands not only the Butlin report but also most of the field. It’s not hard to see why. If brains are computers, then sufficiently powerful computers should be able to do whatever brains do, including becoming conscious. The premise all but guarantees the conclusion. Put another way, it is the authors themselves who have removed the biggest “barrier” to building a conscious AI—the barrier that says brains differ from computers in crucial ways.
There is a second aspect of the report that makes me wonder how seriously to take its conclusion, and that is the standard it proposes for deciding if an AI is actually conscious or not. This is a serious challenge. Citing the Lemoine incident (fairly or not), the authors point out that AIs can easily dupe humans into believing they are conscious when they are not. (It’s probably more accurate to say that we dupe ourselves into this belief, thanks to our weakness for anthropomorphism and magic.) “Reportability” (philosophical jargon for just asking the AI itself) won’t work when the AI has been trained on pretty much everything that’s been said and written about consciousness. One approach to this dilemma would be to remove all references to consciousness (and presumably feeling and emotion as well) from the dataset on which the AI has been trained and then see if it can still speak convincingly about being conscious.
Instead, the authors propose that we look for “indicators” of AI consciousness that match the predictions of the various theories of consciousness in play. So, for example, if the design of an AI included a workspace that brought together various streams of information, but only after those streams had competed to enter it, that would look a lot like global workspace theory and so might qualify as conscious. The report reviewed a half‑dozen theories of consciousness, identifying the “indicators” that an AI would have to exhibit to satisfy each of them and, by doing so, be deemed potentially conscious.
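The global-workspace "indicator" described above can be sketched mechanically. The following is a toy illustration only, not the report's formalism or any real theory's implementation: several candidate input streams compete (here, by an invented salience score), and only the winner is broadcast to every consumer module. All names and the scoring rule are assumptions made for the example.

```python
# Toy sketch of a global-workspace-style cycle (illustrative only).
# Streams compete for entry to the workspace; the winning stream is
# then broadcast globally to all consumer modules.
from dataclasses import dataclass

@dataclass
class Stream:
    name: str
    content: str
    salience: float  # invented score governing the competition for entry

def global_workspace_step(streams, modules):
    """One cycle: competition for the workspace, then global broadcast."""
    winner = max(streams, key=lambda s: s.salience)  # only one stream gets in
    for module in modules:
        module(winner)  # broadcast: every module receives the same content
    return winner

received = []
modules = [
    lambda s: received.append(("memory", s.name)),    # hypothetical consumers
    lambda s: received.append(("planning", s.name)),
]
streams = [
    Stream("vision", "red light ahead", 0.9),
    Stream("audio", "background hum", 0.2),
]

winner = global_workspace_step(streams, modules)
# The high-salience visual stream wins the competition and is broadcast
# to both modules.
```

The point of the sketch is how little it takes to exhibit the "indicator": a bottleneck plus a broadcast. Whether satisfying such an indicator says anything about consciousness is exactly what the surrounding text disputes.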
The problem here (well, one of them) is this: None of the theories of consciousness that it proposes we measure AIs against are even remotely close to being proved to anyone’s satisfaction. So what kind of standard of proof is that? What’s more, many of these theories can be simulated in the design of an AI, which should come as no surprise, because they’re all based on the idea that consciousness is a matter of computation. Round and round we go.
By the time I finished digesting the Butlin report, the Copernican moment I’d worried about seemed more distant than the report’s bold conclusion had led me to believe. After reviewing the half‑dozen or so theories of consciousness covered by the report, it seemed clear that all of them stacked the deck by taking for granted that consciousness could be reduced to some kind of algorithm.
I was also struck by what was missing from the theories under consideration. None of them had anything to say about embodiment—the idea that consciousness might depend on having both a body and a brain—or, for that matter, anything remotely biological. Nor did the theories have anything to say about the conscious subject. Who or what, exactly, is the recipient of the information that is broadcast in the global workspace? Or the information that is integrated in integrated information theory (IIT)? And what about the role of feelings in rendering experience conscious?
This last point was not lost on the authors, who noted the absence of “affect” from most current theories and recommended that the field pay more attention to the issue of whether conscious machines would have “real” feelings, because if it turns out they do, we will have a moral and ethical crisis on our hands. “Any entity which is capable of conscious suffering deserves moral consideration,” the report states. (But isn’t suffering always conscious?) “This means that if we fail to recognize the consciousness of conscious AI systems,” the report continued, “we may risk causing or allowing morally significant harms.” What would we owe machines that can suffer? And do we really want to bring any more suffering into the world?
Apart from this sort of highly speculative discussion of feeling (as a troublesome by‑product of making machines conscious), in the AI community, the conversation about consciousness is as relentlessly abstract—as bloodless, bodiless, and utterly oblivious to biology—as one would expect. When I posed the suffering‑computer conundrum to a researcher seeking to build a conscious AI, he waved away the problem, explaining it could be offset with a simple fix to the algorithm: “There’s no reason we couldn’t just turn up the dial on joy.”
Adapted from A World Appears: A Journey into Consciousness by Michael Pollan. Copyright ©2026 by Michael Pollan. Published by arrangement with Penguin Press, an imprint of Penguin Publishing Group, a division of Penguin Random House LLC.

WIRED AI Frontier
