量子力学的奥秘是否正开始消散?

内容总结:
量子力学百年谜题迎来新解?退相干与量子达尔文主义或统一经典与量子世界
自量子理论诞生一个世纪以来,关于其本质的争论从未停息。主流诠释如“哥本哈根诠释”、“多世界诠释”等,或要求我们接受宏观世界与微观世界的根本割裂,或假设存在无数平行宇宙,始终难以令人完全信服。然而,近期一项由物理学家沃伊切赫·祖瑞克系统阐述的新理论框架,正试图仅基于量子力学本身的标准数学工具,弥合这一百年鸿沟。
在其2025年3月出版的新书《退相干与量子达尔文主义》中,祖瑞克整合了其数十年的研究成果,核心在于阐释支配原子行为的量子规则如何过渡到我们日常所见的经典物理规则。这一过程的关键机制是“退相干”。
退相干:量子“脆弱性”的根源
祖瑞克与同事早先指出,量子系统无法孤立存在,总会与环境(如空气分子、光子)发生相互作用并形成“量子纠缠”。一旦纠缠发生,系统的量子特性(如叠加态)便会迅速“稀释”到浩瀚的环境中去,就像一滴墨水滴入海洋无法复原。这一退相干过程速度极快,对于漂浮的尘埃而言,所需时间短得难以想象。这解释了为何我们通常观测不到宏观物体的量子行为——它们几乎瞬间就被环境“摧毁”了。
量子达尔文主义:筛选出“经典现实”
但测量不仅仅是退相干。祖瑞克进一步提出,环境在退相干过程中,会像复印机一样,反复“印刻”出关于量子系统的某些特定信息。只有那些能抵抗退相干干扰、能被环境稳定复制的特殊量子态——他称之为“指针态”——才能将信息有效地传播出去。这类似于达尔文的自然选择,最“适应”环境复制的信息得以存留和放大,故称“量子达尔文主义”。
例如,阳光中的光子会在一微秒之内,将一粒灰尘的位置信息复制约一千万次。不同观察者通过获取环境中的这些复制信息,会得到一致的结论,从而共同确认一个唯一的经典事实。祖瑞克将此称为形成了“相对客观存在”的要素。
理论突破:调和两大对立诠释
这一框架的深远意义在于,它试图统一长期对立的量子力学诠释。祖瑞克认为,量子态本质上是“认知-本体”混合的。在退相干发生前,所有可能性以抽象形式共存(类似多世界诠释的描述);而退相干与量子达尔文主义则从中“选择”出唯一的结果,将其转化为我们都能观测到的经典现实(符合哥本哈根诠释的观测结果),而无需诉诸波函数“坍缩”或创造无数真实分支宇宙。
未解之谜与学界反响
尽管该理论令人振奋,但一些根本问题依然存在。例如,为何在特定测量中是这个结果而非那个结果被选中?这是否意味着最终仍需接受某种内在的随机性?此外,如何设计更严格的实验来全面验证量子达尔文主义的预测,仍是未来工作的重点。
一些物理学家对此持谨慎乐观态度。他们认为祖瑞克的工作为从量子理论基本假设中推导出经典性提供了一条优雅的路径,但并未完全解决“量子底层实质究竟是什么”等更深层的本体论问题。也有学者指出,在某些特殊构思的实验场景中,观察者仍可能无法达成一致,这表明寻求一个完美无缺的量子诠释之路尚未走到终点。
无论如何,祖瑞克的研究代表了一种重要的范式转变:与其为量子测量问题增添玄妙的假设,不如严谨地追溯量子信息如何通过与环境相互作用,一步步造就我们坚固的日常世界。这或许标志着,在量子理论诞生百年之后,我们终于开始着手完成先驱们未竟的事业。
中文翻译:
量子力学的谜团是否开始消解?
引言
当前所有主流的量子理论诠释都难以令人信服。例如,它们要求我们相信:我们所体验的世界与其构成的亚原子领域存在根本性割裂;或是存在无限增殖的平行宇宙;抑或存在某种神秘过程导致量子特性自发坍缩。这种令人不满的现状曾是我2018年探讨量子力学意义的著作《超越奇异》的核心议题。难怪在该理论诞生一个世纪后,专家们对其揭示的现实本质仍分歧如初。
但读完物理学家沃伊切赫·茹雷克于2025年3月出版的《退相干与量子达尔文主义》一书后,一种可能摒弃所有奇谈怪论的解答令我振奋。这位来自新墨西哥州洛斯阿拉莫斯国家实验室的学者,数十年来致力于解决一个核心问题:支配原子和亚原子粒子行为的量子规则,如何转变为适用于日常尺度的经典物理学规则(如牛顿运动定律等)。
茹雷克关于这种转变如何发生的核心思想——即“退相干”理论——已得到广泛认可。但本书首次将他长期发展的所有要素整合成宏大体系。他认为,量子理论的古老谜团正开始消融。在我看来,茹雷克几乎弥合了困扰物理学百年的理论缺口,且未引入任何实质性的新假设或猜想。他宣称由此实现了先前不可调和的理论统一。让我们审视其理论能引领我们走多远,以及谜题尚存于何处。
若你对量子力学有所了解,可能会认为最奇特之处在于量子特性本身:微观世界具有颗粒性,粒子只能通过交换固定大小的能量包发生突兀的量子跃迁来改变能量。但这本身并非真正的难解之谜。或许你会认为维尔纳·海森堡著名的“不确定性原理”最为诡异——该原理规定某些成对属性(如粒子的位置与动量)永远无法同时精确测量,其精度存在根本极限。精确测定粒子的位置,其运动方向便不可知。但这种不确定性仅是更深层问题的表象。
归根结底,关于量子力学的争论关乎更根本的命题:现实是什么。核心问题在于,该理论仅告诉我们测量原子或电子等量子系统时可预期的观测结果。这听起来与其他科学理论似乎无异,实则不然。量子力学提供的实质上是测量结果的概率分布。仅凭此,我们无法推断测量前世界的真实样貌。它不描述世界“是”怎样的,只预言我们观测时会“看到”什么。正如马里兰大学的物理学家兼哲学家杰弗里·巴布向我阐释的:量子不确定性“不仅代表我们对既有事实的无知,更意味着对某种尚未具有真值之物的新型无知——在我们测量之前,它根本不存在非此即彼的确定状态”。
在埃尔温·薛定谔1926年提出的量子力学表述中,量子系统的状态由称为“波函数”的数学实体描述。波函数作为抽象构造,可用于预测测量该量子系统时各种可能结果的概率。在测量其某个属性(如电子位置)前,所有可能位置都以“叠加态”形式存在于波函数中,意味着每个位置都以特定概率潜在地可观测。任何单次观测都只能看到其中一种结果,而连续重复的相同实验可能得到不同结果。测量行为似乎驱散了这种模糊的量子特性,代之以确定的、更符合我们经典现实体验的状态。
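以下是一个示意性的草图(使用标准狄拉克记号,并非原文内容):一个二态量子属性(例如电子自旋)的波函数可写成叠加态,其振幅的模平方给出各测量结果出现的概率:

\[
|\psi\rangle \;=\; \alpha\,|{\uparrow}\rangle \;+\; \beta\,|{\downarrow}\rangle,
\qquad
P(\uparrow)=|\alpha|^{2},\quad P(\downarrow)=|\beta|^{2},\quad |\alpha|^{2}+|\beta|^{2}=1 .
\]

单次实验只会得到“朝上”或“朝下”之一;只有大量重复相同实验的统计结果才能反映出振幅本身。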
因此波函数无法告知测量前量子系统的样貌。相比之下,宏观尺度的经典牛顿物理学中,事物即使未被观测也拥有明确属性和位置。经典与量子世界似乎被海森堡在1920年代末所称的“截断”所割裂。对他和哥本哈根的尼尔斯·玻尔而言,现实必须用经典物理学描述,而量子力学则是我们作为经典实体描述微观世界观测结果所需的理论——仅此而已。
但为何大尺度事物与小尺度事物需要经典与量子两套截然不同的物理体系?两者在何处、以何种方式交接?对玻尔及其同事而言,原子尺度与人类尺度的差异如此悬殊,这个问题似乎无关紧要。他们声称,我们可自主选择截断点的位置,取决于将哪些要素纳入量子方程。但如今我们已能探测多个尺度层级的世界,包括难以判定适用量子或经典规则的中间介观尺度(如纳米级别)。事实上,只要实验控制足够精密,我们仍能在普通光学显微镜可见的物体中检测到量子行为。因此,如何解释量子到经典的转变——这种在我们放大尺度或实施测量时发生的“成为现实”过程——已成为无法回避的问题。
量子力学本身似乎未能解释测量过程中波函数所表征的所有量子概率如何“坍缩”为单一观测值。对玻尔及其哥本哈根学派同仁而言,坍缩仅是象征性的:它反映了我们体验的经典世界。另一些学者试图将坍缩解释为真实、自发、随机触发的物理事件,从众多可能性中择取唯一结果——尽管引发这种物理坍缩的因素尚不明确。还有学者援引路易·德布罗意提出、后经戴维·玻姆发展的诠释,认为粒子确实具有明确定义的属性,但受神秘“导引波”操控,从而产生量子物体(如干涉现象)的奇异波动行为。此外,休·埃弗雷特1957年提出的“多世界诠释”也广为人知,该理论假设坍缩并不存在,所有测量结果都在平行宇宙中实现,现实不断分岔为多个互不可及的版本。
这些诠释总令我感到玄虚。何不专注于探索传统量子力学本身的解释边界?若能仅用该理论的形式数学框架解释经典世界如何从量子力学中涌现,我们既可摒弃玻尔“哥本哈根诠释”中不尽人意的人为截断,也能避开其他理论晦涩难懂的附加设定。
这正是茹雷克研究的切入点。自1970年代起,他与物理学家H·迪特尔·策希深入探究量子理论本身对测量的阐释。(研究者数十年来一直被劝阻探究这些基础性的未解问题,理由是那不过是毫无意义的哲学思辨;若非如此,相关进展或许早已出现。)
茹雷克理论的核心是“量子纠缠”现象——量子尺度下另一种反直觉的特性。薛定谔于1935年命名此现象,并指出这实为量子力学的关键特征。该命名源于爱因斯坦及其同事的发现:两个量子粒子通过物理力接触后,会形成诡异的互联;测量其中一者,似乎能瞬时影响另一者的属性,即使它们已相隔遥远。“似乎”是此处的关键:量子力学指出,相互作用导致的纠缠使粒子不再独立存在,两者由定义其可能状态的单一波函数描述。例如,联合波函数可能规定:无论其中一个粒子磁矩朝向何方,另一个必指向相反方向。
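这类联合波函数的一个教科书例子(同样只是示意性的草图,并非原文内容)是双粒子“单态”:单独看任一粒子都没有确定取向,但沿同一方向测量时两者必然相反:

\[
|\Psi\rangle \;=\; \frac{1}{\sqrt{2}}\Bigl(\,|{\uparrow}\rangle_{1}|{\downarrow}\rangle_{2} \;-\; |{\downarrow}\rangle_{1}|{\uparrow}\rangle_{2}\,\Bigr).
\]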
粒子相互作用时,纠缠不可避免。这对测量过程意义重大:被观测的量子物体会与测量仪器的原子发生纠缠。此处的“测量”未必指用精密科学仪器探测对象,任何量子物体与环境的相互作用皆适用。苹果分子的行为由量子力学描述,从表面分子反射的光子会与它们形成纠缠。这些光子将分子信息(例如构成苹果表皮的分子量子能态所决定的红色)传递至你的眼睛。
换言之,茹雷克和策希认识到:纠缠无处不在,且是量子与经典世界的信息通道。量子物体与环境相互作用时便与之纠缠。策希与茹雷克仅运用常规量子数学证明,这种纠缠会“稀释”物体的量子特性,因为量子效应成为物体与纠缠环境共享的属性,从而在物体本身迅速变得不可观测。他们称此过程为“退相干”。例如,量子物体的叠加态会扩散至所有环境纠缠中,要推断叠加态就需要检测所有(快速增殖的)纠缠实体——这无异于试图重建一滴已在海洋中扩散的墨水。
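这种“稀释”可以用开放量子系统的标准记号示意如下(仅为草图,假设系统只有两个状态、环境记录为 |E_0⟩ 和 |E_1⟩,这些符号均非书中原文):对联合态求环境的偏迹后,系统约化密度矩阵的非对角“相干”项被环境记录之间的重叠所压制,而随着卷入的环境越来越多,该重叠迅速趋于零:

\[
|\Psi\rangle=\alpha\,|0\rangle|E_{0}\rangle+\beta\,|1\rangle|E_{1}\rangle
\;\;\Longrightarrow\;\;
\rho_{S}=\mathrm{Tr}_{E}\,|\Psi\rangle\langle\Psi|=
\begin{pmatrix}
|\alpha|^{2} & \alpha\beta^{*}\langle E_{1}|E_{0}\rangle\\
\alpha^{*}\beta\,\langle E_{0}|E_{1}\rangle & |\beta|^{2}
\end{pmatrix},
\qquad
\langle E_{0}|E_{1}\rangle \to 0 .
\]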
退相干发生得极快。空气中漂浮的尘粒与光子及周围气体分子的碰撞会在约10^-31秒内引发退相干——这仅是光穿越一个质子所需时间的百万分之一。实际上,退相干几乎在量子现象接触环境的瞬间就摧毁了这些精微效应。
但测量不仅关乎退相干。正是与环境的纠缠将物体信息烙印在环境(例如测量设备)中。近二十年来,茹雷克一直在研究其发生机制。研究发现,某些量子态具有特殊的数学性质,使其能在环境中产生多重烙印,同时不被退相干效应模糊湮灭。这些状态因而对应着能“幸存”至可观测的退相干经典世界的属性。
这种可能性源于产生每个烙印的相互作用会使量子系统回归作用前的状态,而非将其击入不同状态或与其他状态混合。例如,光子可从原子反射并携带其位置信息,却不改变系统的量子态。
茹雷克称这些稳定状态为“指针态”,因其能导致测量设备指针指向特定结果。指针态对应经典可观测属性(如位置或电荷)。而量子叠加态不具备这种特性:它们无法稳定产生复本,因此我们无法直接观测。换言之,它们非指针态。
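在退相干文献中,这种稳健性常被示意性地表述为如下条件(此处是对标准表述的转述,并非书中原文):指针可观测量与系统-环境相互作用对易,因此复制过程不扰动指针态,环境只是获得关于它的一份记录;而指针态的叠加则会与环境纠缠并退相干:

\[
[\hat{O}_{\mathrm{pointer}},\,\hat{H}_{\mathrm{int}}]\approx 0
\qquad\Longrightarrow\qquad
|s_{i}\rangle\,|E\rangle \;\longrightarrow\; |s_{i}\rangle\,|E_{i}\rangle .
\]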
茹雷克证明指针态能在环境中被高效稳定地反复烙印。他告诉我,这类状态是“最适应环境的”,“它们能在复制过程中存续,相关信息得以增殖”。类比达尔文进化论,它们因善于通过这种方式被放大(可称为复制)而被“选择”转化为经典世界。这正是茹雷克书名中“量子达尔文主义”的涵义。
这种烙印增殖极快。2010年,茹雷克与合作者杰斯·里德尔计算出:一粒尘埃的位置信息将在微秒内被太阳光子烙印约一千万次。
茹雷克的量子达尔文主义理论(再次强调,仅运用量子系统与环境相互作用的标准量子力学方程)已做出多项预测,目前正接受实验检验。例如,它预言关于量子系统的大部分信息仅通过环境中极少量的烙印即可获取,信息含量会迅速“饱和”。初步实验已证实此点,但仍有待深入研究。
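这一预言的定量形式通常写成(此处用标准信息论记号示意,并非书中原文)系统 S 与环境片段 F 之间的互信息:很小的片段就已携带关于指针可观测量的几乎全部经典信息 H(S),因此 I(S:F) 随片段大小先陡升后进入平台,而能各自达到该平台的互不相交片段的数目,就衡量了这份记录的冗余度:

\[
I(S{:}F) \;=\; H(S)+H(F)-H(S,F)\;\approx\; H(S)
\qquad (\text{当 } F \text{ 远小于整个环境时}).
\]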
如我们所见,每个烙印都对应一次经典观测:可视为现实世界的构成元素。例如某次烙印显示电子磁矩朝上。但由于原始量子态包含不同结果的概率,是否可能某些烙印对应“朝上”而另一些对应“朝下”,导致不同观测者看到不同的现实?这虽非严格意义上的叠加态,却是叠加态的一个明显后果:经典现实出现了多个版本。
这引向退相干理论的另一启示,也是让我确信茹雷克理论已构成完整图景的关键。该理论预言所有烙印必须完全一致。因此,量子达尔文主义主张唯一经典世界能够且必须从量子概率中涌现。这种共识的强加取代了相当神秘且特设的坍缩过程,代之以更严谨的机制。如茹雷克所言,被观测物体在其宏观环境中被大量完全相同的可观测烙印所环绕,形成了“相对客观存在”的要素。它成为我们具体经典现实的一部分,他称之为“外展子”。
这正是该理论有望消解诠释争议之处。茹雷克表示,它实现了看似不可能的任务:调和哥本哈根诠释与多世界诠释。在前者中,波函数是认知性的,描述我们对量子世界的可知信息;在后者中,波函数是本体性的,即终极现实——同时描述现实所有分支——尽管我们只能体验量子多重宇宙的单一分支。茹雷克指出波函数实则兼具双重属性。当我询问其著作主旨时,他解释道:“量子态的两种对立观点[认知性与本体性]以及坚持非此即彼的立场都是错误的。”相反,量子态是“认知本体性的”。即在退相干发生前,所有量子可能性在某种意义上共存。但退相干与量子达尔文主义仅选择其中之一作为可观测现实的要素,无需为其他可能性在其他世界赋予经典现实。其余状态存在于抽象的可能性空间,始终停留其中,永无机会通过纠缠成长为可观测现实。
我无意宣称茹雷克的图景已最终澄清量子力学。例如:为何特定测量中选择的是此结果而非彼结果?我们是否必须(如玻尔与海森堡所坚持)接受这是无任何原因的随机事件?量子世界在何时不可逆地确定特定测量结果,使我们无法再从物体与环境的纠缠网络中“重构”叠加态?最重要的是:如何更严格地检验该理论?
与我讨论过茹雷克理论的专家们持审慎乐观态度。例如澳大利亚昆士兰大学的萨莉·施拉普内尔指出,茹雷克的方案“为从量子理论基本公设解释经典性涌现提供了优雅路径”,但尚未解决“根本性的‘量子基底’实质为何这一棘手问题”。例如,我们该如何理解退相干前所有可能性共存的领域?其“真实”程度如何?
苏黎世联邦理工学院的雷纳托·伦纳认为,解决哥本哈根与多世界诠释的冲突未必能化解所有难题。他指出,可能构造出怪异但实验可行的场景,使不同观测者对结果无法达成共识。即使这类特例看似牵强,他认为这显示我们尚未找到真正完善的量子诠释。
尽管如此,茹雷克研究路径的哲学取向在我看来是正确的。与其编造复杂故事解决量子力学测量难题,何不耐心细致地推演标准量子力学对量子物体信息如何进入可观测世界的阐释?一个世纪前开启革命的量子先驱们留下了大量未竟工作,过早地封闭了探索之路(通常是通过坚持哥本哈根诠释,或不加质疑地全盘接受它)。如今我们至少有望完成这项任务。
英文来源:
Are the Mysteries of Quantum Mechanics Beginning To Dissolve?
Introduction
None of the leading interpretations of quantum theory are very convincing. They ask us to believe, for example, that the world we experience is fundamentally divided from the subatomic realm it’s built from. Or that there is a wild proliferation of parallel universes, or that a mysterious process causes quantumness to spontaneously collapse. This unsatisfying state was a key element of Beyond Weird, my 2018 book on the meaning of quantum mechanics. It’s no wonder experts are as divided as ever about what quantum theory says about reality, a century after the theory was developed.
But after reading Decoherence and Quantum Darwinism, a book published in March 2025 by the physicist Wojciech Zurek, I’m excited by the possibility of an answer that does away with all those fanciful notions. Zurek, of Los Alamos National Laboratory in New Mexico, has been working for decades to resolve the question of how the quantum rules that govern the behavior of atoms and subatomic particles switch to those of classical physics — Newton’s laws of motion and so on — that operate at the scales of everyday life.
Zurek’s key idea about how this transition occurs, called decoherence, is fairly well established. But his book brings together for the first time all the elements he has been developing into a grand synthesis. He argues that the old mysteries of quantum theory are starting to dissolve. To my eye, Zurek has almost tied up the loose ends that have been confounding physics for 100 years, without invoking any substantially new or speculative assumptions. In doing so, he claims to unite the previously irreconcilable. Let’s see how far his approach takes us, and where the remaining mystery lies.
If you know something about quantum mechanics, you can be forgiven for thinking that the big, strange deal is the quantum part: the idea that the world at the finest scales is grainy, that particles can only change their energy in abrupt quantum jumps by exchanging little packets of energy with fixed sizes. But that’s not really such a head-scratcher in itself. Or you might imagine the weirdest thing is Werner Heisenberg’s famous uncertainty principle, which stipulates that there are some pairs of properties — such as the position and the momentum of a particle — that we can never know at the same time with accuracy beyond a certain limit. Measure precisely where a particle is, and where it’s going becomes unknowable. But this uncertainty is just a symptom of a deeper problem.
Ultimately, the arguments over quantum mechanics have much bigger stakes: what reality is. The basic problem is that the theory tells us what we can expect to observe if we make measurements of a quantum system such as an atom or an electron. That might not sound so different from any other scientific theory, but it is. For what quantum mechanics actually supplies is the probabilities of measurement outcomes. That alone doesn’t permit us to deduce anything about what the world was like before we made the measurement. It doesn’t tell us how the world is, only what we’ll see if we look. Quantum uncertainty, the physicist and philosopher Jeffrey Bub of the University of Maryland told me, “doesn’t simply represent ignorance about what is the case, [but] a new sort of ignorance about something that doesn’t yet have a truth value, something that simply isn’t one way or the other before we measure.”
In the formulation of quantum mechanics presented by Erwin Schrödinger in 1926, the state of a quantum system is represented by a mathematical entity called the wave function. The wave function is an abstract construct that allows us to predict the probabilities of the various possible outcomes of a measurement of that quantum system. Before we measure one of its properties — an electron’s location, say — all its possible locations are represented in the wave function as a “superposition,” meaning each is potentially observable with some probability. Any given observation or measurement will only ever see one of those outcomes, and successive, identical experiments may see different ones. The act of measurement seemingly makes this hazy quantumness go away, replaced by something definite and more in line with our experience of classical reality.
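As a concrete illustration (a minimal sketch in standard Dirac notation, not drawn from the article itself): the wave function for a single two-state property, such as an electron's spin, can be written as a superposition whose squared amplitudes give the probability of each measurement outcome:

\[
|\psi\rangle \;=\; \alpha\,|{\uparrow}\rangle \;+\; \beta\,|{\downarrow}\rangle,
\qquad
P(\uparrow)=|\alpha|^{2},\quad P(\downarrow)=|\beta|^{2},\quad |\alpha|^{2}+|\beta|^{2}=1 .
\]

Any single run of the experiment returns either "up" or "down"; only the statistics of many identical runs reveal the amplitudes.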
Thus the wave function can’t tell us what the quantum system is like before we measure it. By contrast, in macroscale, classical, Newtonian physics, things have well-defined properties and positions, even when no one is looking. The classical and quantum worlds seem divided by what Heisenberg in the late 1920s called a “cut.” For him and Niels Bohr in Copenhagen, reality had to be described by classical physics, while quantum mechanics was the theory that we, as classical entities ourselves, needed to describe what we observed about the microscopic world. Nothing more, nothing less.
But why should there be two distinct types of physics — classical and quantum — for big and small things? And where and how does one take over from the other? To Bohr and his colleagues, the scale of atoms and that of people seemed so profoundly disparate that the question didn’t seem to matter much. In any case, they said, we have some choice over where we place the cut, depending on what we decide to include in our quantum equations. But today we can probe the world over many length scales, including the in-between mesoscale of, say, a few nanometers, where it’s not clear whether quantum or classical rules should apply. And in fact we can still — if the experiments are controlled and sensitive enough — find quantum behavior in objects big enough to be seen with an ordinary optical microscope. So there’s no avoiding the problem of how to explain the quantum-to-classical transition — the “becoming real” that seems to happen when we zoom out or make a measurement.
Quantum mechanics itself didn’t seem to explain this measurement process, in which all the quantum probabilities represented in the wave function “collapse” into a single observed value. For Bohr and his colleagues in Copenhagen, the collapse was just figurative: a reflection of the classical world we experience. Others have tried to explain the collapse as a real, spontaneous, randomly timed physical event that picks out a unique outcome from among the many possibilities — although just what factors would cause such a physical collapse are unclear. Others invoke the description postulated by Louis de Broglie and later developed by David Bohm, in which a particle does have well-defined properties, but it is steered by a mysterious “pilot” wave that produces the strange wavelike behavior of quantum objects, such as interference. And others have adopted Hugh Everett’s 1957 interpretation, now commonly called “many worlds,” which supposes that there is no collapse, but that all measurement outcomes are realized in parallel universes, so that reality is constantly branching into multiple, mutually inaccessible versions of itself.
All this has always struck me as fanciful. Why not just see how far we can get with conventional quantum mechanics? If we can explain how a unique classical world arises out of quantum mechanics using just the formal, mathematical framework of the theory, we can dispense with both the unsatisfactory and artificial cut of Bohr’s “Copenhagen interpretation” and the arcane paraphernalia of the others.
This is where Zurek’s work comes in. Starting in the 1970s, he and the physicist H. Dieter Zeh looked closely at what quantum theory itself tells us about measurements. (This might have happened much sooner if researchers had not been discouraged for decades from asking questions about these foundational but unresolved issues in the theory, on the grounds that it was all just pointless philosophy.)
The central element of Zurek’s approach is the phenomenon called quantum entanglement, another of the nonintuitive things that happen at quantum scales. Schrödinger named this phenomenon in 1935, arguing that it is in fact the key feature of quantum mechanics. He came up with the name after Albert Einstein and colleagues pointed out that, after two quantum particles come into contact via physical forces, they appear to be weirdly interconnected; if you measure one of them, it looks like you instantaneously influence the properties of the other, even if they’re no longer close together. “Looks like” is the essential term here: Actually, quantum mechanics says that the interaction and resulting entanglement renders the particles no longer separate entities. They are described by a single wave function that defines the possible states of both particles. For instance, the joint wave function might say that whichever direction one of them is magnetically oriented, the other must be oriented in the opposite direction.
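A textbook example of such a joint wave function (again an illustrative sketch rather than anything quoted from the article) is the two-particle singlet state, in which neither particle has a definite orientation on its own, yet measurements along the same axis are guaranteed to give opposite results:

\[
|\Psi\rangle \;=\; \frac{1}{\sqrt{2}}\Bigl(\,|{\uparrow}\rangle_{1}|{\downarrow}\rangle_{2} \;-\; |{\downarrow}\rangle_{1}|{\uparrow}\rangle_{2}\,\Bigr).
\]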
When particles interact, entanglement is inevitable. This means something for the measurement process: The quantum objects under observation become entangled with the atoms of the measuring instrument. “Measurement” here doesn’t have to imply probing the object with some fancy bit of scientific kit; it applies to any quantum object interacting with its environment. The molecules in an apple are described by quantum mechanics, and photons of light bouncing off the surface molecules get entangled with them. Those photons carry information about the molecules to your eyes — say, about the redness of the apple’s skin, which stems from the quantum energy states of the molecules that constitute it.
In other words, Zurek and Zeh realized, entanglement is ubiquitous, and it is the information conduit between quantum and classical. As a quantum object interacts with its environment, it becomes entangled with it. Using nothing but regular quantum math, Zeh and Zurek showed that this entanglement “dilutes” the quantumness of the object because it becomes a shared property with the entangled environment, so that quantum effects quickly become unobservable in the object itself. They call this process decoherence. For example, a superposition of the quantum object becomes spread out among all its environmental entanglements, so that to deduce the superposition we’d need to examine all the (rapidly multiplying) entangled entities. There’s no more hope of doing that than there is of reconstructing a blob of ink once it has dispersed in the ocean.
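One conventional way to express this dilution (a minimal sketch in the standard notation of open quantum systems, assuming a two-state system and environment records |E_0⟩ and |E_1⟩; none of these symbols are taken from the book) is to trace the environment out of the joint state: the off-diagonal "coherence" terms of the system's reduced density matrix are then suppressed by the overlap of the environment records, which rapidly approaches zero as more of the environment becomes involved:

\[
|\Psi\rangle=\alpha\,|0\rangle|E_{0}\rangle+\beta\,|1\rangle|E_{1}\rangle
\;\;\Longrightarrow\;\;
\rho_{S}=\mathrm{Tr}_{E}\,|\Psi\rangle\langle\Psi|=
\begin{pmatrix}
|\alpha|^{2} & \alpha\beta^{*}\langle E_{1}|E_{0}\rangle\\
\alpha^{*}\beta\,\langle E_{0}|E_{1}\rangle & |\beta|^{2}
\end{pmatrix},
\qquad
\langle E_{0}|E_{1}\rangle \to 0 .
\]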
Decoherence happens incredibly fast. For a dust grain floating in the air, collisions with photons and surrounding gas molecules will produce decoherence in about 10^-31 seconds — about a millionth of the time it takes for light to traverse a single proton. In effect, decoherence destroys delicate quantum phenomena almost instantly once they encounter an environment.
But measurement is not just about decoherence. It is entanglement with the environment that imprints information about the object on that environment — for example in a measuring device. For the past two decades or so, Zurek has been working out how that happens. It turns out that some quantum states have mathematical features that allow them to generate multiple imprints on the environment without being blurred into invisibility by decoherence. These states thus correspond to properties that “survive” into the observable, decohered classical world.
This is possible because the interactions that generate each imprint return the quantum system to the state it was in before the interaction, rather than knocking it into a different state or mixing it up with others. Photons, for example, can bounce off an atom and carry off positional information about it without changing the quantum state of the system.
Zurek calls these robust states “pointer states,” because they are the ones that can cause the needle in a measuring device to point to a particular outcome. Pointer states correspond to properties that are classically observable, such as position or charge. Quantum superpositions, meanwhile, don’t have this property; they can’t generate copies robustly, and so we can’t observe them directly. In other words, they aren’t pointer states.
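In the decoherence literature this robustness is often stated schematically as follows (a paraphrase of the standard condition, not a quotation from the book): the pointer observable commutes with the system-environment interaction, so copying a pointer state leaves it intact while the environment acquires a record of it, whereas a superposition of pointer states instead becomes entangled with the environment and decoheres:

\[
[\hat{O}_{\mathrm{pointer}},\,\hat{H}_{\mathrm{int}}]\approx 0
\qquad\Longrightarrow\qquad
|s_{i}\rangle\,|E\rangle \;\longrightarrow\; |s_{i}\rangle\,|E_{i}\rangle .
\]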
Zurek shows that pointer states can be efficiently and robustly imprinted again and again in the environment. Such states are the “fittest,” he told me. “They can survive the process of copying, and so the information about them can multiply.” They are, by analogy with Darwinian evolution, “selected” for translation to the classical world because they are good at becoming amplified — replicated, you could say — in this way. This is the “quantum Darwinism” of Zurek’s book title.
These imprints multiply extremely quickly. In 2010, Zurek and his collaborator Jess Riedel calculated that within a microsecond, photons from the sun will imprint the location of a grain of dust about 10 million times.
Zurek’s theory of quantum Darwinism — which, again, uses nothing more than the standard equations of quantum mechanics applied to the interaction of the quantum system and its environment — makes predictions that are now being tested experimentally. For example, it predicts that most of the information about the quantum system can be gleaned from just a very few imprints in the environment; the information content “saturates” quickly. Preliminary experiments confirm this, but there’s more to be done.
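The quantitative version of this prediction is usually phrased (sketched here in standard information-theoretic notation, not quoted from the book) in terms of the mutual information between the system S and a fragment F of its environment: even a small fragment already supplies nearly all of the classical information H(S) about the pointer observable, so a plot of I(S:F) against fragment size rises steeply and then plateaus, and the number of disjoint fragments that each reach the plateau measures the redundancy of the record:

\[
I(S{:}F) \;=\; H(S)+H(F)-H(S,F)\;\approx\; H(S)
\qquad \text{for fragments } F \text{ much smaller than the whole environment.}
\]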
Each imprint, as we’ve seen, corresponds to a classical observation: something we can consider an element of our reality. The electron is magnetically oriented upward, say, in this imprint. But isn’t it conceivable, because the original quantum state contains probabilities of different outcomes, that one imprint might correspond to “up” and another to “down,” so that different observers see different realities — not a superposition exactly, but a clear consequence of it in the form of multiple versions of classical reality?
This leads us to another revelation of decoherence theory, the one that persuades me that Zurek’s theory now tells a complete story. It predicts that all the imprints must be identical. Thus, quantum Darwinism insists that a unique classical world can and must emerge from quantum probabilities. This imposition of consensus obviates the rather mysterious and ad hoc process of collapse, in favor of something more rigorous. The object being observed, surrounded by a cloud of identical, observable imprints of it in its macroscopic environment, forms an element of “relatively objective existence,” as Zurek puts it. It becomes a part of our concrete classical reality, which he calls an extanton.
This is where the theory promises to dissolve disputes about interpretation. Zurek says that it achieves what might have seemed impossible: a reconciliation of the Copenhagen and many-worlds interpretations. In the former, the wave function is considered epistemic: It describes what we can know about the quantum world. In the latter, the wave function is ontic: It is the ultimate reality — a description of all branches of reality at once — even though we can only ever experience one branch of this quantum multiverse. Zurek says the wave function is actually both. “The two conflicting views of quantum states, [epistemic and ontic], and the insistence that states must be one or the other is wrong,” he told me when I quizzed him about the story his book tells. Instead, states are “epiontic.” That is, before decoherence takes place, all the quantum possibilities are in some sense present. But decoherence and quantum Darwinism select only one of them as an element of our observable reality, without any need to assign all the others a classical reality in some other world. The other states exist in an abstract space of possibilities, but they stay there, never getting the chance to grow via entanglement into observable realities.
I wouldn’t want to claim that Zurek’s picture clears up quantum mechanics at last. Why, for example, does this outcome get selected in a given measurement and not that one? Must we (as Bohr and Heisenberg insisted) just accept that it happens randomly, without any cause? And at what point does the quantum world commit itself irrevocably to a particular measurement outcome, such that we can no longer “gather up” a superposition from the entangled web of interactions between object and environment? And most importantly: How can we test the theory more rigorously?
Some experts I’ve spoken to about Zurek’s picture express guarded enthusiasm. Sally Shrapnel of the University of Queensland in Australia, for instance, told me that Zurek’s program “represents an elegant approach to explaining the emergence of classicality from the basic postulates of quantum theory,” but that it still doesn’t address “the thorny question of what the underlying ‘quantum substrate’ actually is.” How, for example, are we supposed to think about the domain in which all possibilities still exist before decoherence? How “real” is it?
Renato Renner of the Swiss Federal Institute of Technology Zurich is not persuaded that resolving the conflict between the Copenhagen and many-worlds interpretations solves all the problems. He points out that it’s possible to construct weird yet experimentally feasible scenarios in which different observers can’t agree on the outcome. Even if such exceptions seem contrived, he thinks they show that we’ve yet to find a quantum interpretation that really works.
Still, the philosophy of Zurek’s approach seems right to me. Instead of trying to concoct elaborate stories to resolve the measurement problem of quantum mechanics, why not patiently and carefully work through what standard quantum mechanics can say about how information regarding a quantum object gets out into the observable world? Here the quantum pioneers left a lot of work unfinished in the revolution they started a century ago, prematurely foreclosing the issue (usually by insisting on the Copenhagen interpretation or just accepting it without question). Now we can at least hope to complete that task.