That’s impossible!

How good scientists reach bad conclusions

by Ralph C. Merkle
Principal Fellow, Zyvex
www.zyvex.com
April, 2001

This web page is also available at http://www.foresight.org/impact/impossible.html.

"The human mind treats a new idea the way the body treats a strange protein: it rejects it."

Peter Medawar, Nobel laureate for the discovery of acquired immunological tolerance

Introduction

Self replicating programmable manufacturing architectures were understood by von Neumann in the 1940s (von Neumann, 1966). Feynman knew in 1959 that we could arrange atoms the way we want (Feynman, 1960). Drexler knew by the late 1970s that assemblers – self replicating programmable manufacturing systems able to arrange atoms – were feasible (Drexler, 1981, 1986, 1992).

Drexler’s synthesis of the ideas of Feynman and von Neumann implies that nanotechnology is feasible. Nanotechnology, the ability to inexpensively arrange atoms in most of the patterns permitted by physical law – often called the manufacturing technology of the 21st century – is expected to revolutionize essentially all manufactured products, from computers to medical instruments to solar cells to batteries to planes and rockets. It has been over forty years since von Neumann and Feynman published their observations, and twenty years since Drexler published his. Yet it is only in the last few years that the feasibility of assemblers has gained wide acceptance.

How can this be?

This is, of course, just one example of the slow acceptance of new ideas. Innumerable quotes from the sages of ages past have denied the feasibility, desirability or practicality of airplanes, space flight, nuclear weapons, radio, anesthetics, sterile procedures, and steam engines; indeed, almost any advance of any merit has at first been attacked and ridiculed for decades (see http://www.foresight.org/News/negativeComments.html).

While the resistance to nanotechnology and assemblers is just a new instance of an age-old pattern, the specifics are still instructive. We’ll review the reasons that have been advanced for rejecting this particular new idea, in the hope of shedding some light on the mechanisms that lead otherwise well informed and well meaning scientists to publicly state claims that subsequently prove to be grossly in error.

In the following discussion, names are omitted to protect the guilty.

Thermal noise

The first and most sweeping rejections were based on general principles. One of the most popular was the impossibility of molecular machines in general because of thermal noise.

Thermal noise, the vibration and motion of atoms and molecules caused by the fact that they have a temperature above absolute zero, is indeed a constraint on molecular machine design, but poses no insurmountable (or even conceptually very difficult) challenges. The most compact argument showing that thermal noise is not incompatible with the existence of molecular machines is the existence of a vast assortment of biological molecular machines, including the ribosome (which can translate mRNA instructions into proteins), molecular motors, and the like.

A more technical argument is to examine the fundamental equation relating thermal noise to positional uncertainty:

σ² = kT / kₛ

This is equation 5.4 introduced in chapter 5 of Nanosystems (Drexler, 1992). It relates the positional uncertainty σ to Boltzmann’s constant k, the absolute temperature T, and the stiffness kₛ (usually in newtons per meter).

Inspection of this equation shows that thermal noise can be controlled by either or both of two approaches: decreasing the temperature T or increasing the stiffness kₛ. Numerical evaluation of this equation shows that positional uncertainty of less than an atomic diameter can be achieved at room temperature when appropriate attention is paid to the stiffness of the design. In short, very small (~100 nanometer) robotic arms able to accurately position highly reactive molecular tools in the face of thermal noise at room temperature are feasible.
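
For the numerically inclined, the evaluation takes only a few lines. The following sketch (in Python) assumes an illustrative stiffness of 10 newtons per meter – an assumed round number, chosen purely to show the order of magnitude, not to model any particular design:

    # Thermal positional uncertainty: sigma^2 = kT / ks
    import math

    k = 1.380649e-23    # Boltzmann's constant, J/K
    T = 300.0           # room temperature, K
    ks = 10.0           # stiffness, N/m (an assumed, illustrative value)

    sigma = math.sqrt(k * T / ks)
    print(f"sigma = {sigma:.2e} m")   # about 2e-11 m, i.e., 0.02 nm

Since atomic diameters are a few tenths of a nanometer, a positional uncertainty of roughly 0.02 nanometers is an order of magnitude smaller than an atom.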

Despite both a technical argument and a very accessible and straightforward argument from biology, thermal noise was raised repeatedly as a “fundamental” problem that would make molecular machines forever infeasible. In part, this is because of a lingering vitalism: the idea that living systems are governed by their own special rules that go beyond mere physics. While vitalism is seldom explicitly defended, living systems and man-made systems are often seen as distinct entities governed by distinct rules. Pragmatically, a bicycle is very different from a horse, and many of the rules for dealing with horses are in fact different from the rules for dealing with a bicycle. Only from the perspective of physics and chemistry can a horse and a bicycle be viewed as simply different embodiments of the same underlying physical laws, and it is usually necessary to remind even those familiar with this fact of its applicability.

A second issue is lack of familiarity with the application of known concepts (thermal noise) in an unfamiliar context (very small machines). The applicability of conventional concepts of machinery that we know and understand at the macro scale is not necessarily obvious at the micro or molecular scales. Would the molecular teeth on a molecular gear slip because of thermal noise?  It’s certainly possible to design molecular gears that don’t work – the important point is that it’s also possible to design molecular gears that do work. Will a molecular “hand” fail to hold onto a molecular part because the part is too slippery and jiggly? Again, it’s possible to design a hand that can’t pick up the part it’s supposed to, but it’s also possible to design a molecular gripper that will attach and hold onto a molecular handle in the face of thermal noise.

In short, the unfamiliarity of the intellectual terrain, coupled with the existence of designs that don’t work, confuses the newcomer. The existence of effective molecular designs is obscured by the many ways in which it’s possible to fail. Until you actually work out the details of designs that deal effectively with thermal noise (and if you forget that living systems have already dealt with this issue), it’s possible to imagine that working systems don’t exist.

Thankfully, the argument that thermal noise makes molecular machines impossible has sunk into oblivion.

Quantum mechanics

The second blanket argument against molecular machines is “Quantum effects make them impossible.” As few people actually understand what quantum effects are, it is possible to become very confused about the subject and hence to reach wildly erroneous conclusions without any great difficulty. Again, the most compact argument that quantum effects don’t pose any fundamental barrier to the design of molecular machines is the existence of biological molecular machines. This reality is again obscured by an unconscious vitalism that blinds us to the consequences of biology for artificial systems.

As with thermal noise, there is also a more technical argument that positional uncertainty caused by quantum mechanics does not pose a fundamental problem. The equation describing the quantum mechanically induced positional uncertainty in the ground state (that is, the positional uncertainty in the limit of low temperature, when thermal noise has no effect) is:

σ² = ℏ / (2 √(m kₛ))

where ℏ is Planck’s constant h divided by 2π, m is the mass, and kₛ is again the stiffness (the restoring spring constant). Numerical evaluation of this equation again shows that molecular machines are entirely feasible (which is a good thing, as none of us would be here otherwise).
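
A similar back-of-the-envelope check (again a sketch, with the same assumed 10 N/m stiffness as above and the mass of a single carbon atom) shows that the quantum contribution is smaller still:

    # Quantum ground-state positional uncertainty:
    # sigma^2 = hbar / (2 * sqrt(m * ks))
    import math

    hbar = 1.054572e-34   # reduced Planck constant, J*s
    m = 1.9926e-26        # mass of one carbon-12 atom, kg
    ks = 10.0             # stiffness, N/m (assumed, as before)

    sigma = math.sqrt(hbar / (2.0 * math.sqrt(m * ks)))
    print(f"sigma = {sigma:.2e} m")   # about 1.1e-11 m, i.e., 0.01 nm

Even in this zero-temperature limit, the positional uncertainty is far smaller than an atomic diameter.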

The Born-Oppenheimer approximation

Another perspective on quantum effects is to use the Born-Oppenheimer approximation. This approximation is fundamental to the entire field of molecular mechanics, which models the behavior of individual atoms and molecules. In this approximation, we note that the nuclei of atoms are much more massive than electrons. The carbon nucleus, for example, is over 20,000 times as massive as an electron. As a consequence, the positional uncertainty of the nucleus of a carbon atom is much less than the positional uncertainty of an electron, and we can reasonably approximate the carbon nucleus as a point mass. While we must still deal with the electrons quantum mechanically, the electronic contribution to nuclear motion is simply an applied force. That is, while the electrons are themselves quantum mechanical entities, the impact they have on nuclear motion is through electrostatic attraction: the positively charged nuclei are attracted to the electron clouds that surround them. Further, in the ground state, the electronic structure is almost exclusively a function of the positions of the nuclei.

This means that knowledge of the nuclear positions lets us determine the forces acting between the nuclei, by the following reasoning: the electronic ground state is determined by the positions of the nuclei, and the nuclei in turn experience an attractive force to the electrons – the strength and direction of the force being determined by the shape of the electron cloud, i.e., the electronic ground state. Thus, the forces acting between nuclei are determined, to a good approximation, by the locations of the nuclei. While determining these forces requires a quantum mechanical calculation, the forces themselves are classical. As a consequence, nuclear positions and internuclear forces can be well described for many purposes by classical mechanics. The approximations required are that the electronic structure be in its ground state, that the nuclei can be reasonably approximated by point masses, and that nuclear motions be slow with respect to electronic motions.

In summary, the Born-Oppenheimer approximation permits the use of classical mechanics in modeling and thinking about molecular and atomic motions. Needless to say, this greatly simplifies the conceptual framework required for thinking about molecular machines. While the limitation to ground state electronic structures means this approach has limited applicability to electronic devices (which typically depend on the excited states of at least some electrons), it is ideal if we are considering nuclear motions: and pretty much by definition, manufacturing is concerned with the movements of atoms.
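
To make the picture concrete, here is a minimal molecular mechanics sketch – not any published model, and with an assumed bond stiffness – of a single carbon nucleus bound to a fixed neighbor. The electrons never appear explicitly; their entire effect is the classical restoring force of the bond:

    # Classical nuclear motion on a Born-Oppenheimer potential surface.
    # The harmonic bond stands in for the quantum mechanically computed
    # ground-state energy; its stiffness here is an assumed value.
    import math

    m = 1.9926e-26    # carbon atom mass, kg (treated as a point mass)
    ks = 440.0        # bond-stretch stiffness, N/m (typical C-C scale, assumed)
    r0 = 1.54e-10     # equilibrium C-C bond length, m

    r, v = 1.60e-10, 0.0    # start slightly stretched, at rest
    dt = 1.0e-17            # time step, s

    for _ in range(1000):   # velocity Verlet integration
        v += 0.5 * (-ks * (r - r0)) / m * dt
        r += v * dt
        v += 0.5 * (-ks * (r - r0)) / m * dt

    energy = 0.5 * m * v * v + 0.5 * ks * (r - r0) ** 2
    print(f"r = {r:.3e} m, total energy = {energy:.3e} J")

The nucleus simply oscillates about the equilibrium bond length, with total energy conserved – classical mechanics all the way down, even though the force law itself came from a quantum mechanical calculation.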

Computational chemists, who deal routinely with the issues of electronic structure, molecular mechanics, molecular dynamics, ab initio quantum chemistry and the like are uniformly supportive of the feasibility of molecular machines, both biological and non-biological. They can and do model proposed molecular machines and are quite uninhibited about drawing conclusions about their feasibility, in either direction. Molecular machines that don’t work are shot down without the slightest hesitation. What’s interesting is that some proposals for artificial molecular machines can survive this gauntlet.

A slight variant of the “quantum mechanics says molecular machines are impossible” argument is the claim that Heisenberg’s uncertainty principle makes molecular machines impossible. This argument, too, is most compactly refuted by pointing at living systems. It is also addressed by the discussion of the Born-Oppenheimer approximation, which, because it is based on the fundamental equations of quantum mechanics (i.e., Schrödinger’s equation), automatically takes into account the uncertainty principle.

For a brief introduction to the Born-Oppenheimer approximation and its applications to computational chemistry, see http://www.zyvex.com/nanotech/compNano.html#molecularmechanics or type in “Born-Oppenheimer approximation” to your favorite search engine.

Power

While concerns about thermal noise and quantum effects form the basis for most of the sweeping arguments against the feasibility of molecular machines, a lesser argument is “You can’t provide power.” This argument is seen less often because it is fairly obvious that power does not create a fundamental problem. Molecular machines that are fixed in place (either on a surface or within a three dimensional structure) can be powered electrically. Molecular machines in the body (useful for medical applications) can be powered by glucose-oxygen fuel cells. Molecular machines floating in a tank can be powered by chemical energy (using, for example, a fuel cell), by light, or by acoustic energy. While not exhaustive, these approaches cover a wide range of applications. The fact that biological molecular machines have already demonstrated the feasibility of several of these approaches is again the most compact argument in favor of feasibility.

Biological versus artificial

The arguments outlined above largely disappeared from serious discussions several years ago, but somewhat less sweeping variants persisted for some time. These are of the general form “Well, yes, living systems exist and so molecular machines are feasible, but your proposals don’t work!”

The simplest form of this argument involves casually reading some account about nanotechnology, imagining a molecular machine that doesn’t work, and then announcing that this unworking machine was actually proposed by someone. Again, given most people’s (and most scientists’) lack of familiarity with molecular machines, and the many unworking designs that can be readily imagined when compared with the smaller number of designs that actually do something useful, it is fairly easy for the neophyte to get confused.

As these arguments are based on the fact that living systems and proposed artificial molecular machines are not the same, we consider some of the likely differences in the following sections.

Making diamonds

One of the major distinctions between biological systems and nanotechnology will be the latter’s ability to synthesize various non-biological materials, notably diamond, graphite, and variants thereof. The chemical reactions involved in the chemical vapor deposition (CVD) growth of diamond have been extensively studied and have provided the general reaction mechanisms on which proposals for the positional assembly of diamond have been based (see http://www.zyvex.com/nanotech/diamondCVD.html for some information available on the web). The proposals for artificial controlled synthesis of complex diamondoid molecular structures involve positionally controlled highly reactive species (radicals, carbenes and the like) in a vacuum. These reaction mechanisms have been investigated using standard computational chemistry tools (molecular mechanics, molecular dynamics, ab initio quantum chemistry, etc.). (See, for example, http://www.zyvex.com/nanotech/Habs/Habs.html for some references and further reading.)

The simplest way to get confused is to casually read a newspaper account about assemblers “picking up individual atoms” and not read either the actual proposals or the published analyses of the reactions. This leads to statements of the general form “these assemblers can’t possibly work because isolated atoms are highly reactive and will react with the solution around them.” The error in logic illustrated here is readily seen: the statement ignores the actual proposal (which uses vacuum specifically to avoid undesired side reactions) and substitutes instead the author’s faulty reconstruction of the proposal based on popular accounts.

Filling in the blanks

This general mechanism is seen repeatedly: arguments against the feasibility of what has been proposed are based on a misunderstanding. In debate, this approach is called a “strawman,” but in the context of this discussion it occurs when someone either reads an inaccurate description or misinterprets an accurate description.

People normally “fill in the blanks” when presented with information that is either incomplete or not immediately interpretable. This is most dramatically seen in the visual system, where the brain interprets a few strokes on paper as a person, a bird, or some other complex object. While this mechanism generally serves us well, it is prone to fairly spectacular failures: when we are groping to understand a new idea or concept, the brain can “fill in the blanks” in ways that are wildly inappropriate. The new mental framework is not understood, while the old and well understood mental frameworks don’t apply. Using the old mental frameworks to “correct” omissions, errors, or misperceptions of the new idea typically results in an inconsistent muddle. This mental muddle is then attacked as incorrect – as indeed it is – but is also incorrectly attributed to someone else.

Those familiar with conventional chemistry usually assume reactions take place in solvent – the idea that reactions could take place in vacuum is unfamiliar, and hence rejected – not at a conscious level, but unconsciously. While this specific misinterpretation is not that common, when a new proposal contains many aspects that are different from the conventional then the probability of misinterpreting at least one component of the proposal can be fairly high.

Positional assembly and self assembly

The “standard” proposal for an assembler uses vacuum and positional assembly of highly reactive molecular tools. All three aspects are at odds with the more common solution based self assembly of molecular components that are selectively reactive. Self assembly of highly reactive compounds won’t work, for the simple reason that highly reactive compounds, if allowed to bounce around at random in solution, will tend to react with each other, with the solvent, and with the container in which they are confined. This conclusion has been advanced as an argument against assemblers. However, this claimed failure mode is avoided by the use of positional assembly in vacuum.  Highly reactive intermediate structures can’t bounce around at random because they are held in place, much as a hot soldering iron won’t randomly burn holes in a circuit board because it is likewise held in place.

Fear

Erroneous interpretations can take place without any obvious emotional bias – the confusion seems to arise from simple misinterpretation rather than any dislike or distaste for the conclusion. However, this is not always the case.  Recent discussions about the potential dangers posed by self replicating systems have created highly charged emotional environments. In particular, if self replicating weapons systems could be developed, they could potentially cause unlimited damage. Researchers working on nanotechnology have obvious reasons for wishing to avoid this conclusion. Not only would this force them to review their own motives for doing research that might potentially be harmful, it might also have an adverse impact on funding. If nanotechnology research might lead to new and novel weapons systems of great power, then public support for this research might be jeopardized, and funding might shrink.

The combination of innocent misunderstandings of new concepts and a highly charged emotional environment has, as might be expected, produced additional erroneous conclusions. As an example, the section heading “Universal assemblers” can be found in Drexler’s popular book, Engines of Creation. This has been misinterpreted to mean that Drexler has proposed devices able to make absolutely anything (that is, they are “universal” in a very strong sense). Although the notes in Engines of Creation clearly state that assemblers will not be able to make everything that might be possible, the confusion about a “Universal assembler” persists. (As an aside, the section heading “Magic paper made real” can be found later in Engines of Creation – although no one has yet accused Drexler of proposing that hypertext and the world wide web should be based on magic.) In debate, this kind of reinterpretation of the opposing position would again be called a “strawman”: a distortion that transforms a reasonable proposal into a superficially similar but much weaker (or often false) one. In the present context, this is simply the consequence of a misinterpretation of a new idea. In the absence of any prior background or familiarity with the concept of an assembler, one can readily imagine someone skimming through Engines of Creation to get an overview of Drexler’s proposals, seeing the section heading, failing to read the more in depth analysis in the notes, never reading the denser Nanosystems (Drexler’s technical introduction to nanotechnology), and arriving at a false idea of what had been proposed.

If we consider the combined effects of misinterpretations and emotional bias against the feasibility of self replicating weapons systems, the result is not hard to imagine: a vigorous argument against the feasibility of “Universal assemblers” coupled with the unstated assumption that this implies that self replicating weapons systems are also infeasible. Of course, the feasibility or otherwise of “Universal assemblers” is independent of the feasibility of self replicating weapons systems. A self replicating weapon need not be universal in any sense of that term to be quite nasty.

While the general thrust of this paper is how simple misinterpretations can result in false conclusions, some erroneous statements do not appear to have any obvious antecedents. An example of this is the claim that assemblers are impossible because chemistry is the concerted motion of ten or more atoms. This claim raises some immediate and obvious questions: why ten? Why not nine? Or eleven? How can this statement be made compatible with the observation that hydrogen gas burns when it is mixed with fluorine gas? This reaction (and many others in chemistry) involves fewer than ten reacting atoms. How is it that this statement is supposed to apply to assemblers, but not to living systems? While it is often possible to diagnose the cause of an erroneous statement, this is not always the case. Perhaps further discussion will shed light on the origins of this particular statement, or perhaps it will simply fade away and be forgotten.

Does replicating imply living?

Besides the emotionally charged issues surrounding weapons systems, there are also the emotions that center around the confusion between assemblers and living systems. We know that living systems replicate, and hence we are prone to the erroneous conclusion that replicating systems are living. To a great extent, this is because the most common examples of replicating systems are living. Few people (indeed, very few people) have seriously studied the issues surrounding artificial programmable self replicating manufacturing systems.  While there is a body of literature on such systems extending back to von Neumann, this work is little known outside of a small research community. Zyvex has proposed a replicating system that is not self-replicating and which is very clearly mechanical in nature (rather than being in any way related to living systems, see http://www.zyvex.com/Research/exponential.html) but this kind of clear illustration of the distinctions between living systems and replicating systems is again not widely known.

This combination of factors makes it easy for discussions about assemblers to get badly derailed into irrelevancies and error, as has happened to something as simple as their estimated design complexity. The erroneous logic proceeds as follows: first, assume that living systems are necessarily incredibly complex (although the fact that the genome of Mycoplasma genitalium – roughly 580,000 base pairs, or about 145 kilobytes at two bits per base – would fit on a tenth of a floppy disk contradicts this assumption). Second, assume (explicitly or implicitly) that assemblers are living, and must therefore themselves be incredibly complex. And finally, assert the impossibility of designing something that is incredibly complex (with overtones of dismay that anyone could be so filled with hubris as to think they could create life!).
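
The parenthetical arithmetic is easy to verify (the two-bits-per-base encoding and the 1.44 megabyte floppy are the natural choices here):

    # Does the Mycoplasma genitalium genome fit on a tenth of a floppy?
    genome_bases = 580000                  # approximate genome size, base pairs
    genome_bytes = genome_bases * 2 / 8    # 2 bits per base -> ~145,000 bytes
    floppy_bytes = 1.44e6                  # a standard 1.44 MB floppy disk
    print(genome_bytes / floppy_bytes)     # ~0.10: almost exactly a tenth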

As assemblers are not living, and in particular as they need not possess the adaptability of living systems, they can be very much simpler. Replication, by itself, does not imply great design complexity. A living system such as a horse is able to fuel itself from sugar lumps, carrots, hay, potatoes, grass, or a huge number of other potential foodstuffs. A car, by contrast, uses a single source of energy: gasoline. Feed it sugar lumps and it will promptly stop working. Cars are not required to cope with the natural world. If we provide reasonably paved roads and gas stations, then a car can provide an economical means of transportation. It is neither infinitely complex nor living, and can be designed by mere mortals (though recent designs are clearly moving towards the more complex regions of the design space). Fears that it might go feral and suck sap from trees are simply not plausible, yet the analogous misperception about artificial replicating systems is not uncommon.

This principle – that a replicating system can be simplified by providing it with a more controlled environment in which to operate – has very general applicability. It is possible to simplify the design of the replicating component by providing external support. An example of this is the broadcast architecture. We know that living systems carry their own blueprints (in the form of DNA). Artificial systems, however, could function just as well with no on-board blueprints at all, receiving their instructions instead from an external source. This eliminates both the on-board instructions and the various mechanisms that read, copy, and error check them. Simplified architectures for assemblers have been proposed that use acoustic broadcast of instructions (see http://www.zyvex.com/nanotech/casing.html), even though this approach is very non-biological. The broadcast architecture simplifies many design issues, permits the rapid redirection of the manufacturing activities, reduces the size of the replicating component, and is inherently safe, as illustrated in the sketch below. This last property follows from the inability of the replicating component to do anything without a continuous stream of broadcast instructions. Once beyond the range of the broadcast, or if the broadcast is turned off, the device is incapable of any further coherent function.
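
To see why the safety property is inherent, consider this schematic sketch of the control loop (the names are invented for illustration; the actual proposal at the URL above is mechanical and acoustic, not software):

    # Broadcast architecture, schematically: the device stores no blueprint
    # and executes instructions only as they arrive from outside.
    # All names here are hypothetical, for illustration only.
    def run(device, receive_instruction):
        while True:
            instruction = receive_instruction()   # from the external broadcast
            if instruction is None:               # out of range, or switched off
                break                             # the device halts, inertly
            device.execute(instruction)           # one assembly operation

Turning off the transmitter is, in effect, deleting the device’s entire “genome” at once.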

For further discussions about replicating systems, see http://www.zyvex.com/nanotech/selfRep.html.

Conclusions

Given this kind of swirling interplay of confusion, misinterpretation, emotion, and simple ignorance, it is remarkable that new ideas are ever accepted – and indeed, we have examples of human civilizations that stagnated for long periods, rejecting new ideas altogether. What is remarkable is not the span of many decades that is required before a new idea can gain acceptance, but that it can gain acceptance at all.

As a society we have much to gain by improving our ability to analyze new ideas, more rapidly adopting those that are correct and more rapidly rejecting those that are wrong. The standard of living we enjoy was built on centuries and millennia of technological advance. Life in the Middle Ages was nasty, brutish, and short, and it would be our lot today had we as a society followed the advice of those who ridiculed and attacked all that was new and different. If we wish to improve our standard of living and that of future generations, we must learn to accurately evaluate new ideas and new concepts, filtering out the emotional biases and confusion that seem inevitably to surround our perceptions of them.

References

Drexler, K.E. (1981) Molecular Engineering: an approach to the development of general capabilities for molecular manipulation, Proceedings of the National Academy of Sciences USA, 78, pp. 5275-5278.

Drexler, K. E. (1986) Engines of Creation, Doubleday.

Drexler, K.E. (1992) Nanosystems: Molecular Machinery, Manufacturing, and Computation, Wiley & Sons.

Feynman, R.P. (1960) There's Plenty of Room at the Bottom, Caltech's Engineering and Science, February.

Musgrave, C.B., Perry, J.K., Merkle, R.C., and Goddard, W.A. (1992) Theoretical Studies of a Hydrogen Abstraction Tool for Nanotechnology, Nanotechnology, 2, pp. 187-195.

von Neumann, J. and Burks, A.W. (1966) Theory of Self-Reproducing Automata, University of Illinois Press.