Self Replicating Systems and Molecular Manufacturing

by

Ralph C. Merkle

Xerox PARC, 3333 Coyote Hill Road, Palo Alto, CA 94304.

This version is adapted and revised from the original version, published in the Journal of the British Interplanetary Society, Volume 45 (1992) pages 407-413.

Further information on self replicating systems is available at http://www.zyvex.com/nanotech/selfRep.html.

Copyright 1992 by Xerox Corporation. All Rights Reserved.

Abstract

Self replicating systems have long been seen as an economical method of exploring space. For example, the costs involved in the exploration of the galaxy using self replicating probes would be almost exclusively the design and initial manufacturing costs. Subsequent manufacturing costs would then drop dramatically.

The desirable economic properties of self-replicating systems are also applicable to more mundane applications. In particular, the (at present theoretical) concept of molecular manufacturing uses the idea of a self-replicating system to achieve low cost in the manufacturing process. Molecular manufacturing merges computer-controlled self-replicating systems with atomically precise structural modifications. This should let us create a low cost manufacturing technology able to build almost any product that is (a) specified with atomic precision and (b) consistent with the laws of chemistry and physics.

Introduction

Design concepts for the use of self replicating systems in space exploration have been discussed for several years [12]. In the NASA LMF (Lunar Manufacturing Facility)[10], a small cross section of conventional manufacturing technology, along with facilities for mining and refining lunar soil, would have been packed into a "seed" of perhaps a hundred tons and then launched to the moon. The obvious advantage of such a facility is the reduced need to launch further equipment manufactured with earth-bound systems: equipment required on the moon could be made as needed using the available lunar material, eliminating further launch costs.

These proposals draw on a body of theoretical work, started by von Neumann, on the design of self replicating systems. A wide range of methods have been considered[10, particularly page 190 et seq., "Theoretical Background"]. The von Neumann architecture for a self replicating system is perhaps the ancestral and archetypal proposal[11, 13].

The von Neumann architecture for a self replicating system

Von Neumann's proposal consisted of two central elements: a Universal Computer and a Universal Constructor (see figure 1). The Universal Computer contains a program that directs the behavior of the Universal Constructor. The Universal Constructor, in turn, is used to manufacture both another Universal Computer and another Universal Constructor. Once finished, the newly manufactured Universal Computer would be programmed by copying the program contained in the original Universal Computer, and program execution would then begin.

Figure 1.
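
To make the control flow of this replication cycle concrete, here is a minimal sketch in Python. It is purely illustrative: the class and function names are our own invention, not von Neumann's formalism, and it models only the data flow, not any physical mechanism.

    # A minimal model of the von Neumann replication cycle (illustrative).
    class UniversalConstructor:
        def build(self, blueprint):
            # Manufacture whatever the blueprint describes.
            return blueprint()

    class UniversalComputer:
        def __init__(self):
            self.program = None   # instructions that direct the constructor

    def replicate(parent_computer, constructor):
        # (1) The constructor manufactures a new computer and constructor.
        child_computer = constructor.build(UniversalComputer)
        child_constructor = constructor.build(UniversalConstructor)
        # (2) The parent's program is copied into the new computer.
        child_computer.program = parent_computer.program
        # (3) The child pair can now begin execution and repeat the cycle.
        return child_computer, child_constructor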

This is rather an abstract proposal and covers a wide variety of specific implementations. Furthermore, both the design of the system and the environment in which it is intended to operate must be specified. The range of systems that would successfully operate on the ocean floor would be different from those that would operate in either deep space or an abstract mathematical space that had no physical existence. Von Neumann worked out the details for a Constructor that worked in a theoretical two-dimensional cellular automata world (and parts of his proposal have since been modeled computationally[13]). The Constructor had an arm, which it could move about; and a tip, which could be used to change the state of the cell on which it rested. Thus, by progressively moving the arm and changing the state of the cell at the tip of the arm, it was possible to create "objects" that consisted of regions of the two-dimensional cellular automata world which were fully specified by the program that controlled the Constructor.
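
The operation of such a constructor arm is easy to caricature in code. The following Python sketch is far simpler than von Neumann's actual 29-state construction, and all names in it are invented; it shows only how a program of arm movements and state changes fully specifies an "object" in a grid world.

    # A toy constructor arm in a two-dimensional cellular world (vastly simplified).
    def run_constructor(program, width=8, height=8):
        world = [[0] * width for _ in range(height)]  # all cells quiescent
        x, y = 0, 0                                   # position of the arm tip
        for dx, dy, new_state in program:
            x, y = x + dx, y + dy                     # move the arm
            world[y][x] = new_state                   # change the cell at the tip
        return world

    # The "object" that gets built is fully specified by the program:
    program = [(1, 0, 1), (1, 0, 1), (0, 1, 1)]       # draw a small L shape
    for row in run_constructor(program):
        print(row)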

While this solution demonstrates the theoretical validity of the idea, von Neumann's kinematic constructor (which was not worked out in such detail) has had perhaps a greater influence, for it is a model of self replication which can more easily be adapted to the three-dimensional world in which we live. The kinematic constructor was a robotic arm which moved in three-space, and which grasped parts from a sea of parts around it. These parts were then assembled into another kinematic constructor and its associated control computer.

An important point to notice is that self replication, while important, is not by itself an objective. A device able to make copies of itself but unable to make anything else would not be very valuable. Von Neumann's proposals centered around the combination of a Universal Constructor, which could make anything it was directed to make, and a Universal Computer, which could compute anything it was directed to compute. This combination provides immense value, for it can be re-programmed to make any one of a wide range of things. It is this ability to make almost any structure that is desired, and to do so at low cost, which is of value. The ability of the device to make copies of itself is simply a means to achieve low cost, rather than an end in itself.

Drexler's architecture for an assembler

Drexler's assembler follows the von Neumann architecture, but is specialized for dealing with systems made of atoms. The essential components in Drexler's assembler are shown in figure 2. The emphasis here (in contrast to von Neumann's proposal) is on small size. The computer and constructor both shrink to the molecular scale, while the constructor takes on additional detail consistent with the desire to manipulate molecular structures with atomic precision. The molecular constructor has two major subsystems: (1) a positional capability and (2) the "tip chemistry."

Figure 2.

The positional capability might be provided by one or more small robotic arms, or alternatively might be provided by any one of a wide range of devices that provide positional control[14]. The emphasis, though, is on a positional device that is very small in scale: perhaps 0.1 microns (100 nanometers) or so in size.

As an aside, current SPM (Scanning Probe Microscope) designs employ piezoelectric elements for positional control[21]. A rather obvious question to ask is: why prefer mechanical positioning systems over piezoelectric or other electrostatic devices? The reasons for using basically mechanical devices at the molecular scale are similar to the reasons that mechanical devices are employed at the macroscopic scale: the desire for compactness and high positional accuracy (e.g., high stiffness) weighs against electrostatic and piezoelectric devices. Molecular mechanical devices, on the other hand, can employ very stiff materials and, with appropriate design, can have joints that can rotate easily but which at the same time provide high stiffness in other degrees of freedom[1, 20].

The "tip chemistry" is logically similar to the ability of the Von Neumann Universal Constructor to alter the state of a cell at the tip of the arm, but here the change in "state" must correspond to a real world change in molecular structure. That is, we must specify a set of well defined chemical reactions that take place at the tip of the arm, and this well defined set of chemical reactions must be sufficient to allow the synthesis of the class of structures of interest.

The assembler, as defined here, is not a specific device but is instead a class of devices. Specific members of this class will deal with specific issues in specific ways.

Specifying an assembler

An example might help clarify these concepts. We can view a ribosome as a degenerate case of an assembler. The ribosome is present in essentially all living systems (we will ignore the marginal or debatable cases, such as viruses). It is programmable, in the sense that it reads input from a strand of messenger RNA (mRNA) which encodes the protein to be built. Its "positional device" can grasp and hold an amino acid in a fixed position (more accurately, the mRNA in the ribosome selects a specific transfer RNA, which has in turn been bound to a specific amino acid by a specific enzyme). The one operation available in the "well defined set of chemical reactions" is the ability to make a peptide bond.

Of course, if we really want to specify how a self-replicating system works, we must provide more information. For example, the ribosome functions correctly only in a specific kind of environment. There must be energy provided in the form of ATP; there must be information provided in the form of strands of mRNA; there must be compounds such as amino acids; etc. etc. If the ribosome is removed from this environment it ceases to function.

More generally, to specify an assembler one needs to specify (1) the type and construction of the computer, (2) the type and construction of the positional device, (3) the set of chemical reactions that take place at the tip, (4) how compounds are transported to and from the tip, and how the compounds are modified (if at all) before reaching the tip, (5) the class of structures that can be built, (6) the environment in which it operates, and (7) the method of providing power.

An additional requirement is often significant. While it is true that a ribosome will function in a certain type of environment, it is also true that the environment in which a ribosome functions is found inside the cell wall and not outside. That is, the environment inside the cell is a special kind of environment not normally found in the world. There is a barrier (the cell wall) which maintains an internal environment to which the ribosome is directly exposed, and which separates the internal environment from a less-well-controlled external environment. Of course, having introduced a barrier to maintain an internal environment, it is necessary to introduce some means of transporting the needed compounds across this barrier. Bacteria, for example, have specific transport mechanisms to absorb nutrients from their environment and to maintain appropriate concentrations of those nutrients in their internal environment.

If we wish to adopt this model we need to specify four additional elements of the assembler: (8) the type of barrier used to prevent unwanted changes in the internal environment in the face of changes in the external environment, (9) the nature of the external environment (we assume that item 6 above now refers to the internal environment), (10) the transport mechanism that moves material across the barrier, and (11) the transport mechanism used in the external environment.

Finally, it is often useful to be able to transmit instructions to the assembler. We will therefore include a final element: (12) a receiver that allows the assembler to receive broadcast instructions. The presence of a receiver is not mandatory for the manufacture of interesting structures (as demonstrated by a fertilized egg), but it is extremely useful if we are considering a general purpose device able to make a wide variety of different products. As we will see later, not only does a receiver allow us to re-program the assembler for specific tasks, it also allows us to significantly simplify the computational element.

These, then, are the 12 major elements that must be specified before we can discuss the behavior and properties of an individual member of the class of assemblers. To some extent, item (5), the class of structures that can be built, is implicit in the capabilities of the computer, the positional device, and the set of chemical reactions that can be used during synthesis. In fact, it is possible to argue that item (5) is rather poorly defined. For example, ribosomes can make people, and people can make integrated circuits. Do we then say that ribosomes can make integrated circuits? In some sense, this is true. In another sense, and in the sense we will use in this paper, it is false. It is fairly obvious how to program a ribosome to make a protein, but it requires a rather complex and non-obvious program to induce a ribosome to make an integrated circuit. We will limit the class of structures that can be built to those that are "obvious," and will not examine the term "obvious" too closely.
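
For concreteness, the 12 elements can be collected into a single specification record. The Python sketch below is our own illustration, not a formal taxonomy: the field names are invented, and the ribosome entries simply paraphrase the discussion above of the ribosome as a degenerate assembler.

    from dataclasses import dataclass

    @dataclass
    class AssemblerSpec:
        computer: str               # (1) type and construction of the computer
        positional_device: str      # (2) type and construction of the positional device
        tip_reactions: str          # (3) chemical reactions that take place at the tip
        tip_transport: str          # (4) transport/modification of compounds to the tip
        buildable_structures: str   # (5) class of structures that can be built
        internal_environment: str   # (6) environment in which it operates
        power: str                  # (7) method of providing power
        barrier: str                # (8) barrier maintaining the internal environment
        external_environment: str   # (9) nature of the external environment
        barrier_transport: str      # (10) transport across the barrier
        external_transport: str     # (11) transport in the external environment
        receiver: str               # (12) receiver for broadcast instructions

    # The ribosome, viewed as a degenerate assembler:
    ribosome = AssemblerSpec(
        computer="none as such; behavior is directed by the mRNA tape",
        positional_device="tRNA selection holds one amino acid in position",
        tip_reactions="a single reaction: peptide bond formation",
        tip_transport="diffusion of enzyme-charged tRNAs",
        buildable_structures="linear polypeptides",
        internal_environment="aqueous interior of the cell",
        power="ATP/GTP",
        barrier="the cell wall",
        external_environment="the surrounding medium",
        barrier_transport="specific membrane transport mechanisms",
        external_transport="diffusion",
        receiver="none",
    )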

The utility of an assembler

Now that we know how to specify an assembler, we might ask: which assemblers are valuable? The ribosome is an obviously valuable member of this class: it allows us to exist! In general, though, what features of an assembler might be viewed as enhancing or reducing its value? What do we want, and what should we avoid?

There are some rather obvious points. First, the external environment in which an assembler operates, item 9 on our list, must be an environment which is physically achievable in some reasonably economical way.

Second, the class of structures that the assembler can build, item 5, should include the assembler itself. If we are to achieve low cost through the ability of the assembler to make copies of itself, it's useful to include this capability on our list of "value enhancing features." This feature is not absolute, for it might be that a "specialized assembler" would be unable to make a copy of itself, but would still be able to very efficiently manufacture simpler structures. Such a specialized device would still be quite valuable, provided that more general devices existed which were able to manufacture it. In general, it is not actually necessary that any specific device be able to make a copy of itself, but it is necessary that a system composed of many different devices be able to manufacture any individual device in the system. Conceptually, however, it is simplest to imagine a "general purpose" device able to make copies of itself, and "special purpose devices" that are deficient in this regard, but can more efficiently make some narrower class of devices. The "general purpose" device also makes a convenient design target and is likely to be a reasonable way to implement a general purpose molecular manufacturing capability.

Third, the class of structures should include many things that we view as valuable. Ideally, the class of structures should include as broad a range as possible. There are, however, tradeoffs involved between the range of things that can be built and the complexity and cost of the manufacturing process. If we wish to build structures that include large amounts of radioactive isotopes, then our assembler must be able to function in the face of high radiation levels. If we forego this ability and limit ourselves to building structures that are not radioactive, then we can use a (presumably simpler and cheaper) assembler that is less able to tolerate radiation.

Fourth, the costs of the feedstock upon which the assembler operates should be low. Cost, of course, is a concept which is foreign to science. The cost of a pound of carbon cannot be found in the periodic table, nor is it even a constant. Fundamentally, however, our objective is to design and build a device which will be useful in the real world, and in the real world cost is a major concern. While our design effort is more complex and will take a longer time than the design of a new car or a new computer, our efforts are still fundamentally cast in the same mold and driven by the same imperatives. While we cannot today make the cost tradeoffs involved in the design of an assembler as precisely or as accurately as an engineer at Boeing can weigh the tradeoffs between aluminum and composites, we can still seek to avoid approaches that seem inherently more costly and favor approaches that are likely to be less expensive.

Fifth, we wish to reduce the total energy involved in the manufacturing process. While this is also a relative goal (we don't know the exact cost of energy, nor the precise ratio of energy costs to raw materials costs) we can again favor the approaches that reduce energy consumption while avoiding approaches that are obviously wasteful.

Sixth, it should be easy to re-program the assembler to fabricate different structures. If our objective is the flexible ability to make a wide variety of products, then re-programming is mandatory. If this process is unduly time consuming or complex, we will have lost significant value.

Seventh, the design should eliminate the risk of extraordinary dangers. Self replicating systems, like other systems, might fail to work correctly and as a consequence cause damage. Unlike ordinary systems, they can theoretically inflict an unlimited amount of damage. They could theoretically, for example, replicate unchecked and destroy the planet[3]. This is clearly unacceptable. The design must completely eliminate this risk. This requires more than a simple "program check" or some possibly circumventable safeguard. We require that the design be inherently safe; i.e., not only must the device as designed be safe, it must continue to be safe even in the face of accidental design errors, errors in handling or transmitting the instructions, etc. It must be robustly safe[30].

A useful class of structures

The above criteria still include a very wide range of possibilities, so it's useful to provide a more specific example to illustrate what we mean.

Diamond is every materials scientist's dream material. It is stronger and stiffer than other materials, has a wider bandgap, better thermal conductivity, etc. etc. It is not surprising, therefore, that we want our assembler to be able to make diamond. We will go further, and require the ability to make structures that are similar to diamond. A simple example would be a block of diamond with atomically precise impurities. The electronic properties of a block of diamond in which impurities were located at specific lattice sites (instead of being diffused statistically throughout the crystal as is done today) would rather clearly be of great value in electronic, semiconductor and computer applications.

A second class of useful "diamondoid" structures would be mechanical. Here, rather than controlling the location of internal impurities, we need to control the precise structure. Mechanical parts must have certain shapes and surface properties. We would like the ability to control the shape and surface properties with atomic precision. Desirable structures would have specific surface structures, specific crystal orientations, and specific patterns of dislocations to provide simultaneous control over the shape and surface crystal orientation. An example of a simple mechanical structure is the bearing shown in figure 3. In essence, this bearing consists of two thin, flat sheets of diamond that have been bent into two tubes. The smaller tube has been put inside the larger tube (this is only a description, not a suggested manufacturing method). While such bearings have not been synthesized (though nested "bucky tubes" have been[7]), both theoretical analysis and computer modeling strongly support the idea that appropriately designed bearings should have barriers to rotation that are well below kT (thermal noise) at room temperature[20] while simultaneously having high stiffness in other directions[1]. In the particular example shown, the bearing surfaces are modified diamond (100). Note that the surface termination uses atoms other than carbon, such as hydrogen, oxygen, nitrogen, sulfur, and other elements that form relatively strong covalent bonds with carbon. This eliminates dangling bonds at the surface. We include structures with such surface terminations in the broad class of "diamondoid," and won't attempt to define this class too rigorously.

Figure 3. A molecular bearing.

Rather obviously, further extensions are both possible and will almost certainly prove highly desirable. For our present purposes, however, we can restrict ourselves to the class of structures for which we have already provided examples. This class includes structures with extraordinarily good mechanical and electrical properties, and provides sufficient flexibility to include the kinds of structures that will be needed in an assembler, e.g., a molecular computer, a molecular mechanical positional capability, etc. This class should be sufficient to satisfy both our second and third criteria.

Tip chemistry

What will our set of "well defined chemical reactions" look like for this class of structures? Today, diamond is often synthesized at relatively low temperature and pressure using CVD (Chemical Vapor Deposition). In this process a highly reactive low-pressure gas is passed over the growing diamond surface. The gas typically includes atomic hydrogen and various species of hydrocarbons. The growing diamond surface is usually terminated with hydrogen. The growth mechanisms have been the subject of fairly extensive studies[22, 23, 24]. A fairly common mechanism for growth involves (1) the abstraction of one or more hydrogen atoms from the surface, leaving one or more dangling bonds, followed by (2) reaction of the dangling bond(s) with a carbon containing species (e.g., CH3, C2H2, etc.). This cycle of abstraction and deposition is repeated indefinitely.

Obviously, if we wish to make an atomically precise diamondoid structure, we cannot rely on reactive molecules in a gas. A gas molecule can react with the surface at any location, and so the synthesis process is statistical and largely uncontrolled. If we are to achieve atomically precise control over the product, we need highly precise positional control over the reactants. The first thought would be to mount a highly reactive molecule on the tip of a positional device. For example, hydrogen abstraction can take place when a hydrogen atom in the reactive gas strikes a hydrogen bonded to the growing surface. In some cases, the result will be an H2 molecule leaving the surface and a dangling bond at the site where the H was removed. Unfortunately, atomic hydrogen is difficult to hold without making it inert. Bonded to the tip of a positional device, a hydrogen atom is non-reactive. If it is not bonded to some structure, then it is difficult to control its position with the required precision. What we desire in a hydrogen abstraction tool is a structure which (a) has one end which is fairly stable, and so can be bonded into a larger "handle," and (b) has a highly reactive end that has a high affinity for hydrogen.
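
The difference between statistical gas-phase chemistry and positionally controlled chemistry can be caricatured in a few lines. In the Python sketch below (illustrative only, with invented names), a surface is a row of hydrogen-terminated sites; a gas-phase reaction strikes a random site, while a positionally controlled tool creates a dangling bond exactly where it is directed.

    import random

    # A surface as a row of sites: 'H' = hydrogen-terminated, '*' = dangling bond.
    def gas_phase_abstraction(surface):
        # A reactive gas molecule strikes a *random* site: statistical control only.
        i = random.randrange(len(surface))
        if surface[i] == 'H':
            surface[i] = '*'

    def positional_abstraction(surface, i):
        # A tool held by a positional device acts only at the chosen site i.
        if surface[i] == 'H':
            surface[i] = '*'

    surface = list('HHHHHHHH')
    positional_abstraction(surface, 3)   # a dangling bond exactly where we want it
    print(''.join(surface))              # -> HHH*HHHH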

The propynyl hydrogen abstraction tool shown in figure 4 has exactly these properties[5, 19]. The carbon atom at the tip is triply bonded to the middle carbon atom, which is in turn singly bonded to the carbon atom at the base. The tip atom is a radical. If the tip atom were bonded to hydrogen, the resulting bond would be very strong: about 130 kcal/mole. Quantum chemical calculations[19] strongly support the idea that this tool will be able to easily abstract hydrogen from most diamond surfaces, and in particular that the barrier to the abstraction will either be small or non-existent. The base of the tool would be bonded into an extended diamondoid "handle" which could then be mechanically grasped and positioned. This tool could be used to remove hydrogen in a site-specific fashion from a diamondoid work-piece, thus setting the stage for a carbon deposition tool to bond one or more carbon atoms at that site. Once the tool had performed its task it could either be thrown out and a new tool created from an appropriate precursor, or the tool could be "refreshed" by removal of the abstracted hydrogen atom[5, 19].

Figure 4. A hydrogen abstraction tool.
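
A rough energy-bookkeeping example may help here. The ~130 kcal/mole bond to the tool tip is the figure quoted above; the ~104 kcal/mole value assumed for a C-H bond on a hydrogenated diamond surface is a typical literature number, used only for illustration.

    # Rough energetics of hydrogen abstraction (illustrative bookkeeping).
    bond_to_tool = 130.0     # kcal/mole: H bonded to the radical tip (quoted above)
    bond_to_surface = 104.0  # kcal/mole: assumed typical surface C-H bond strength
    print(f"abstraction is downhill by ~{bond_to_tool - bond_to_surface:.0f} kcal/mole")

The large exothermicity is why the abstraction barrier is expected to be small or non-existent.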

Several proposals for carbon deposition tools have been made[5]. The general idea, however, is to use tools which bond a gas-phase growth species to a positionally controlled tip, and then to employ a reaction mechanism similar to that which would occur for the gas-phase species during deposition on the surface. It is useful to note that positional control also provides a convenient and very controlled mechanism for providing the activation energy sometimes needed: it is possible to apply mechanical force (push). This option is not normally available in chemistry and so opens up a rich new set of reaction mechanisms[5].

Molecular mechanical systems can be modeled with sufficient accuracy using, appropriately enough, molecular mechanics [8, 9]. Chemical reactions involving the deposition or removal of individual atoms or small clusters of atoms can be modeled by using ab initio quantum chemistry programs[19]. A wide range of molecular structures can be modeled using the software tools of computational chemistry. This lets us think about, design, and model the molecular structures that will be needed in future molecular manufacturing systems[15, 29]. Such "computational experiments" will speed the development of such systems. Just as Boeing can build better airplanes more quickly by first "building" and "flying" them in a computer, so too the development of molecular manufacturing will happen more quickly and with fewer false starts if we use the available computational resources to speed the design and analysis of such systems.
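
To give the flavor of such computational experiments, the sketch below evaluates the simplest molecular mechanics term, a harmonic bond stretch. Real force fields such as DREIDING[8] add angle, torsion, and non-bonded terms; the parameter values here are representative round numbers, not taken from any specific force field.

    # The simplest molecular mechanics term: a harmonic bond stretch,
    # E = (1/2) k (r - r0)^2. Parameter values are representative only.
    def bond_stretch_energy(r, r0=1.54, k=700.0):
        # r and r0 in angstroms (1.54 A ~ a C-C single bond); k in kcal/mol/A^2
        return 0.5 * k * (r - r0) ** 2

    # Stretching a C-C bond by 0.05 angstroms costs roughly 0.9 kcal/mol:
    print(bond_stretch_energy(1.59))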

Maintaining the internal environment

The use of highly reactive species in the "tip chemistry" (item 3 in our list) was dictated by our desire to synthesize diamondoid structures. This in turn forces the internal environment (item 6 in our list) to be highly inert. If the internal environment were not inert, then the highly reactive compounds used for the tip chemistry would react in an uncontrolled way with the environment. The simplest inert environment is a vacuum (although helium or some other noble gas might also suffice). We will adopt the working assumption that the internal environment is a vacuum.

Our desire that the external environment be economical suggests the use of a simple industrial tank containing the appropriate feedstock in the form of relatively simple compounds in solution. While we might impose the requirement that this feedstock be reasonably clean, we do not wish to assume that it is free of all impurities. This also provides a convenient method of providing energy: the tank can contain fuel as well as feedstock.

If we have an internal environment which is vacuum and an external environment which is liquid, and if we are able to build diamondoid structures, then the obvious design for the barrier between the internal and external environments is a diamondoid wall. Such a wall should effectively prevent entry of unwanted contaminants from the external environment into the internal environment. Theoretical calculations tend to support the idea that even atomic hydrogen would be unable to penetrate such a barrier[25]. (Of course, the concentration of atomic hydrogen in even relatively impure fluids is very low, for it is quite reactive. As a consequence, even if it could penetrate a diamondoid barrier, a sufficiently low concentration would make the problem of no concern. If hydrogen penetration does pose a problem, we could use a non-protonated inorganic solvent for our external environment.)

Transport of compounds across the barrier requires a specific transport mechanism. While an airlock mechanism could be used, a simpler proposal was made by Drexler[5]. In this proposal, a rotating cylinder, embedded in the barrier, would have binding sites or pockets on its exterior surface that would alternately be exposed to the exterior environment and an interior chamber by the rotation of the cylinder. Each binding site would selectively bind a specific molecule when it was exposed to the external environment. When the binding site had been rotated to expose it to the interior chamber, the affinity of the binding site for the molecule would be reduced (by, for example, mechanically pushing the end of a rigid rod into the binding site). In this way, the desired molecule would be released into the interior chamber (see figure 5).

A multi-stage cascade would produce extremely high purity in the output of the final stage. The final stage would deliver the desired molecule and only the desired molecule to the interior of the assembler, bound tightly to a high-affinity binding site, in a specific orientation.
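
The power of a cascade comes from the multiplication of per-stage error rates. A simple illustration follows; the 10^-4 per-stage leakage figure is an assumption chosen for illustration, not a calculated property of any particular binding site.

    # Impurity leakage through a multi-stage sorting cascade multiplies per stage.
    eps = 1e-4   # assumed fraction of impurities passed per stage (illustrative)
    for stages in range(1, 5):
        print(stages, "stage(s): impurity fraction ~", eps ** stages)

With these assumed numbers, three stages already reduce the impurity fraction to roughly one part in 10^12.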

While diffusive transport of compounds is precluded internally in the assembler by the nature of the internal environment (e.g., high vacuum, highly ordered), a number of other transport mechanisms are feasible. Conveyor belts, for example, provide a simple and flexible method of transport[5].

In general, modification of the compounds provided in the feedstock might be required before they are suited for any particular use within the assembler. If the compounds provided in the feedstock are provided directly in the form in which they will be used (either for use as tools at the positionally controlled tip, or for some other purpose), then they would require little or no chemical modification and could be transported directly to the point of use. If the feedstock consists of relatively simple compounds unsuited for direct use, they would have to be modified. To simplify the design, it might be desirable to provide all compounds in the feedstock in essentially the form in which they would actually be used. This would place the burden of synthesizing the specific compounds on whoever made the feedstock. If, on the other hand, we wished to provide only a relatively modest number of simple compounds in the feedstock, then there would have to be a corresponding increase in the complexity of the internal processing in the assembler in order to synthesize the required complex compounds from the simple compounds that were provided.

For example, in biological systems it is common to have a relatively complex intermediary metabolism which transforms simple molecules (sugars, inorganic ions) into more complex molecules (amino acids, nucleotides, lipids, etc.). The need for this intermediary metabolism could be sharply reduced or eliminated by providing the desired compounds (amino acids, etc.) directly in the feedstock. This would free the system from the need to manufacture them from scratch, and would simplify the design. On the other hand, it might prove less expensive in some specific case to synthesize a specific compound when it is required, and not bother to provide it in the feedstock. This "make versus buy" decision will initially be dictated by design complexity (e.g., it's easier to buy parts ready made than to build them from scratch). Early designs will likely have limited synthetic capabilities and will be almost completely dependent on the feedstock for the needed compounds. As time goes by, some of the compounds provided in the feedstock will instead be synthesized directly within the assembler from simpler precursors. Ultimately, those compounds that can be synthesized more economically in bulk will not be synthesized internally by the assembler but will simply be provided in the feedstock. Those compounds which are more easily and economically made internally by the assembler will be eliminated from the feedstock.

The computer

While we might wish to use an electronic computer employing diamond as the semiconductor, the size of such a computer might prove larger than we desire. Drexler has proposed a simple molecular mechanical computer which is small (a "lock," roughly the equivalent of a transistor, occupies less than 5 cubic nanometers), operates at room temperature, and is reasonably fast (50 picosecond switching times should be feasible)[27]. A broad range of designs for mechanical computers exist[17, 28]. The components of the molecular mechanical computer could be made from the class of diamondoid structures.
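
The packing density implied by a 5 cubic nanometer lock is easy to work out:

    # Logic density implied by a ~5 cubic nanometer "lock" (figure quoted above).
    lock_volume_nm3 = 5.0
    cubic_micron_nm3 = 1000.0 ** 3                 # 1 cubic micron = 10^9 nm^3
    print(cubic_micron_nm3 / lock_volume_nm3)      # ~2e8 locks per cubic micron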

It might well be that electronic designs will provide the best performance. Future devices using individual electrons to represent 0's and 1's should eventually be feasible[26], and by using reversible logic[16] such electronic devices should be able to achieve remarkably low energy dissipation. Single-electron devices, however, might require low temperatures for reliable operation. The mechanical proposals are conceptually simple, can operate at room temperature, and seem likely to be much smaller than even the most advanced of future electronic devices. The author knows of no electronic proposals which can even approach the 5 cubic nanometers per logic device that Drexler's rod logic should be able to achieve. Therefore, if small size is a major design constraint, then the molecular mechanical proposals will dominate. A likely outcome is that each approach will prove most suitable for some range of applications but less suitable for other applications. High density memories that are slower but store more information per unit volume have their place, as do high speed logic devices that occupy a larger volume but are faster.

Whatever we eventually choose for the computer, we can state with confidence that some form of molecular computation will be feasible. If nothing else, the molecular mechanical proposals will suffice.

Broadcasting instructions

Finally, we require some form of broadcast transmission that an individual assembler can receive. Many approaches are feasible, but a simple mechanical proposal that interfaces easily to the mechanical computational system, and also works well in a liquid environment, is to transmit information acoustically. Each assembler would be equipped with a pressure sensor, and would note the rise and fall of the local pressure. Transmission rates in the megahertz range should be relatively easy to achieve and would be sufficient for the purposes considered here[5].

Broadcasting instructions to the assemblers also means that the memory on board the assembler can be greatly reduced. Each assembler need only remember the small portion of the broadcast which pertains to its particular actions, and can ignore the rest. If the broadcast instructions are repeated periodically then each assembler need remember only enough instructions to keep it busy until the next broadcast is received. In the limit, each assembler could simply execute instructions intended for it as they were broadcast, and do nothing when instructions for other assemblers were being broadcast. Such an architecture is similar to the SIMD architecture of the Connection Machine[31], which decodes and broadcasts instructions from a central source to a large number of extremely simple processors with limited memory and limited capabilities. This approach minimizes the memory that each assembler requires.
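
The essential logic of this broadcast scheme fits in a few lines. In the Python sketch below (names invented for illustration), a central source streams (address, instruction) pairs; each assembler executes only the instructions addressed to it and stores essentially nothing.

    # Each assembler filters a broadcast stream of (address, instruction)
    # pairs, executing only what is addressed to it.
    def listen(my_id, broadcast):
        for address, instruction in broadcast:
            if address == my_id:       # decode: is this instruction for me?
                yield instruction      # execute immediately; store almost nothing

    stream = [(0, "abstract H at site 3"),
              (1, "deposit C at site 7"),
              (0, "deposit C at site 3")]
    print(list(listen(0, stream)))     # assembler 0 acts on its two instructions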

If the assembler were not able to receive broadcast instructions, then it would be necessary for each assembler to have sufficient on-board memory to remember (a) how to build a second assembler and (b) how to build some useful product. In this scenario, a single appropriately programmed "seed" assembler would then have to replicate itself, manufacturing a large number of similarly programmed copies of itself. Only after this large number of assemblers had been built would it then be possible to start manufacturing the product. Manufacturing a new and different product would be slow and awkward, for it would involve initializing a new assembler with a new set of blueprints, then allowing that assembler to replicate into a large number of similarly programmed assemblers, and finally having all the assemblers start building the new product. While feasible, this approach is clearly less economically desirable.

With the use of broadcast instructions, we can build a large number of assemblers before we know what product we want to build. When we finally specify the product to be built, it is a relatively simple matter to broadcast appropriate instructions to the large number of (already manufactured) assemblers, thus directing each one of them to perform the correct task.

Summary of design approach

The picture that emerges from these considerations is an assembler which:

- operates in an external environment consisting of a tank of liquid feedstock and fuel;
- maintains an inert internal environment (a vacuum) behind a diamondoid barrier;
- transports feedstock molecules across the barrier with a cascade of selective binding sites, and internally by conveyor belts;
- synthesizes diamondoid structures using positionally controlled, highly reactive tip chemistry;
- is controlled by a small on-board molecular mechanical computer; and
- receives its instructions acoustically, broadcast from a central source.

Review of design approach

This approach satisfies our basic requirements. Requirement 1 is met, for the external environment is simply a tank that provides a suitable concentration of a reasonably pure industrial feedstock and fuel. Requirement 2 is met, for the assembler is itself made of a diamondoid structure, which can be synthesized by the assembler. Requirement 3 is met, for the class of diamondoid structures includes many structures that are quite valuable (notably strong materials and powerful computers). Requirement 4 is met, for the feedstock consists of relatively simple compounds that can be provided in bulk. Requirement 5 is met, for the fuel source can also be a relatively inexpensive industrial compound which is provided in bulk. Requirement 6 is met, for the acoustically broadcast instructions will allow the assembler to be easily re-programmed.

Requirement 7 is also met. Although the assembler can replicate in the controlled environment of an industrial tank, it requires feedstock and fuel that cannot be found in a natural environment. Further, it is dependent on a stream of broadcast instructions. It has insufficient memory to store the complete blueprints for itself, so without the acoustically broadcast signals it would be unable to replicate itself. Flushed down the drain, the assembler would be cut off from fuel, feedstock, and directions. Such an assembler would not pose the extraordinary risks that are theoretically feasible with self replicating systems.

Conclusion

We should be able to achieve a general purpose molecular manufacturing capability sometime early in the 21st century. Whatever it is called (nanotechnology, molecular nanotechnology, the manufacturing technology of the 21st century, molecular engineering, or something else), it will revolutionize manufacturing. Molecular computers, with individual logic gates and complex interconnection patterns that are literally made with atomic precision, will be manufactured, and manufactured cheaply. The same molecular manufacturing technology should let us inexpensively manufacture almost any other product that is consistent with physical and chemical law, even products with highly complex molecular structures that are both non-repeating and atomically precise.

References

1. Nanomachinery: Atomically precise gears and bearings, by K. Eric Drexler, in IEEE Micro Robots and Teleoperators Workshop, Hyannis, Cape Cod, November 1987.

3. Engines of Creation, by K. Eric Drexler, Doubleday, 1986.

4. There's Plenty of Room at the Bottom, a talk by Richard Feynman at an annual meeting of the American Physical Society given on December 29, 1959. Published in Caltech's Engineering and Science, February 1960.

5. Nanosystems: molecular machinery, manufacturing, and computation, by K. Eric Drexler, Wiley 1992.

7. Helical microtubules of graphitic carbon, by Sumio Iijima, Nature Vol 354, November 7 1991, pages 56-58.

8. DREIDING: A Generic Force Field for Molecular Simulations, by Stephen L. Mayo, Barry D. Olafson, and William A. Goddard III, The Journal of Physical Chemistry, 1990, 94, pages 8897-8909.

9. Molecular Mechanics, by Ulrich Burkert and Norman L. Allinger, ACS Monograph 177, 1982.

10. NASA Conference Publication 2255: Advanced Automation for Space Missions, edited by Robert A. Freitas, Jr. and William P. Gilbreath, National Technical Information Service, U.S. Department of Commerce, Springfield, VA; N83-15348.

11. Theory of Self-Reproducing Automata, by John von Neumann, edited and completed by Arthur W. Burks, University of Illinois Press, 1966.

12. Self-Replicating System - A Systems Engineering Approach, by Georg von Tiesenhausen and Wesley A. Darbro, NASA technical memorandum TM-78304, Marshall Space Flight Center, Alabama, July 1980.

13. How a SIMD machine can implement a complex cellular automaton? [sic] A case study: von Neumann's 29-state cellular automaton, by Jacqueline Signorini, Proceedings Supercomputing '89, ACM Press, 1989.

14. Robotic Engineering: an Integrated Approach, by Richard D. Klafter, Thomas A. Chmielewski, and Michael Negin, Prentice Hall 1989.

15. Molecular Nanotechnology, by Ralph C. Merkle, in Frontiers of Supercomputing II: A National Reassessment, edited by Karyn R. Ames, 1992, University of California Press (maybe).

16. Reversible Electronic Logic Using Switches, by Ralph C. Merkle, Nanotechnology 4 (1993) pages 21-4.

17. Two Types of Mechanical Reversible Logic, by Ralph C. Merkle, Nanotechnology 4 (1993) pages 114-131.

18. Molecular Engineering: an approach to the development of general capabilities for molecular manipulation, by K. Eric Drexler, Proc. National Academy of Sciences USA 78 (1981), pages 5275-8.

19. Theoretical Studies of a Hydrogen Abstraction Tool for Nanotechnology, by Charles B. Musgrave, Jason K. Perry, Ralph C. Merkle and William A. Goddard III, Nanotechnology 2, 1991, pages 187-195.

20. A Proof About Molecular Bearings, by Ralph C. Merkle, Nanotechnology Volume 4, 1993, pages 86-90.

21. Scanning Tunneling Microscopy and Atomic Force Microscopy: Application to Biology and Technology, by P. K. Hansma, V. B. Elings, O. Marti and C. E. Bracker, Science, Vol 242, October 14 1988, pages 209-216.

22. Mechanism for diamond growth from methyl radicals, by Stephen J. Harris, Appl. Phys. Lett. 56 (23), June 4 1990, pages 2298-2300.

23. Growth Mechanism of vapor-deposited diamond, by Michael Frenklach and Karl E. Spear, J. Mater. Res. 3 (1), Jan/Feb 1988, pages 133-140.

24. Detailed surface and gas-phase chemical kinetics of diamond deposition, by Michael Frenklach and Hai Wang, Physical Review B, Vol 43, No. 2, Jan. 15 1991-I, pages 1520-1545.

25. Bond-Centered Hydrogen or Muonium in Diamond: The Explanation for Anomalous Muonium and an Example of Metastability, by T. L. Estle, S. Estreicher and D. S. Marynick, Physical Review Letters, Vol. 58 No. 15, April 13 1987, pages 1547-1550.

26. Single Electronics, by Konstantin K. Likharev and Tord Claeson, Scientific American Vol 266 No. 6, June 1992, pages 80-85.

27. Rod Logic and Thermal Noise in the Mechanical Nanocomputer, by K. Eric Drexler, pages 39-56, in Molecular Electronic Devices, Third International Symposium, edited by F. L. Carter, R. E. Siatkowski, and H. Wohltjen, North-Holland 1988.

28. Charles Babbage: On the Principles and Development of the Calculator, edited by Philip Morrison and Emily Morrison, Dover Publications, 1961.

29. Computational Nanotechnology, by Ralph C. Merkle, Nanotechnology 1992.

30. Risk Assessment, by Ralph C. Merkle, in Nanotechnology: Research and Perspectives, edited by B. C. Crandall and James Lewis, MIT press 1992.

31.The Connection Machine, by Daniel Hillis, MIT Press, 1986.


