Q: Isn't this potentially dangerous? Could it lead to runaway self-replicating systems wreaking havoc or devouring everything? Is it wise to do this?

A: These are valid concerns, and taking them seriously is key to avoiding such scenarios. If, however, those receptive to such concerns adopt the stance of relinquishment [1], they cede all leverage over the development of these powerful technologies to those who disregard the risks. Instead, I adopt the strategy of vigorously developing the safest possible implementations first, preempting anyone who would do otherwise. Here are the basics:

How to Make Sure
Programmable Self-Sufficient, Self-Replicating Technology
Will Be Safe

As with all powerful technologies, the potential for mistakes and misuse exists with both self-replicating technology and molecular nanotechnology. With these technologies, such concerns are particularly heightened by the possibility of self-amplification and runaway processes.

To ensure that these technologies are used safely and not misused, two broad measures are adopted: (1) rigorously maintaining the simplicity of control and the analyzability of systems, and (2) requiring the inclusion of accountable humans at all key decision points.

First, self-replicating systems will be designed with the simplest control subsystems possible in any given case, so that all of their possible behaviors and actions can be clearly understood. Generally, this means that finite-state machines are the preferred control subsystems, and that these are designed to include human oversight at key checkpoints.
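To make the idea concrete, here is a minimal sketch (with hypothetical state and event names; nothing here is prescribed by the text) of a finite-state controller in which every allowed transition is enumerated in a fixed table, and designated checkpoint transitions cannot proceed without an accountable human's approval:

```python
from enum import Enum

class State(Enum):
    IDLE = "idle"
    GATHERING = "gathering"       # collecting raw material
    REPLICATING = "replicating"   # building a copy
    HALTED = "halted"

# Allowed transitions form a fixed table, so every possible
# behavior of the controller can be enumerated and analyzed.
TRANSITIONS = {
    (State.IDLE, "start"): State.GATHERING,
    (State.GATHERING, "ready"): State.REPLICATING,
    (State.REPLICATING, "done"): State.IDLE,
    (State.IDLE, "halt"): State.HALTED,
}

# Transitions that may not proceed without human sign-off.
CHECKPOINTS = {(State.GATHERING, "ready")}

class Controller:
    def __init__(self, approve):
        self.state = State.IDLE
        self.approve = approve  # callable: the human-approval hook

    def step(self, event):
        key = (self.state, event)
        if key not in TRANSITIONS:
            return self.state  # undefined events are simply ignored
        if key in CHECKPOINTS and not self.approve(key):
            self.state = State.HALTED  # fail safe: no approval, no action
            return self.state
        self.state = TRANSITIONS[key]
        return self.state
```

The design choice worth noting is the failure mode: a denied or absent approval halts the machine rather than letting it continue on any default path.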

A finite-state machine is a simple type of information-processing system with a definite set of possible states and well-defined rules for transitions between states. A very simple example of a finite-state machine is a cuckoo clock: each time the clock can represent is a defined state, the single state-change rule is to advance from one state to the succeeding state, and specific actions are carried out in specific states (on the hour). No surprising transitions or actions are ever possible.
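The cuckoo-clock machine can be written down in a few lines. In this illustrative sketch (not part of the original text), the states are the 720 minutes of a 12-hour cycle, the only transition rule advances to the next state, and the action rule fires only in the on-the-hour states:

```python
class CuckooClock:
    STATES = 12 * 60  # one state per minute of a 12-hour cycle

    def __init__(self):
        self.state = 0  # state 0 represents 12:00

    def tick(self):
        """The only transition rule: advance to the succeeding state."""
        self.state = (self.state + 1) % self.STATES
        return self.cuckoos()

    def cuckoos(self):
        """Action rule: on the hour, cuckoo once per hour shown (1-12);
        at all other times, do nothing."""
        if self.state % 60 != 0:
            return 0
        hour = self.state // 60
        return 12 if hour == 0 else hour
```

Because both tables (transitions and actions) are total and fixed, every behavior the machine will ever exhibit can be read directly off the code.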

Second, at the levels of engineering, design, and operation, any decision or step that could pose significant risk must be approved by a community of vetted, trusted peers who are not subject to demands such as those arising from profit maximization or political agendas.

While self-replicating nanotechnology will increase available computing power (and expand the types of feasible computing architectures), facilitating greater progress in artificial intelligence (AI), it would be a mistake to put real-world self-replicating systems under the control of AI. Doing so would combine complexity of control, which makes it difficult for humans to analyze and predict a system's behavior in new situations, with the capacity to multiply, creating the danger of a proliferation of unpredictable, powerful systems that have outstripped our ability to control them.

Similar arguments can be made for evolutionary techniques. While these are useful tools for solving some problems, including engineering problems, it is one thing to use the results of such techniques, run in simulation, to create non-evolving architectures (after careful analysis of the results), and quite another to endow real systems with the capacity to evolve. The latter would combine unpredictable problem-solving with self-replication, which again would create potentially dangerous situations. In contrast, the self-replicating systems described here will be designed with high-precision error detection and correction applied to all copying of information, along with structural and functional tests of products, to ensure that nothing like a mutation can ever arise: any product that fails to meet these criteria is never activated, but instead recycled.
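The copy-then-verify discipline can be sketched as follows. This is a minimal illustration using a cryptographic hash as the integrity check; the function names are hypothetical, and a real system would also employ error-correcting codes and physical structural and functional tests, not just a digital checksum:

```python
import hashlib

def checksum(blob: bytes) -> str:
    """Integrity fingerprint of an information payload."""
    return hashlib.sha256(blob).hexdigest()

def replicate(master: bytes, copy_fn):
    """Copy the master information, activating the copy only if it verifies.

    copy_fn stands in for the physical copying process, which may
    corrupt data. Returns the verified copy, or None (i.e. recycle)
    if verification fails, so a flawed copy is never activated.
    """
    copy = copy_fn(master)
    if checksum(copy) != checksum(master):
        return None  # never activated: the flawed copy is recycled
    return copy
```

The key property is that a corrupted copy has no path to activation: verification failure routes it to recycling, so errors cannot accumulate across generations the way mutations do.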

Thus, the self-sufficient, self-replicating programmable systems introduced here will be directly controlled only by finite-state machines with human operators in the loop at key points, and their designs, programming, and procedures for operational control will be reviewed by a community of accountable peers with the highest degree of transparency possible.

[1] In a 2000 Wired article entitled "Why the Future Doesn't Need Us," Bill Joy put forward relinquishment: the thesis that, at least for those technologies he held to pose the greatest dangers (e.g., AI, biotechnology, nanotechnology), the best way to avoid the dangers is to relinquish any aim to develop those technologies in the first place. Two critical problems with this approach are (1) that compliance would need to be universal, and violations would then most likely come from bad actors, whether regimes, groups, or individuals, against whom we would be left defenseless; and (2) that it fails to weigh the risks (whose consequences could be large but which can be made extremely unlikely) against a broad range of enormous benefits, including the avoidance of other risks such as catastrophic climate change and the prolongation of the mass extinction event already underway. Note that at present no effort at relinquishment of either AI or biotechnology as a whole has a significant chance of success.