Can You Build a Locomotive out of Helium?
First published in Cryonics, 4th Quarter 2011
Robert Ettinger on Substrate-Independent Minds
Introduction and Afterword by Aschwin de Wolf
Introduction
Robert Ettinger, the “father of cryonics,” was cryopreserved on July 23, 2011. While Ettinger’s book Man into Superman (1972) is considered an important contribution to transhumanism, he increasingly came to recognize that most people do not desire a hard break with the past and resist radical transformation. During the last years of his life he became a vocal critic of ‘mind uploading’ as a means of personal survival and spent considerable time refining his arguments for why mind uploading is unlikely to work. This document organizes excerpts from his last book, Youniverse, and from mailing list messages on the topic of substrate-independent minds. In the afterword, I make a brief attempt to place his contributions in a broader philosophical context.
The title of this document refers to a message that Robert Ettinger sent to the Cryonics Institute mailing list on July 21, 2011. In response to the claim that the human mind is a machine, and that the function of any machine can be duplicated by a machine built of another material, Ettinger asked, “Can you build a locomotive out of helium?”
Mind Uploading
“A large and burgeoning group of scientists, including some of the brightest, believe that—in principle—computers will fairly soon be able to think in the fullest sense of the word. They will be living, conscious entities with feelings and subjective experiences.
“A corollary—many believe—is that your persona could be uploaded into a computer and you could then live an incomparably bigger and better life as a simulation or emulation.
“I think the uploading thesis is probably wrong, although (as usual) it’s too soon to be sure. But the issue is a significant part of modern philosophy, and potentially has enormous practical importance.
“…I am among the radicals in the expectations for AI. But intelligence is not life. It is by no means proven that life as we know it with subjective experience can exist on an arbitrary substrate, such as silicon.” (Youniverse)
Information
“One extreme school of thought holds that information and its processing constitute everything that is important. In particular, you are essentially just a collection of information, including a program for processing that information. Your ‘hardware’—the nervous tissue that embodies and handles the information—is only secondary.
“My conclusion will be that it is not necessarily possible—even in principle—for consciousness to exist on an inorganic substrate, and in fact that it is unlikely.
“Sometimes the doubters are accused of dualism—the increasingly discredited belief that the living and inanimate worlds, or the material and the spiritual worlds, are separate.
“This certainly is not true of me or of many others who question the information paradigm. I am a thoroughgoing materialist and reductionist. I will not feel in the least dehumanized if it turns out the information paradigm is right…I have strong doubts, but they are based entirely on the evidence, or lack thereof.
“The most radical of the ‘strong AI’ people believe that all thinking is information processing, and all information processing is thinking; and they appear to believe that consciousness is just an expression of complexity in thinking.
“People who talk this way must be admired for boldness and strength of conviction, but I think not for clarity of thought.
“The point is, all physical phenomena, all interactions, involve information processing in some sense. But that isn’t all they do. A computer, or a person with pencil and paper, could figure out—describe or predict—what the atoms do, and that would be an analog of the information processing part of the phenomenon; but only the actual, physical atoms can form an oxygen molecule. And to anthropomorphize or analogize ‘feelings’ and ‘thoughts’ into these phenomena is simply unjustified. It amounts to declaring, by fiat, that thinking and feeling are inherent in information processing; but saying so doesn’t make it so.” (Youniverse)
Turing Tests and Zombies
“Alan Turing was a brilliant mathematician and computer pioneer. He played an extraordinary part in winning World War II through his work in cryptography for British Intelligence. He also showed many of the potential capabilities of general computers. But one of the works for which he is most famous is badly flawed or has been badly misused—the ‘Turing test’ for intelligence/consciousness.
“Again, I am a firm materialist and reductionist: I readily concede the possibility that a machine could (conceivably) have life and consciousness. But I deny that we can assume that (inorganic) machines have this potential; and with still more help from Turing I think I can make the case persuasive.
“‘Uploaders’ or ‘upmorphists’ or patternists generally maintain that our identity resides in our information content. Their most extreme position is patently absurd—that ‘we’ literally persist, in some degree, if any of the information about us is preserved, even our writings or biographical data. (Shades of Woody Allen! ‘I don’t want to live on in my works; I want to live on in my apartment.’) Anyone who believes this needs more help than I can provide.
“Turing ingeniously showed that a strip of paper tape marked in squares, with zeroes or ones marked on the squares according to certain rules, along with a simple mechanism for moving the tape and making or erasing marks, could be a universal information processor—i.e., it could accomplish any information processing task that any digital computer (serial or parallel) could do, given enough time. It could even produce any result that a quantum computer might, albeit at a teeny-tiny fraction of the speed.
“You certainly can’t claim that a paper tape (even when it is moving) is alive or conscious! Yet that tape, in theory, could produce any response that a person could to a particular stimulus—if by ‘response’ we mean a signal sent to the outside world, suitably coded. It could converse with perfect fidelity to an individual’s character, and over a teletype could fool that person’s husband or wife.
“My original objection to the uploading assumption was simply that we don’t know anything about consciousness or feeling, hence it is premature to assume that it can exist other than where we know it exists, viz., in organic brains. It is entirely possible that meat machines (as opposed to machines of silicon or metal etc.) have some unique quality that allows the emergence of feeling and consciousness. Until we can isolate and define the mechanisms of feeling—of the subjective condition—we must reserve judgment as to the possibility of inorganic people.” (Youniverse)
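The ‘paper tape’ machine Ettinger invokes in the excerpt above is, of course, the Turing machine. As a purely illustrative sketch, a few lines of Python suffice to simulate one: a tape of marked squares, a head that moves left or right over the tape, and a finite table of rules. The rule table below is an invented toy example (a unary incrementer), not anything drawn from Ettinger’s text.

```python
# Minimal sketch of the "paper tape" machine: a tape of marked squares,
# a head that moves left or right, and a finite table of rules.
# The rule table used below is a hypothetical toy example.

def run_turing_machine(tape, rules, state="start", halt="halt", blank="0"):
    """Run a one-tape Turing machine until it reaches the halting state."""
    cells = dict(enumerate(tape))          # sparse tape: position -> symbol
    head = 0
    while state != halt:
        symbol = cells.get(head, blank)    # read the current square
        state, write, move = rules[(state, symbol)]
        cells[head] = write                # mark or erase the square
        head += 1 if move == "R" else -1   # move one square right or left
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1))

# Toy rules: append one more "1" to a block of 1s (unary "add one").
rules = {
    ("start", "1"): ("start", "1", "R"),   # skip over the existing 1s
    ("start", "0"): ("halt",  "1", "R"),   # write a 1 on the first blank, halt
}

print(run_turing_machine("111", rules))    # prints "1111"
```

Any digital computation can in principle be reduced to such a rule table, which is what gives Ettinger’s point about the moving paper tape its force: universality of information processing is easy to grant, while consciousness is the thing in dispute.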
“Uploaders tend to put faith in the Turing Test for human intelligence, and to believe that zombies cannot exist. Let’s take a quick look.
“Communicating (say) by email, a tester tries to determine whether the testee is a human or a computer program. Passing the test supposedly proves the testee is human or equivalent. But the test is clearly worthless, since it produces both false positives and false negatives. As much as 50 years ago Eliza, a program pretending to be a psychiatrist, fooled many people—false positives. And of course a child or a retarded person could perform below par and produce a false negative. The Turing test is baloney.
“In similar vein, uploaders tend to believe that something which outwardly behaves like a person must be a person. They reject the possibility of zombies, systems that by their actions appear to be sentient but are not. Yet it is often easy to fool people, and, as already noted, programs have fooled people even though no one claims the programs were alive.” (Cryonics Institute Mailing List, September 9, 2010)
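Ettinger’s point about false positives is easy to make concrete. The fragment below is a deliberately crude, hypothetical ELIZA-style responder; the patterns are invented for illustration (Weizenbaum’s 1966 program was somewhat richer), but the principle is the same: a handful of pattern-matching rules can produce superficially attentive replies without anything resembling understanding.

```python
import re

# A deliberately crude ELIZA-style responder: a few invented
# pattern -> template rules plus a canned fallback, nothing more.

RULES = [
    (r"\bi am (.*)",   "How long have you been {0}?"),
    (r"\bi feel (.*)", "Why do you feel {0}?"),
    (r"\bmy (.*)",     "Tell me more about your {0}."),
]

def reply(text):
    for pattern, template in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please go on."                 # fallback when nothing matches

print(reply("I am worried about my future"))
# -> "How long have you been worried about my future?"
```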
Imperfect Simulations
“…any simulation created in the foreseeable future will be imperfect, because it will necessarily reflect current theories of physics, and these are known to be incomplete and almost certainly in error to some extent or in some domains. Whether this would necessarily result in material deviations of the simulation from the course of nature, and in particular whether it would preclude feeling, we don’t yet know. But we do know that the simulation would be wrong, which in itself is enough to justify withholding judgment on the possibility of living computers.” (Youniverse)
Analog Failures
“The uploading thesis depends on the assumption that any organic process in the brain can be duplicated by analog in some other medium, but this not only isn’t obvious; it’s nonsense.
“For example, suppose a certain process depends on magnetism, and all you have to work with are the mechanical forces transmitted by rigid bodies. Can you make an electric motor out of tinker toys? Can you build a synchrotron out of wooden boards and nails? Uploaders think a computer (of the electronic variety) can be a person: how about a Babbage mechanical computer made of rods and gears? Presumably, any kind of information processing and storage can be done by a collection of rods and gears, but could rods and gears conceivably be conscious? I doubt it; not all media are created equal. So it is entirely possible that organic brains have potentialities not realizable anywhere else in the universe.” (Youniverse)
“Just ask yourself what consciousness is—what physical condition or process constitutes consciousness. You don’t know, hence you cannot know that a simulation fills the bill.” (Cryonics Institute Mailing List, September 16, 2010)
Petitio Principii
“It seems to me that all the computer-metaphor people… keep making the same error over and over again—assuming as a premise the very hypothesis they are trying to establish. When the premise is the same as the conclusion, naturally the conclusion follows from the premise. They refer repeatedly to ‘all computational devices’ etc., implying that the brain is just that—another computational device—when in fact that is precisely what is at issue: Is the brain possibly something more than a computational device? The computer metaphor is plausible (and I am not in the least uncomfortable with it) but plausibility isn’t proof.” (Youniverse)
The Map is not the Territory
“Adherents of the ‘information paradigm,’ I believe, are deceived in part by glibness about ‘information’ and hasty ways of looking at it.
“Apparently it needs to be said again and again: a description of a thing or a process—no matter how accurate and how nearly complete—is not the same as the thing or the process itself. To assume that isomorphism is enough is just that—an assumption, not self-evidently permissible.
“Even though (for example) a computer program can in principle describe or predict the behavior of a water molecule in virtually all circumstances, a water molecule for most purposes cannot be replaced by its description or program. If you pile up 6.02 × 10²³ computers with their programs, you will not have 18 grams of water, and you will have a hard time drinking it or watering your plants.” (Youniverse)
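The figures in this passage are simply Avogadro’s constant and the molar mass of water. As a back-of-the-envelope check (standard textbook values, not taken from Ettinger’s text), the pile of computers he imagines corresponds to the number of molecules in one mole, i.e. roughly 18 grams, of water:

$$
N = \frac{m}{M}\, N_A = \frac{18\ \text{g}}{18\ \text{g/mol}} \times 6.02 \times 10^{23}\ \text{mol}^{-1} \approx 6.02 \times 10^{23}\ \text{molecules}.
$$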
“Eliezer Yudkowsky (and other uploaders) claim that mapping a system results in a map that effectively has the same properties as the original. Well, look again at one of my counter-examples. I write down with pencil and paper the quantum description of a hydrogen atom in its ground state. It could hardly be more obvious that the marks on paper do not constitute a hydrogen atom. And if you put side by side two papers describing two hydrogen atoms, they will not combine to form a hydrogen molecule. In principle, of course (the math is difficult), you could write down expressions corresponding to the formation of hydrogen molecules from hydrogen atoms, but you will still have just marks on paper.
“Once more, a simulation is just a coded description of a thing, not the thing itself.” (Cryonics Institute Mailing List, September 18, 2010)
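The “quantum description” Ettinger imagines writing down can indeed be startlingly compact. As an illustration only (these are standard textbook results, supplied here to make his point vivid, not taken from his text), the ground-state wavefunction and energy of the hydrogen atom fit in a single line of symbols on paper, where $a_0$ is the Bohr radius:

$$
\psi_{100}(r) = \frac{1}{\sqrt{\pi a_0^{3}}}\, e^{-r/a_0}, \qquad E_1 = -\frac{\hbar^{2}}{2 m_e a_0^{2}} \approx -13.6\ \text{eV}.
$$

As Ettinger says, putting two such lines side by side does not yield a hydrogen molecule.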
Identity
“The term ‘identical’ is used in different ways by different people. To some, two systems are identical if they differ only in location, e.g. two hydrogen atoms in ground state. But I have pointed out that a difference in location necessarily implies other differences as well, such as gravitational fields. Hence my position is that, if the question arises, are A and B identical, then they are not.
“If two systems differ in spatial or temporal location, then they may be identical to most observers for most purposes, but survival of one does not imply survival of the other. Suppose you, as you are now according to local observation [call this A], also exist at a great distance in space or time (either past or future), just by accident [call the distant duplicate B]. I see no reason for the survival of B to imply the survival of A.” (Cryonics Institute Mailing List, September 16, 2010)
Afterword
Robert Ettinger presented a number of distinct arguments (no fewer than fifteen, by his own count!) against mind uploading, and I cannot pretend to have covered them all in this document. I think, however, that a number of core positions associated with Ettinger’s argument can be stated quite succinctly.
- Whether mind uploading is possible is ultimately an empirical question and cannot be settled conclusively by analogies or thought experiments.
- A description of a material object is not necessarily the same as the object.
- A simulation must be erroneous because the program is necessarily based on our incomplete knowledge of physics.
- Consciousness may be substrate-dependent.
- A copy of a person may not constitute personal survival.
The common denominator that runs through Ettinger’s critique of substrate-independent minds is a thoroughgoing empiricism about knowledge. Ettinger does not categorically rule out the feasibility of mind uploading, but he takes people to task for making dogmatic claims on these topics in the absence of empirical corroboration.
Ettinger was particularly irritated by the claim that materialism commits a person to the acceptance of mind uploading. He could not see how a rejection of the soul excludes the view that certain materials are uniquely suitable, or even exclusively suitable, for a certain function. One might add that it is even conceivable that the mind is substrate-independent but that existing organic chemistry provides the most versatile basis for advanced consciousness and survival.
Most of the issues that Ettinger was concerned about may be resolved by the time he is resuscitated, but it is possible that some of the issues at stake in this debate are ultimately unfalsifiable, or even pseudo-problems. For example, how could we settle the question of whether a copy is “really you”? Obviously, a copy of someone will always affirm that (s)he is really him- or herself, but that is of little help in resolving the question. Similarly, we may never be able to conclusively verify (or falsify) that a computer has consciousness or feelings. It is even conceivable that new super-intelligent life forms will replace humans without being conscious or having feelings: evolution selects for fitness, and whether fitness implies consciousness is an open question.
So who is right, Robert Ettinger or his critics? I think what captures Ettinger’s perspective the best is to say that if you expect an answer right now, you have not paid close attention to his argument.