Search for the Logo of the Cryonics Society of New York

Recently I received an email from Aschwin De Wolf. The New York cryonics group, he said, wants to revive the old Cryonics Society of New York, which dates back to July 1965 and was the first organization to use the term “cryonics” in its title. (One of the principals back then, Karl Werner, is the one who coined the term “cryonics,” at first intended just as part of the company name; soon, however, it was adopted more generally for the practice that bears this name today.) So Aschwin had a simple request: could I find the old logo of CSNY and send it on?

Well, that proved a challenge. There really wasn’t much in the way of an official “logo” of CSNY that had any widespread use; at least I couldn’t find one in newsletters or other sources. I should note that CSNY had a sister organization, Cryo Span, that handled the actual freezing and storage of patients, while CSNY was a non-profit member organization to which people could donate their remains without complications, to be dealt with, when the time came, by Cryo Span. Cryo Span did indeed have a logo, as shown below, but Cryo Span was not CSNY. (On the other hand, there is no wish to revive Cryo Span, since the physical details of cryonics preservation are now handled by organizations like Alcor and CI.)

Back to the putative logo: the closest approximation I could find was a shoulder patch you can see Curtis Henderson, principal founder of CSNY, wearing in some old photos. That, I decided, would have to do; there was nothing else to put in its place.

Below you see some of the best images I could come up with of this patch, and they aren’t very good. The leftmost image (b-w) is from a brochure and shows the best detail (though a little of the right portion is cut away) but not the color scheme. For that I found an old, partially finished film by Beverly Greenberg (who used the pen name Gillian Cummings) that was copied to a low-definition video format. It shows the freezing of her father Herman at the CSNY/Cryo Span facility in West Babylon, NY, starting in May 1970. Curtis is very prominent in this film, and you often see the patch, though only crudely in the low-definition images; some idea of the color scheme can still be discerned, however.

The design is actually very simple in its broad details: a phoenix with outstretched wings in a circular field, with “C. S. N. Y.” in large letters arching overhead. The phoenix itself is similar to the emblem then used by the Phoenix, AZ Police Department and widely copied in early cryonics literature. A good example is on the back cover of CSNY’s publication Cryonics Reports, Jul. 1968. (This design actually differs slightly from what can be seen in the b-w image but is reasonably close.) From these sources I’ve reconstructed the design shown, hoping it will be of use, perhaps with a bit of tweaking.

From left: Cryo Span logo, brochure and patch; CSNY patch from brochure; CSNY patch from film; ditto; reconstruction by author of (approximate) original design.

The First Cryonics Newsletter

The cryonics movement is often thought to have been launched with the publication of Robert Ettinger’s The Prospect of Immortality by Doubleday, which occurred on June 5, 1964. Before that, however, there was a fledgling cryonics movement (though the term “cryonics” itself would not be coined until August 1965). The driving force behind this first public foray into the physical struggle for immortality was an enigmatic figure named Evan Cooper, who would eventually be lost at sea shortly after he had destroyed his private papers.

Cooper, who was born in 1926, began to think about the freezing idea sometime around 1957, and over the next five years completed a short book, Immortality Physically, Scientifically, Now (PDF), which was privately circulated in small quantity late in 1962. Sometime in the next six months he heard of Ettinger’s independent efforts, which had resulted in a preliminary version of Prospect around the time Cooper finished his book, and the two corresponded. Word was received of Doubleday’s intention to publish an expanded version of Ettinger’s book, and plans were laid for an organized effort to promote the idea. The first organizational meeting (which saw the creation of the Life Extension Society, or LES) was to coincide as nearly as possible with publication of the book. But delay in publication meant that the meeting, held Dec. 28-29, 1963, in Washington, D.C., where Ev Cooper lived, came some months before the book appeared. It was followed by the first issue of the first newsletter devoted to the freezing idea, which was dated January 1964 and bore the title Life Extension Society Newsletter. Three pages in length, it mainly dealt, as might be expected, with the conference that had just been held and the organization formed as a consequence. In January of the following year the title would metamorphose to the inspiring imperative Freeze-Wait-Reanimate, which it then retained.

For over two years LESN/FWR was the only newsletter devoted to the cryonics idea, and it endured for several more years, until issue no. 60, dated September 1969. Many important events would be chronicled in its pages, including the successes with revival of frozen cat brains (September 1965 and later), the wild cryonics conference in early 1966, the first human freezing later that year, and the first freezing under controlled conditions, that of James Bedford in 1967. For a while the newsletter enjoyed huge success, with circulation said to number around 1,000—quite a generous figure even by today’s (2011) standards. This however would not continue for long. LES was beset with problems, which became increasingly severe as interest in it waned and volunteer help became scarce. When it effectively shut down, the newsletter died with it, its issues remaining as an important legacy of early cryonics history.

The complete back issues of the Life Extension Society Newsletter and Freeze-Wait-Reanimate are available in PDF (400 MB) here.

From “For The Record,” Cryonics, December 1990, updated by the author, January 2011.

Is a life worth starting? Some personal views

For life—the life of any sentient creature—to be worth living, there must, as Robert Ettinger has often said, be a preponderance of satisfaction over dissatisfaction. If this overall slant toward good rather than bad is maintained, it seems reasonable that one stands to gain by continued existence. I am not sure what fraction of the human (or other sentient) population achieves this positive balance and will not speculate except to note that by appearances there are many humans who do achieve it, along with other creatures, pets in particular, so at least for them, life is worth continuing. To say that life once started is worth continuing does not, as David Benatar points out, imply that it was worth starting in the first place, or should have been started. But I think that, barring certain problematic cases, it is fair to conclude that a human life at least is worth starting, if there are responsible prospective parents who would like to start it. Here I think it is reasonable to expect that the resulting person will feel that life is overall a benefit, and additionally, that others, the parents in particular, will stand to gain from the new life that has entered their lives. I don’t accept Benatar’s arguments that by and large life is pretty terrible and that people who think otherwise are deluding themselves.

Also I reject his “asymmetry” argument, that it is “good” if a life that would be bad does not come into existence, but merely “not good” rather than “bad” if a life that would be good does not come into existence. (It is easy to see how this asymmetry supports the argument that life should not start in the first place, and Benatar refers to it often.) Benatar’s main rationale for this argument seems to be that, while we would consider someone morally at fault for deliberately bringing into existence someone who would be miserable and just want to die, we would not similarly hold someone culpable who elected not to bring into existence someone who would be happy and want to remain alive. This I think should not be the only consideration, for it is based only on the idea of when we should regard an action as bad, and not at all on when we should regard it as good and commendable. (Why this particular asymmetry?) Instead, weighing both sides of the issue as I think is justified, I would opt for the fully symmetric position that it is “not bad” if a life that would be bad does not come into existence, and similarly, “not good” if a life that would be good does not come into existence. On the other hand, I doubt whether a life that comes into existence would be bad in the long run, given the prospect of immortality, which I think is a possibility through science (see below).

Life does, of course, have its problems, death in particular, that might call into question whether it is worthwhile after all and thus whether the life of any sentient being is worth starting. For this one problem there are a number of possible answers that will be satisfying to different people and thus can serve as grounds for a feeling that life is worthwhile and was worth starting despite one’s own mortality. There is the famous Epicurean argument that death is not really a problem because before it happens it causes no harm, and after it happens there is no victim. There is the Buddhist argument that, more fundamentally, the self is an illusion anyway, so that in fact no persons exist and death never really happens, though bliss can still occur through states of enlightenment which thus are worth seeking. There are various religious traditions that promise an afterlife and a happy immortality for those who prove worthy, or, in some versions, all who are born. Then there is scientific immortalism, which holds that at least substantial life extension through science and technology is possible, so that, irrespective of any supernatural or mystical process, persons of today have more to hope for as they get older than the usual biological ruin and oblivion.

The scientific possibilities for overcoming death come in different varieties that each have their own advocates. Some of these hopefuls, particularly younger ones, focus on the prospect that aging and now-terminal illnesses will be remedied in their natural lifetime, so that they will escape clinical death and need not specially prepare for it. Others who are not so confident have made arrangements for cryopreservation after clinical death, in hopes of resuscitation and cure of aging and diseases when the requisite technology becomes available. Still others hold out for advances on a more cosmic scale that will eventually make it possible to raise the dead comprehensively. (Some possible scenarios for this using multiple, parallel time streams rather than revisiting or recovering a hidden past are considered in my book, Forever for All, and the article at http://www.universalimmortalism.org/resurrection.htm.) The three possibilities are not mutually exclusive, so that, for example, persons who have chosen cryonics may also place varying hopes in the other two. In fact, my personal viewpoint as a scientific immortalist grants some validity to all three possibilities, but I think it is imperative now to be engaged in cryonics, which is almost unique and the clear favorite as a proactive, interventive strategy against death. Passive acceptance of the dying process simply does not feel right, whatever the prospects for near-term medical progress, or on the other hand, resurrections in a more distant, technologically superior future. It goes without saying that I also think future life will be worth living—it should be possible to make it so, if future developments can provide the opportunity.

Review of 'Better Never to Have Been'

Review of Better Never to Have Been: The Harm of Coming into Existence by David Benatar. New York: Oxford University Press, 2006.

“Would that I had never been born” is a lament sometimes voiced in the depth of misfortune, a cry of despair we hope may soon be stilled by something more positive, when the bad things, whatever they are, have run their course. Enter David Benatar, a respected professor of philosophy at the University of Cape Town, South Africa. In the volume here reviewed he offers the extreme view that in fact it would have been better, all things considered, if not one of us had ever existed, or even any sentient life whatever. Life is that bad, he says, and he bases this judgment on certain logical principles along with empirical evidence of the allegedly poor quality of life that most of us are forced to endure in this world. Among the consequences is that no more humans should be born, and the human race (and other sentient creatures) ought to become extinct.

Antinatalism—the viewpoint that the birth of sentient life, human in particular, is bad and ought not to happen—is a recurring theme in history, a noted proponent being the philosopher Arthur Schopenhauer (1788-1860). It can also be founded, as Benatar proposes, on certain assumptions considered reasonable by many people today, particularly those of a scientific, materialist outlook who are not inclined to over-optimism. Among these assumptions is that anyone’s life, overall, is an exercise in futility. Death—eternal oblivion—is the eventual fate of each person, and will happen through the normal aging process if not sooner. (Thus there is no serious prospect of a religious afterlife. Though not stated in the book, it is clear also that radical life extension, whether by imminent medical breakthroughs or through an initial “holding action” such as cryonics, is discounted.) Moreover, the human species will eventually die out, as is the fate of all biological species, so the extinction advocated by Benatar must happen in the end regardless. Another important presumption, in this case justified at length, is that in most people’s lives sorrow and misery predominate heavily over joy and happiness, so that their lives are not worth living.

Benatar denies that any good is done in any act of procreation, even if the life of the offspring is predominantly happy and if that person expresses gratitude for having been given life. The very best that could happen, Benatar says, is that no harm would be done, but only if the offspring never experienced anything bad in his/her entire life, an unlikely prospect. Even then, no good would be done or moral credit accrue in bringing that person into existence—good is done only in not bringing into existence any person who, in the course of his/her life, would at least experience some amount of bad. Harm is done, and in any likely circumstance, unacceptably serious harm, in bringing anyone into the world.

Such arguments seem unpersuasive for any of a number of reasons, and many will also find them offensive. In the matter of family planning, the prospective parents will be motivated by the thought that a child would bring them joy, even as they in turn strive to provide the child with a happy home life and a good upbringing. Overall the child can be expected to be grateful, both during childhood and later in life, something that seems borne out in practice, even if hardship also occurs. As tough as the going may be at times, most people do not feel their parents were morally at fault for having had them, and are not ready to end their lives over any perceived shortcomings in their present situation or future prospects.

Benatar devotes a chapter of his book to arguing, nonetheless, that life as most people actually live it is very bad, suggesting that those who disagree don’t realize just how bad it is and are suffering some kind of delusion. But this raises the question of who is to judge. Turning the argument around, is it not possible that Benatar himself is suffering from depression that clouds his judgment? Natural selection of course favors a brighter outlook: Benatar’s thinking is not conducive to reproductive fitness. Beyond that, it is hard to see that his point of view is more “logical” than a more life-affirming one, both being based, when the rhetoric has run its course, on basic gut feelings about what is pleasant or worthwhile or isn’t, in what relative amounts, and how the mix that occurs in life should be assessed.

Despite life’s alleged wretchedness, Benatar himself is not ready to commit suicide but insists that life once started, his in particular, may be worth continuing even if it should not have been started in the first place. (Sometimes this sort of argument is reasonable. A woman should not be raped, but a child born as a consequence should not be killed.) More generally Benatar’s stance is passive rather than proactive: having children should be legal, even though no one should have them, much as we might favor allowing smoking even though it is medically and socially inadvisable.

Benatar is aware that, despite these limited concessions, his stance will be unpopular and devotes much attention to defending it against various possible lines of attack. Still it is doubtful his arguments will persuade many who are not already strongly leaning his way. The rest of us, surely a robust majority of humanity, will find our varied reasons to demur. Religious people will argue that life is a gift of God, children are a blessing, hardships and sorrows happen but can and will be remedied, all will be well in the end. Secular humanists and others of scientific bent may believe with Benatar that their lives must permanently end, and even accept the eventual extinction of all earthly life, yet still remain optimistic, one of their arguments being that “since life is finite, even sometimes very short, each moment of life, handled rightly, is precious.” Scientific immortalists who are hoping for radical life extension will also discount Benatar’s pessimism, though possibly in an odd way supporting the end of the present human species—in this case, however, by replacing it with something better that includes themselves in an enhanced form.

Meanwhile, an antinatalist movement has grown up that has simple, passive annihilation of the human species as its goal, endeavoring as far as possible to discourage everyone from having more children. In addition to a claimed humanitarian purpose—eliminating suffering as Benatar proposes—there is an environmental motive some endorse, arguing that the earth’s biosphere would greatly benefit if there were no humans to befoul it, as they generally do. Potentially a conflict could erupt between antinatalists and immortalists, who hope to be in the world for a very long time. My feeling, though, is that the antinatalist movement is both unpopular and self-limiting—on both counts, natural selection so wills it. Immortalists in any case are not so much trying to populate the planet as trying to endure as individuals. So probably we should not worry too much. Instead let’s talk to these people. Some of them (Benatar included?) may be willing to rethink their position.

———————————————————————————————————————————————————————————————————————————————————————————————————

About the author: David Benatar is professor of philosophy and head of the Department of Philosophy at the University of Cape Town in Cape Town, South Africa. Though best known for his advocacy of antinatalism in his book Better Never to Have Been, he is also the author of a series of widely cited papers in medical ethics. His work has appeared in such journals as Ethics, Journal of Applied Philosophy, Social Theory and Practice, American Philosophical Quarterly, QJM: An International Journal of Medicine, Journal of Law and Religion and the British Medical Journal.

Ken Hayworth on straight freezing in cryonics

Ken Hayworth’s idea of promoting a fixation-based alternative to brain cryopreservation is something I am highly sympathetic to overall, and I hope some progress in this direction results from the work he is doing and trying to induce others to do. That said, I wanted to comment on Hayworth’s remarks about straight freezing of brain tissue.

“Figure 1B shows the horrific damage (destroyed cells) that occurs when such a slice is ‘preserved’ using a freezing technique typical of those employed early in cryonics. Such damage is clearly irreversible by any future technology and it should come as no surprise that such techniques were flatly rejected by the scientific and medical community.”

While it’s true that straight-frozen tissue as shown looks pretty awful, I think it’s too strong a statement to say that “such damage is clearly irreversible by any future technology” unless you have further supporting arguments. To invoke a relevant analogy, we could run a phone book through a garden-variety shredder found in many offices, and still be able to reconstruct it from the resulting debris. The fact that there is debris remaining with the frozen tissue (as opposed to the cases of decay or burning) means we cannot, without further argument, rule out some sort of reconstructive process using future technology, including nanotechnology. It is also worth noting that with imperfect chemical fixation you run a risk of tissue loss over time that does not occur with cryopreservation; even debris resulting from straight freezing will remain as-is so long as cryogenic temperatures are maintained.

I also note that Hayworth says his proposed plastination could only be done properly if you start with a living patient with a still-beating heart to distribute the initial fixative.

“It is important to understand that the standard fixation and plasticization protocol is started while the animal is still alive. If the animal’s heart is allowed to stop for even a few minutes before the glutaraldehyde is perfused into the vasculature, then the quality of the preservation is markedly reduced. This fact will also be true for any whole brain protocol based on perfusion.”

This of course would be problematic for any procedure to be used on humans; you’d have to treat it as some form of euthanasia.

Deconstructing Deathism

Deconstructing Deathism: Answering a Recent Critique and Other Objections to Immortality

First published in Physical Immortality 2(4) 11-16 (4th Q 2004)

Sour Grapes and Sweet

In Aesop’s ancient fable, the fox seeks the juicy grapes to quench his thirst on a hot, sunny day. Finding them out of reach, however, he concludes “they must be sour.”

The thirst for longer life and better health, which would hopefully extend to a happy existence of indefinite duration, is basic to human nature. Just about everyone has been tempted by these appealing “grapes,” notwithstanding that a substantial extension of maximum human life-span, healthy or not, is quite out of reach at present, and always has been. Mortality is a basic feature of earthly life. Yet humans, who seem to be the first life forms on the planet to understand this, are not happy with it. Yes, it’s “natural,” but our instincts tell us it’s still not “okay.”

The roots of our irrepressible immortalism stretch well into prehistoric times, as is suggested, for example, by the burial of artifacts such as hunting implements with the dead. In more recent though still ancient times, the feeling flowered into major religions that promised the sought-for immortality and a happy future existence. Many of these belief systems are still with us and their adherents total perhaps about half the humans alive today. We see then how the wish for existence beyond the biological limits has survived the intractable difficulties that its practical realization has offered. In recent years, moreover, hopes for death-transcendence have taken on new life through scientific advances that offer possibilities of addressing the problem directly. The mechanisms of aging are being unraveled and eventual, full control of the aging process and known diseases is anticipated by some forward-looking people, along with other life enhancements not previously known. People can meanwhile arrange for cryopreservation in the event of death, in hopes that resuscitation technology will eventually be developed, along with the means to reverse or cure any affliction they may have suffered, including aging itself.

Not everyone, of course, can be counted among the immortality-seekers or supporters, even when the new scientific perspective is taken into account. Among those who freely reject the “grapes” of life extension are a predictable fraction who would find them sour as well. These critics defend a counterproposal of deathism, namely, that not only is one’s eventual demise inevitable and final (the grapes are out of reach) but that this should be seen in a positive light (but sour anyway, so not to worry). A recently published essay in this vein, The Immortal’s Dilemma: Deconstructing Eternal Life by George Hart, (1) offers the opinion that “life can have meaning only if it must end” and argues the case against the prospects for immortality on logical grounds. Such criticism is useful, for it points up difficulties that must be solved if immortality is ever to be realized. On the other hand, the possibility that immortality can be realized, and realized as a desirable and rewarding endeavor for an individual life (so the grapes are reachable and sweet and juicy after all) is not refuted by such arguments, as I shall maintain here. (And yes, I must confess to being one of those whose hopes rest on these grapes being in some way reachable, with emphasis, in my case, on scientific approaches to the problem.) In addition to Hart’s own critique I will also consider more briefly some other deathist arguments that have made their appearance over the centuries. But first some comments are in order about what I think immortality should encompass.

Here I am largely in agreement with Hart himself who (along with many others who have commented on the issues) is not only a materialist and a rationalist, but is also sensitized to certain difficulties of an informational nature that, I think, especially must be addressed. Thus I discount any idea of immortality “outside of time” or any supernatural or mystical process or entity taking part. A person, to exist at all, must always remain part of physical reality as revealed and understood scientifically. I also discount any idea of immortality, whether scientific or not, based on attaining a “final” mental state or a limited repertoire of states and remaining in that condition without significant change. That would amount to what is called an Eternal Return, in which one has only a finite number of subjective experiences, even if repeated endlessly—not true immortality in my view (or Hart’s, once again). An immortal life must avoid this problem of stagnation, instead becoming an endless process of personal growth which, among other things, would allow for continual recall of a growing body of past experiences. Endless personal growth would mean our immortal is continually changing—though not arbitrarily. Actually, this will cause certain arguments against immortality that easily come to mind to lose force, as we shall see, though also raising an additional, challenging problem.

It is worth remarking here that a suitable habitat for endless growth would have to exist, an expanding or already-infinite domain. Ultimately it would seem to resolve into whether information encoding memories, dispositions, and a general record of the past characteristics of the individual could be suitably recorded and organized on an ever-expanding scale. It is not known at present whether our own universe, though it appears to be expanding, could support such a process, but the possibility is not ruled out. To reasonably accommodate one immortal being, such a growth process should also extend to an entire, large population of developing immortals, so that each individual is progressing in more-or-less similar fashion. (This would also allow the addition of new, developing individuals from time to time in unending succession, though the rate of addition, as well as the growth itself, would have to be managed to be consistent with available resources. I should also add that the growth process of each person could survive temporary reversals including some losses or corruption of information, so long as overall trends were suitably robust. Basically, a subset of the information taken in by the individual should accumulate without limit and never be permanently lost or altered.) We shall return to this subject briefly later, in connection with the idea of multiple universes, which, if accepted, will be seen to further strengthen the prospects for some form of immortal habitat.

The developing immortal, then, would acquire experiences which would from then on be available for recall. Such recall would have to happen repeatedly; otherwise a given experience would drop out of consciousness forever at some point and not be part of that individual. A growing body of experiences would have to be recalled or reviewed infinitely often over infinite time to avoid stagnation. This, however, will be seen to raise a further difficulty, as Hart also notes, a problem of dilution. An experience or set of experiences might be recalled only very seldom, even if the recall is infinitely often, in view of the growing body of other material demanding attention. In this way substantial portions of one’s past, or ultimately all portions, may, for practical purposes, be lost from consciousness and not be part of the “self.” But I will argue that this problem too is manageable, or at least cannot be shown not to be. Thus one could either cultivate a tolerance for an increasingly infrequent recall of a given past experience, or actually eliminate the problem by a suitable scheduling of the time spent reviewing personal archives. We shall now examine Hart’s main arguments in more detail.

Stagnation and the Death Wish

An immortal being must persist for an infinite length of time. Hart argues that, during that interval, such a being must at some point find life unbearable and wish to die, and indeed, by implication this must happen infinitely often. So, even if one always changed one’s mind later and again wanted to live, an ordeal of misery and frustration would have occurred, and moreover, must recur, over and over, infinitely often. Why is this? “It is logically possible,” he says, “and given our nature as human beings, it is also empirically possible.” On this basis he concludes that, “[g]iven an infinite period of time, what remains possible during that period of time is certain to occur.” His reasoning is that “[a] possibility that remains open by definition is certain to happen given enough time; otherwise it is meaningless to say that it remains an open possibility if it might never happen even in an infinite period of time.”

My answer to this starts with the concession that, since even an immortal being must be subject to the physical laws that govern reality, the wish to die must remain both logically and empirically possible throughout time—here I agree with Hart. Yet the conclusion that such a wish must occur (and must recur) is still fallacious, because of the assumption of personal growth which, as we noted, is necessary to avoid the problem of stagnation. A person, seen as a developing entity, would not simply be a static construct with fixed probabilities of certain things happening. With a fixed probability an event of given type, assumed independent of other events of the same type, is guaranteed to happen eventually, according to a predictable scheme. For example, suppose a devastating flood has a one percent chance of happening in any one year in a certain locale whose topography is assumed to be fixed. Then the chance of its happening in 100 years is about 64%, and the chance of its happening at least once in 1,000 years is about 99.995%, that is to say, near certainty. (For longer time intervals we come ever closer to perfect certainty.) But by taking proper precautions it would be possible to change the relevant probabilities so that an undesirable occurrence such as this becomes increasingly unlikely. People could, for instance, shore up a system of levees (slightly changing the “fixed” topography) to make a bad flood less likely, and might do so repeatedly or make other changes to further reduce the likelihood.
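
As a quick check of this arithmetic, here is a minimal Python sketch; the only assumption is the fixed, independent one-percent annual chance posited in the example above.

    # Chance of at least one flood, given a fixed, independent 1% annual risk.
    p_year = 0.01
    for years in (100, 1000):
        p_at_least_once = 1 - (1 - p_year) ** years
        print(years, round(p_at_least_once, 5))
    # -> 100 0.63397 and 1000 0.99996: roughly the "about 64%" and
    #    near-certainty figures cited above.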

In the case of wanting to die, one would naturally be interested in reducing the likelihood of such a state of despondency (or reckless curiosity?). Furthermore, the sort of personal growth I envision, which would encompass the whole human—or formerly human—population, should result in ever-increasing, widespread levels of intelligence and capability to deal with problems of all sorts. This is not to say that problems will not occur and persist, and in fact, some problems could become more acute with the increasing levels of sophistication, much as we humans may be said to have more in the way of psychological problems than an earthworm. But certainly the prospect of dealing successfully with the problems cannot be ruled out. So, for example, our immortals could get happier and happier, or more and more firmly resolved to stay the course of living, or both. The likelihood, after a certain point, of a suicidal impulse ever occurring could then be vanishingly small, even though it would never drop strictly to zero.

As an illustration, we may imagine that at some future time the probability of a serious suicidal spell has been reduced to one percent per annum, and that it undergoes a further, exponential decay over time, due to the attention paid to it and the quality of research or personal dedication. With a half-life of 100 years, so that the probability reduces by half every century (though again, it never goes all the way to zero), the probability of there ever being such an episode is not 100 but only 77 percent. A half-life of 50 years will bring the probability down to 52 percent, and one of 30 years will cut it to 35 percent. Going back again to the case of the 100-year half-life, the chance of at least one bad episode happening in 1,000 years is very nearly the same as its ever happening at all, or about 77 percent, but the chance of its happening after this first 1,000 years is minuscule, only about a thousandth of its ever happening at all, or 0.08 percent. With a 50-year half-life, the chance of a bad episode in 500 years is similarly very close to the 52 percent figure for all future time, but the chance of its happening after the 500 years is again a thousandth, in this case, 0.05 percent. And so on. We see then how a favorable outcome—no bad episodes at all over infinite time—becomes a near certainty with the passage of time, even though there is always some tiny chance of the contrary. (I will add that here we have assumed an exponential decay of probabilities, which makes calculations easy, but such a specific falloff is not essential; many other falloff curves will do as well.)
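
For readers who want to verify these figures, here is a minimal Python sketch of the same toy model: a one-percent starting annual probability, halving with the stated half-life, with years treated as independent. The exact outputs differ from the rounded percentages above only by fractions of a percentage point.

    # Chance that a "bad episode" ever occurs, when the annual probability
    # starts at 1% and decays exponentially with the given half-life (in years).
    def prob_ever(p0=0.01, half_life=100, horizon=10_000):
        p_never = 1.0
        for year in range(horizon):
            p_never *= 1.0 - p0 * 0.5 ** (year / half_life)
        return 1.0 - p_never

    for hl in (100, 50, 30):
        print(hl, round(prob_ever(half_life=hl), 3))
    # -> about 0.766, 0.517, and 0.355, close to the 77, 52, and 35 percent
    #    figures quoted above.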

The same sort of argument could be applied against other “inevitable” consequences such as simple physical destruction. Developing individuals will naturally occupy larger and larger volumes of the universe, or at any rate, a larger volume in some cyberspace storing information (extending ultimately to larger spatial volumes). They thus should be able to make themselves progressively immune to such destruction, through storage of backup information and the like, even though a minuscule and diminishing probability of such destruction will always remain.

Dealing with Dilution

The second major argument Hart raises against the feasibility of immortality invokes what I have called the problem of dilution. Basically, the growing individual must eventually dwarf any previous version of itself, both informationally and, since information requires storage space, physically as well. One consequence is that, in one way or another, an immortal must develop far beyond the human level. It is easy to see how this could create problems, though we must also ask if these problems must necessarily be insoluble.

The main problem would seem to be a simple outweighing of earlier information, and thus of the characteristics that defined one’s identity at a particular age. The first century of the life of the individual, for example, will be represented by a finite record of, say, N bits. This archival record must occupy an increasingly small part of the total information content of the individual, say M bits, as growth occurs and M increases. (The N bits could also be copied repeatedly over time as insurance against loss, but would still amount to N bits of real information, so I leave it at that.) In time the N bits will be an utterly insignificant portion of the M bits, say a trillionth part or less. It is an easy conclusion that the significance to the individual of the N bits must be correspondingly tiny. In other words, the first century of your life will be as nothing to what you will have developed into, so that the early person, including yourself today, will essentially be dead, even though information to reconstruct this version of you still survives. (Reconstructing that version, however, would not solve the problem long-term, because dilution would only recur as the new instance of “you” developed and accreted information. Trying to keep “you” alive by periodic recreations, on the other hand, would not work either, because of the problem of stagnation—“you” would just run through a limited repertoire of experiences before dilution once again set in and shut “you” down.)

But wait a minute—must we conclude that dilution would have to be such a problem? Surely not, if we imagine our advanced person has a certain respectful attitude toward the full collection of its past information, and the relationships between the various parts, forming a coherent whole. (This would recount both good and bad times, capture emotional as well as factual content, and be valued for lessons learned through sometimes painful mistakes along with remembered enjoyments.) A librarian does not necessarily think less of the books already on the shelves even when many more titles are acquired. This might hold all the more if the librarian is a scholar who has assembled a well-organized personal library of specially valued books that are consulted and studied from time to time. The scholar may in turn be a historian, and the “books” may include manuscripts and other memorabilia which provide information about historical periods of interest. True, if the library is extensive it may take a while before a given item in the collection is consulted once more, but that would not make it acceptable to discard that item, or necessarily lessen the item’s influence on what the scholar is doing. Finally, if we suppose that some or much of that history is personal history, our “scholar” is starting to resemble our hypothetical immortal. In short, we are not justified in assuming that an infrequent perusal of information necessarily negates the importance of that information in whatever manner it is used, including the complex activities that might be involved in expressing and experiencing one’s identity.

Today we consult books in a library by physically lifting them off the shelf and opening them up, but that is beginning to change with electronic databases, which can be scanned much more rapidly by computer. In the future it should be possible for us to scan our own memories much more rapidly and reliably than at present, to lessen the time between scans of particular archival material. At the same time, as we grow, our thought processes should also deepen, so that more in the way of processing will be required for many commonplace mental activities. This in turn would offer more opportunity for interleaving the occasional references to times past, which will better anchor our sense of who we are by reminding us of where we have come from.

It seems reasonable that past versions of the self would “survive” as we remember the events of times past, that is to say, our episodic memories, and this would have importance in our continuing to persist as what could be considered the “same” albeit also a changing, developing person. But in addition to this mnemonic reinforcement I imagine there would be a more general feeling of being a particular individual, an “ambience” derived from but not referring to any specific past experiences. Ambience alone would not be sufficient, I think, to make us who we are; episodic memories would also be necessary, yet it could considerably lessen the need for frequent recall and thus alleviate the problem of dilution.

Another interesting thought is that certain items might consistently be consulted more frequently than others. (Indeed, would this not be expected?) In this way it would actually be possible to bypass the dilution effect and instead allow a fixed fraction of time for perusal of any given item, even as more items were added indefinitely. A simple way of doing this could be first to allow some fixed fraction of the time for day-to-day affairs and other non-archival work (“prime time”), and spend the rest of the time on perusal of personal archives (“archive time”). The exact apportioning of prime versus archive time is not important here, but it will be instructive to consider how the archive time itself might be subdivided. A simple, if overly simplistic, strategy would be to have half this time devoted to the first century’s records, half the remainder to the second century, and so on. (Since there would only be a finite number of centuries, there would be some unused archive time at the end, which could be spent as desired. Note, however, that in the limit of infinite total time covering infinitely many centuries, the usage of archive time would approach but not exceed 100%.) In this way, then, there would be a fixed fraction of archive time, 2^-n, spent on the nth century’s records, regardless of how many centuries beyond the nth were lived or how many records accumulated. True, this way of apportioning time might not be much good beyond a few centuries; only about one trillionth the total time would be spent on the 40th century, for instance, around 1/300 sec per 100 years. (Possibly a lot could be covered even in this brief interval of about 3 million nanoseconds, however.) But the apportionment scheme could be adjusted.
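
To make the halving scheme concrete, here is a minimal Python sketch; the century-long units and the 2^-n shares are those of the example, and the seconds-per-century figure is ordinary calendar arithmetic.

    # Halving scheme: the nth century's records get the fraction 2**-n of
    # archive time; the shares sum to 1 in the limit of infinitely many centuries.
    SECONDS_PER_CENTURY = 100 * 365.25 * 24 * 3600    # about 3.16e9 seconds

    print(sum(2.0 ** -n for n in range(1, 41)))       # -> just under 1
    share_40 = 2.0 ** -40                             # about 9.1e-13, ~a trillionth
    print(share_40 * SECONDS_PER_CENTURY)             # about 0.0029 s per century,
                                                      # roughly the 1/300 sec cited above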

A more interesting and plausible, if slightly harder-to-describe, scheme would be to choose a constant c > 0 and allot the fraction c(1/(n+c-1) - 1/(n+c)) to the nth century’s records. It is easy to show that the time for all centuries adds up to 100% as before, whatever positive value of c we start with. Starting with c=10 devotes roughly 10% of the total time to the first century, with subsequent centuries receiving a diminishing share as before, but the rate of falloff will be much slower, so that the 40th century still receives about 0.4%, or about 5 months per 100 years, that is to say, 240 million nanoseconds per minute. If we suppose that our immortal settles eventually into a routine in which 10% of the time overall is archive time, there would be 24 million nanoseconds available each minute of life for the 40th century’s memories alone, if desired, with many other centuries getting more or less comparable or greater amounts of attention, and none omitted entirely. This, I think, makes at least a plausible case that a reasonable sense of one’s personal identity could be sustained indefinitely.
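
The telescoping shares can be checked the same way; this minimal sketch uses the c = 10 of the example above.

    # Telescoping scheme: century n gets the fraction c*(1/(n+c-1) - 1/(n+c)),
    # i.e. c/((n+c-1)*(n+c)); summed over all n the shares telescope to 1.
    def share(n, c=10.0):
        return c * (1.0 / (n + c - 1) - 1.0 / (n + c))

    print(sum(share(n) for n in range(1, 100_001)))   # -> very nearly 1.0
    print(share(1))                # about 0.091, the roughly-10% share of century 1
    print(share(40))               # about 0.0041, i.e. the ~0.4% share of century 40
    print(share(40) * 60 * 1e9)    # about 2.4e8 ns per minute, as stated above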

In the above examples the greatest proportion of archive time falls to the earlier records, which might be fitting since these cover the formative years of the prospective immortal and thus matter most for identity maintenance. (Memory recall would also naturally occur during prime time; the emphasis here could be on recent events, to maintain a balance overall.) In summary, then, we have considered ways that the problem of dilution might be successfully managed. Relatively infrequent perusal of memories might still suffice to maintain the necessary continuity with past versions of the self, or proper scheduling could stabilize the frequency of recall and bypass the dilution effect, or both. We see in any case that the problem is not what it may seem at first sight. We have no guarantee, of course, that it would not get out of hand, but certainly some grounds for hope.

More could be said, but the difficulties are formidable, trying as we are to anticipate the possible future before it happens, and how we will deal with our problem of memory superabundance when many new options should have opened up. In that hopefully happy time a “science of personal continuation” should have taken shape to properly deal with the matter. Nay-sayers like Hart try to discount any such prospects once and for all, based on today’s perspectives with their inevitable limitations. We must look to future enlightenment to overturn such summary judgments. I will have a bit more to say on this issue, in the process addressing some other notable, pro-death thinking. But first it will be worthwhile to consider a few additional points raised in Hart’s essay. These again I think offer no fundamental, demonstrated difficulties to the idea of immortality.

Earlier we noted Hart’s bringing up the problem that the would-be immortal may at times undergo a change of feeling and wish for death. While I think we have disposed of his claim that the death wish, to remain an open possibility, would have to actually occur and recur at a serious level, it is also significant that he would allow the option of suicide, supposing such a wish did occur. And here I agree with him, if reluctantly, since a person should have that right. As an aside he seems to think of choosing to be “mortal” as an alternative different from suicide, though he does not explain how. To kill oneself with a slow-acting poison or microbe would still be suicide; would that not hold even if the process took decades and were now “natural,” as in the aging process? Choosing to age and die as we do today when aging can be reasonably controlled and prevented strikes me as a suicidal choice. But, in fairness to Hart, the delay could have significance inasmuch as the subject could undergo a change of views meanwhile, and opt for a reversal or cure. More generally, though, the rather morbid dwelling on a putative, recurring death-wish suggests that Hart may not be so happy with his own life but instead is in some degree yearning for an “honorable” way out. Such an outlook is all too common among people, intelligent thinkers included. All such people should take seriously the prospect of becoming joyful geniuses—or of enhancing their already-existing genius and joyfulness—which future advances should make increasingly feasible.

True, many such people might object that doing this would make them so different it would no longer be them, they would be dead for all intents and purposes—the new person would be someone else. But I seriously doubt this would have to be so, and wish I could persuade these nay-thinkers to give more thought to the matter. A change of mind and heart need not add up to a change of person, with the old dead and gone, but can also be seen as a fulfillment of the old, which is thereby helped to become better than before, as it continues to survive, progress, and enjoy.

Morbidity and Its Remedies

The impression of morbidity in Hart’s thinking is reinforced by his opinions on very long life. “In theory you can imagine without contradiction what it would be like to be alive for a trillion or even a trillion trillion years from now. This thought experiment creates its own horror, one that is mind-numbing and nauseating.” Personally, I find the thought experiment not nauseating but exhilarating! What incomparable wonders one might explore in such long periods, what fascinating problems one might solve! What endearing relationships one might have with others of sympathetic but still differing minds, what great good one might do, with reciprocal rewards for the well-enlightened! Hart offers the thought that life ought to be like a book, which has a beginning, middle sections, and an end. In this way one’s life is “properly framed,” says he, and only in this way can it have meaning. A big problem I see with his analogy is that, while you can appreciate the “framing” and thus the meaning of a book by reading it through to the end, to do it right requires some thoughtful deliberation after you have finished the book. This is not an option you can exercise with your own life, if it too must come to a final stopping-point.

The dreary thought that one’s life needs a “conclusion” seems wrong and misguided to those of us who would like it to continue without end. (A life rightly lived is never rightly, permanently ended, we say in earnest rebuttal.) Yet it does raise the question of what meaningful activity would demand and occupy an infinite future, one in which we can and must progress indefinitely, yet continue always to respect and, in some appropriate measure, identify with our much humbler beginnings. How would an infinite existence be made worthwhile and necessary? Certainly it sounds like a tall order, but is it such an impossibility, assuming of course that the necessary technological advances will occur to at least permit escape from the biological limits that now confine us?

Indeed, from one point of view the issue seems transparently simple. Life ought to be worth living. If life is worth living, it should not come to an end; therefore one ought to be immortal. This, of course, overlooks the details of what one might be doing with one’s life as well as such other features as what sort of society would emerge if individuals were immortal. These matters are impossible to anticipate in detail, but some things can be said with reasonable confidence.

Whatever the details of a life may be, they should be such as to produce meaning and fulfillment—including, most importantly, a reason to continue, to find something always new, interesting, exciting, something from which one can learn. This applies to our limited existence today; it should apply all the more in a hoped-for immortal future. Life should be habit-forming! With the prospects for future betterment, I think it will be, both because there should be so much of interest to experience and know about, and because our means to deal with the problems of lack of interest and other negatives will itself be much greater and more refined.

Another aspect of life being worth living is that it should be worth remembering; as we have already noted, this in particular is necessary to retain a sense of continuity with one’s past and so reasonably sustain one’s personal identity. Pleasure alone thus is not enough. The nature of one’s experiences should be such that thinking of them later causes enjoyment too—a requirement that, I think, should not prove too difficult in the sort of future that seems possible, even though people today often do not seem to value the remembered past.

Finally, what is worth remembering is also worth sharing. Life should be something shared with others so that all in the end will mutually benefit. Of course it must be the “right” others, which will follow if individuals are well-disposed and develop in reasonable ways.

So we see that commonsense notions that apply to life today, even with its present limitations, lead to the conclusion that immortal life, properly conducted, would be good and desirable. This is also bolstered by considering the opposite viewpoint. Could we learn to make peace with death? Could we see in it something other than final ruin and frustration? Could we find meaning in spite of (or because of) the thought of an eventual, permanent conclusion, a restitution once and for all of all our striving and cares? I think all attempts to do so must ring hollow. Knowledge of one’s mortality and its apparent inevitability is not an easy burden for the rational mind to carry. I doubt if belief in one’s impermanence can inspire much real satisfaction, except perhaps for those who view life, fundamentally, as a burden that ought to end. As one such thinker, Hart is hardly alone; a few of the others will now be worth examining, starting with the ancients.

Other Deathist Thinking

The Stoics, prominent in the early centuries c.e., insisted that fear of death, rather than death itself, was the real evil, so that “man must learn to submit himself to the course of nature.” (2) Now, of course, we know that our nature is substantially malleable through our own efforts. The sort of meek submission advocated in earlier times is becoming untenable, and increasingly will be so.

The related, roughly contemporary Epicurean doctrine held that stagnation would invalidate a limitless survival. “[T]here are only a limited number of gratifications, and, once these have been experienced, it is futile to live longer.” (3) To me, this conclusion seems especially specious, even if we limit consideration to a purely intellectual discipline such as mathematics. There are infinitely many mathematical truths to explore, each a separate and unique “gratification” to the rightly disposed, with no simple way to characterize them all—Gödel’s famous undecidability results establish this last property about as solidly as one could ask. Again, too, our nature is malleable, thus allowing for increases in the “number of gratifications” along with other enhancements, to track the reality that obligingly refuses to be trivial.

Buddhism, also very ancient (and still quite active today), considers the “wish for continued existence” a form of “defilement.” (4) This, then, is a moral objection to immortalism, one with which we may respectfully disagree. Buddhism strongly advocates enlightenment; more enlightenment should be possible the longer one lives. In time, I conjecture, such enlightenment will lead to a recognition of the individual person as a coherent concept and something whose continued existence is to be valued and sought.

Turning to recent times, Bertrand Russell, a leading twentieth-century British philosopher, was firmly convinced of the inevitability of death, based on cosmological considerations. If nothing else, he thought, life must eventually and uniformly come to an end in the Heat Death or “running down” of the universe. Not just individuals were doomed but species, civilizations, and in short, the whole enterprise that we know as life, whether earthly or elsewhere in our cosmos, if it should exist there. Russell was not happy with this state of affairs but thought it must be accepted, arguing that “…only on the firm foundation of unyielding despair, can the soul’s habitation…be safely built.” (5) His solution was to downplay the issue. The thought that “life will die out…is not such as to render life miserable. It merely makes you turn your attention to other things.” (6) But this too rings hollow in the minds of many of us. In particular, it invites the question of whether painless, immediate suicide would not be a better alternative than prolonged and distracting efforts at “other things.” Russell does deserve credit for attempting to assess reality as it is, and make the most of what to him inspires “unyielding despair.”

It is worth remarking that Russell’s conclusions about eventual Heat Death with its apparent stifling of all life in the universe have never been ruled out but are not by any means firmly established. The recent discoveries of an apparently accelerating universal expansion have raised new, unanswered questions about the ultimate fate of the universe and any firm conclusions are premature. On the other hand, if we suppose the universe is destined to go down, taking us with it, we can ask if this is the absolute end. Barring the supernatural, many would say yes. However, suppose we accept the idea of pattern survival—that “you” could survive as a duplicate of yourself, possibly located in another universe entirely (and no one has ruled this out). Then clearly the options for survival are broadened so that even a hostile cosmology may not be able to end your existence. Life, not death, could be the ultimate outcome for any individual, who must then make the most of it rather than seeking solace in a cares-erasing oblivion. (7)

John Hick, a prominent contemporary theological philosopher, has also aired misgivings on the issue of eternal survival. His hangup is a variant of the problem of dilution. There must be a limit, he says, to how much we can identify with earlier states in which we were very different. In addition to logistical difficulties of the sort we addressed earlier, Hick considers the diary he composed as a fifteen-year-old (emphasis original): “…I know that it is my diary, and with its aid I remember some of the events recorded in it; but nevertheless I look back upon that fifteen-year-old as someone whose career I follow with interest and sympathy but whom I do not feel to be myself.” (8) This sort of dissociation is, I think, very common and perhaps a majority viewpoint among people today, though not universal. (I for one feel able to identify with my earlier person-stages, even going back to early childhood, despite the many changes.) It is noteworthy that Hick says he does not feel he can identify with his earlier self.

It is not likely that any of the arguments offered here would soon change such a viewpoint. If we must continually change so that, in time, our earlier experiences become those of someone very different, this might indeed prove a fatal impediment, but I do not think it must or will be so. The arguments we have already considered offer a starting point for a more hopeful outlook, but we can go a bit farther, informally, in a direction I find inspirational. Let us consider, then, what sort of beings we might be expected to develop into over a long stretch of time, in which today’s physical limitations would not apply.

Likeable, Joyful Immortals All

Clearly there are many possibilities, but I conjecture that personality types capable of and desiring very long survival will not be so varied or inscrutable as to baffle our understanding today. Instead they should be profoundly benevolent, desirous of benefiting others as well as themselves, and respectful of sentient creatures in general. They will acknowledge that enlightened self-interest requires a stance with a strong element of what we would call altruism. They will be intensely moral, but also joyful in the exercise and contemplation of their profound moral virtues—for an element of joy will be essential in finding life worth living, even as it is today. These joyful, good-hearted beings, then, will be the types to endure, and they will refine their good natures as time progresses, so as to increasingly approximate some of our ideas of angelic or godlike personalities, as endless wonders unfold to their growing understanding.

Beings of good will, who seek what is right and best and strive to develop in wonderful and rewarding ways over unlimited time, always with love, respect, and consideration for others, should not find it hard to feel a kinship with past versions of themselves that also had these attributes. Love must conquer all. The conjectured disinterest in one’s more distant past, then, will be swallowed up in the universal affection and regard for persons in general, past as well as present, which must logically extend to versions of oneself along with others. If we are good enough, then, our everlasting survival, as separate though interacting and considerate selves, becomes morally mandatory and recognizable as such by the advanced beings we shall become. So it is this high calling we must aspire to, and it may well be necessary to our survival. And, I submit, being virtuous and considerate will also make us more accepting of our earlier selves, even if they were less enlightened and rather “different,” or even, in more extreme cases, evil and horribly misguided. The bad in our earlier selves can be acknowledged when we are confident it is cured.

In the future there should be wonders aplenty for the searcher and many paths to pursue in a vast architecture of possibilities. So each of us should be able to develop in interesting and unique ways, with joy accompanying our efforts, including those occasions when we reflect on where we’ve been before and how far we’ve come, something that should both comfort and inspire. Joy will thus help us maintain a reasonable sense of our identity as time goes by. If this course of development can be pursued, the rich diversity of individuals will, I submit, produce greater benefits overall than if all were subsumed in a vast collective enterprise, with individuality devalued or obliterated. As a possible precedent, we may consider how collective enterprises in our own history, and particularly totalitarian governments with centrally planned economies, have been unable to compete with more decentralized, democratic systems. The separate, developing, considerate, immortal ego, then, should have more to offer all around than some form of “nonself” or a fused consciousness.

In our advancement, of course, we should make use of whatever discoveries and technologies may be applicable. Inevitably this will involve risk but “nothing ventured, nothing gained.” In fact I think our deepening understanding will make adaptations possible that would otherwise be out of the question. The elimination of aging and biological death should be accompanied by increased understanding of the psychological difficulties connected with immortalization, with a proliferation of possible remedies. People should have numerous means to deal with various “illnesses” they may have inherited from the mortal past, along with the difficulties they encounter in the course of a hopefully unbounded future.

Notes:

1. URL: http://www.secweb.org/asset.asp?AssetID=333

2. Gerald Gruman, “A History of Ideas about the Prolongation of Life.” Transactions of the American Philosophical Society 56, no. 9 (December 1966), 15.

3. Ibid., 14.

4. Hammalawa Saddhatissa, Life of the Buddha. New York: Harper and Row, 1976, 31.

5. Bertrand Russell, Why I Am Not a Christian, and Other Essays on Religion and Related Subjects. Ed. Paul Edwards. New York: Simon and Schuster, 1957, 107, as quoted in Frank J. Tipler, The Physics of Immortality: Modern Cosmology, God, and the Resurrection of the Dead. New York: Doubleday, 1994, 69.

6. Ibid., 11, as quoted in Tipler, Physics of Immortality, 70.

7. See R. Michael Perry, Forever for All: Moral Philosophy, Cryonics, and the Scientific Prospects for Immortality, Parkland, Florida: Universal Publishers, 2000.

8. John Hick, Death and Eternal Life, Louisville, Ky.: Westminster/John Knox Press, 1994, 410, as quoted in Perry, Forever for All, 470.

A Freezing Before Bedford’s

Physical Immortality 2(2) 7 (2nd Q 2004)

James Bedford’s freezing in January 1967 is usually regarded as the first true cryonic suspension, done immediately after legal death under controlled conditions which, though primitive by today’s standards, may have opened the possibility of eventual reanimation. Yet there was an earlier freezing that, while more problematic from the standpoint of viability, was nonetheless important to the nascent cryonics movement.

An unidentified woman in her 60s from the Los Angeles area, who died sometime around February 1966, was placed in liquid nitrogen storage on April 22 of that year by technicians of Ed Hope’s Cryocare Equipment Corporation, at their facility in Phoenix, Arizona. Ted Kraver, one of the technicians who handled the freezing, has given a detailed recounting.

Prior to freezing, the patient had been embalmed after being dead about 18 hours. She was refrigerated “maybe four or five days later,” and stored for approximately two months at slightly above freezing temperature. Outwardly, at the time of her freezing, she presented an appearance of good preservation. “There was no deterioration we could notice except for a little bit of discoloration in the fingers.” (The brain, however, is a very delicate structure that would not be well-preserved under the conditions that reportedly occurred.) At any rate, the woman was frozen, placed in a horizontal, cylindrical capsule, and maintained at Hope’s facility for several months by periodically adding liquid nitrogen. At about the time Bedford was frozen, relatives of the woman who had been funding her freezing changed their minds, and she was thawed and buried. (As it turned out, Bedford himself would be placed, initially, in another Hope capsule, a more advanced model. He would change capsules several times over the coming decades, always remaining frozen; he has been in his present housing with Alcor since May 1991.)

Some additional details Kraver relates are interesting. In September 1965 he and another engineer, Frank “Rick” Rickenbacker, were working in the technical services department of the AiResearch Manufacturing Company in Phoenix. They had built a large cryogenic test facility for their company and had just completed a year of testing of components for the Saturn S-IVB rocket stage. Both were taken by the whole cryonics concept and decided to get involved with a startup effort by the local entrepreneur, Ed Hope. Over the next two months Ted and Rick constructed their first cryogenic capsule, a large, double-walled, insulated cylinder capable of holding a human being. It was shown at the annual conference of Ev Cooper’s Life Extension Society, held on January 1 of the following year in Washington, D.C. The second model, with some improvements including aluminized Mylar for insulation in place of aluminum foil and glass matte, was finished in time to be used for the freezing just noted. Further details will be found in an article in Cryonics, March 1989.

Despite its problematic nature, the first freezing triggered some jubilation in the fledgling cryonics movement; after years of frustration and some near-misses someone had finally been cryogenically preserved. It was hoped that more progress and more freezings would soon follow.

Historical Steps Toward the Scientific Conquest of Death

Physical Immortality 1(1) 7-10 (3rd Q 2003)

This article is adapted from Chapter 2 of my book, Forever for All, which includes references.

The eighteenth-century Enlightenment was especially significant for its emphasis on progress, which extended to ideas about lengthening life. Earlier it had been thought that extra-long life had already been achieved either in remote antiquity (before the biblical flood, for instance) or in faraway places, or possibly closer to home with the aid of elusive assistance such as a “fountain of youth.” Cornaro and other Renaissance hygienists had begun to develop a new outlook, emphasizing approaches that were more commonplace and accessible (and thus more likely to have any substance at all). Now this viewpoint was dramatically extended by such thinkers as Benjamin Franklin, William Godwin, and Antoine Condorcet, who saw new possibilities for future betterment through a scientific approach, including great prolongation of life by eliminating aging. Science had by then begun to make advances in the direction of extending life. For example, Leeuwenhoek in 1702 had revived rotifers after stopping the life process through desiccation. Science clearly was progressive, and, these thinkers hypothesized, in the future should be able to secure benefits not then possible. Humanity thus might become godlike, shedding its frailty and limitations for something unprecedented and far better. But there was a negative. The world of near-godhood was a world of the future, something that was not imminent. Except for the eventual divine intervention that was widely believed in, there would be no deliverance for those of their time.

This position, personally pessimistic but collectively optimistic, was echoed more starkly in the following century. In the 1870s British explorer-philosopher Winwood Reade, in The Martyrdom of Man, saw a coming age of immortality through the scientific control of biology but denied a personal God or the possibility of resurrection or other escape from death (hence the “martyrdom”). Similar sentiments were expressed a generation later by American physician and neurologist C. A. Stephens, whose book, Natural Salvation, elaborated a philosophy of the same name. Stephens too believed that all those then living must be lost forever—an especially painful thought in view of what would be open to future generations.

A contemporary of Reade and Stephens with a more optimistic outlook was Russian moral philosopher Nikolai Fedorov (1829–1903). Fedorov was a self-taught itinerant schoolteacher who became librarian of the Rumyantsev Museum in Moscow. His manner of life was ascetic, and he regularly turned down more lucrative but distracting employment while taking pains to assist needy students with the funds and provisions he could spare. Fedorov was among the first to seriously consider the possibility of a physical resurrection of the dead through scientific as opposed to supernatural means. He also centered his entire life and work on his ideas of resurrection and developed them into an extensive philosophy. His proposed methods were doubtful by today’s standards but not at variance with known science. For example, individual atoms might be tracked down and repositioned with very sophisticated future devices to reconstruct deceased individuals and restore them to a living state. His focus, however, was understandably not on the technical details, but on the implications for the meaning and purpose of life and the ordering of society. The resurrection, if carried out in full, as Fedorov believed it should be, would restore the bad along with the good. An evil nature, however, is a curable affliction. So when all diseases and disorders, physical or mental, had been cured, all would live forever in a state of love, harmony, and unity. Fedorov saw the resurrection as the “common task” that would unite all humankind in a final, everlasting era of peace and brotherhood.

It was necessary, Fedorov believed, for the resurrection to be engineered by humanity, through rational, scientific means, rather than by a supernatural or transcendent intervention, and to be realized here, in the visible universe, and not some mystical elsewhere. His arguments in this case were moral ones. Fedorov was no atheist but a committed Christian, believing in a transcendent Godhead. He felt, however, that a resurrection brought about by such a power would render humanity’s God-given gifts superfluous. Similarly, if the resurrection must occur somewhere outside this world, then this world is a mistake. The proper role of the Christian Trinity, then, was to inspire or admonish our species, not to solve our problems for us. For this reason the role of the supernatural is really not critical, and Fedorov can be credited with the first philosophy of life in which the important promises of traditional religion, including resurrecting persons of the past, were to be realized through nonmystical means.

Fedorov’s philosophy of the common task, which became known as Supramoralism, was dismissed as impractical or nonsensical. The decades following his death witnessed the bloodiest human confrontations that have ever occurred, the turmoil being especially violent in his homeland of Russia. A widespread horror and distrust of technology (which has never lacked its vocal critics) was nurtured, and many in the turbulent twentieth century longed for a “simpler time” or went so far as to champion the view that there is something necessarily evil about our species and our works.

Not everyone succumbed to pessimism, however, and some even saw in technology a road to salvation that was otherwise lacking. One such optimist was Robert C. W. Ettinger, who grew up around Detroit, Michigan. As a boy in his father’s store he would read the pioneering science fiction periodical, Amazing Stories. The July 1931 issue contained a story by Neil R. Jones, “The Jameson Satellite.” In it, Professor Jameson’s body is chilled at death and placed into Earth orbit, to be revived millions of years later by an alien race, which has also conquered aging and other ailments. To the twelve-year-old Robert, the resuscitation of a human in a future without aging and illness held a fascination that would not be forgotten in the decades to come.

In 1944 Ettinger was wounded in battle in Germany and spent several years recuperating in an army hospital in Battle Creek, Michigan. This offered him the opportunity to write a science fiction story of his own. Published in the March 1948 Startling Stories, “The Penultimate Trump” is about a wealthy man, H. D. Haworth, who is frozen at death and eventually resuscitated, with youth and health restored. In two important respects Haworth’s reanimation differs from Professor Jameson’s: (1) it is planned for by Haworth himself (Jameson simply intended to be well-preserved, not eventually brought back to consciousness); and (2) it is carried out by humans and not through a chance encounter with aliens. To Ettinger this seemed a plausible, real-life approach to personal life extension and betterment. He expected that others with better scientific credentials would soon be working on the freezing idea.

In fact the idea was not new but had a venerable if somewhat checkered history. Ancient Roman writers such as Ovid and Pliny the Elder noted that fish trapped in ice and apparently frozen and dead could sometimes return to life. Experiments in the controlled freezing of organisms were carried out as early as the 1600s, one researcher being English scientist Robert Boyle. He reported the successful reanimation of fish and frogs after brief exposure to subfreezing temperatures, though he was unable to achieve the same results after longer exposures. In the next century English surgeon John Hunter also thought that human life might be extended by this method. In 1768 he reported his experiments on reanimating frozen fish by simple thawing—but these had failed. Still there was progress, both with freezing and with the related technique of desiccation. Both could achieve a limited sort of reversible suspended animation, or anabiosis. By the early 1900s many small creatures such as worms, tardigrades, and rotifers had been revived from an inert and “lifeless” state induced by extreme cold or drying. A Russian experimenter, Porfiry Bakhmetiev (1860–1913), started research with hypothermic mammals, and successfully revived bats cooled below 0° C, but he died before the work had progressed very far.

By the 1940s some modest additional progress had been made. An important innovation with deep freezing was the addition of a protective agent such as glycerol beforehand to reduce the severity of damage. Single cells could then be frozen and cooled to very low temperature with successful resuscitation much more likely, though still not guaranteed. Larger organisms, including mammals such as hamsters, would soon be partly frozen and recovered. A new field, cryobiology, was born.

But beyond such initial success, progress was slow. Little serious attention was paid to the fantastic possibility that Ettinger and others before him had envisioned, of cryogenic storage as a means of defeating death. So in 1960 Ettinger, who had by then earned master’s degrees in both physics and mathematics and become a college professor, set to work again. His first, modest effort was to circulate a short summary of his ideas to a few hundred people in Who’s Who. Response was minimal, so he then set out to write The Prospect of Immortality, which advocated the idea of freezing people and storing them for later reanimation. The first draft of the book was completed in 1962, and an expanded version was offered commercially in 1964. Many thus became aware of the freezing idea. Eight years later Ettinger produced a sequel, Man into Superman, that explored some possibilities for becoming more-than-human. During this time the first freezings of humans for intentional reanimation occurred, a practice that became known as cryonics.

Meanwhile another immortalist pioneer, Evan Cooper, had also hit on the freezing idea and in 1962 had written a short book of his own, Immortality: Physically, Scientifically, Now. Never commercially published, the typed, mimeographed manuscript was privately circulated to a few. Ettinger responded enthusiastically, noting the similarities with his own just-completed book. Cooper’s independent effort contained some original thinking too, drawing inspiration from The Bedbug, a 1928 play by the Russian writer Vladimir Mayakovsky in which a man is frozen by accident and resuscitated decades later using new technology. Another of Cooper’s sources was The Human Use of Human Beings, a nonfictional study by cybernetics pioneer Norbert Wiener in which the human personality is compared to a computer program. The program representing the living person might be transmitted to another body or, in more recent parlance, “uploaded.” The new body could be a natural, biological product or an artificial device, opening considerable vistas for shedding old limitations and entering upon new modes of existence. This, it should be added, is among the possibilities Cooper considered without claiming dogmatic certainty that any of them would come to pass. More generally, a cautious, if optimistic, scientific stance became a hallmark of the developing immortalist movement.

In December 1963 the Life Extension Society was founded in Washington, D.C., with Cooper as president, to promote the freezing idea. The September 1965 issue of the LES periodical Freeze-Wait-Reanimate carried stirring headlines: ASTOUNDING ADVANCE IN ANIMAL BRAIN FREEZING AND RECOVERY …. Dr. Isamu Suda and colleagues, at Kobe University in Japan, had detected electrical activity in a cat brain that had been frozen to –20° C (–4° F) for more than six months and then restored to body temperature. The cat had been anesthetized and the brain removed. The blood was replaced with a protective solution of glycerol prior to freezing; the glycerol was again replaced with blood on rewarming. Not only did the brain revive and resume activity, but the brain wave pattern did not appear to differ greatly from that of a live control. Here, then, was dramatic evidence that cryonics might work, especially if possible future advances in repair techniques were taken into account.
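As a quick check on the Fahrenheit figure just cited (my own arithmetic, not part of Suda’s report), the standard Celsius-to-Fahrenheit conversion gives

\[ T_{F} = \tfrac{9}{5}\,T_{C} + 32 = \tfrac{9}{5}(-20) + 32 = -36 + 32 = -4\ ^{\circ}\mathrm{F}. \]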

But despite such successes and widespread media exposure, cryonics was a difficult practice to get started. Ettinger and Cooper played pivotal roles, and critical contributions were made by others, yet the problems were great. Few who were dying wanted to be frozen, nor did their healthier contemporaries show much interest; support and funding were meager. As for the activists, there was a steady turnover among those initially eager who later lost interest and quit. The casualties even included Cooper himself. Active for a few years, his LES could never complete a primary mission of establishing a cryonics facility, though others succeeded. Cooper left the movement and, indulging a passion for sailing, was tragically lost at sea in 1982.

Progress in actual human freezings, the all-important end product, was slow and uncertain. In April 1966, after several years of failed promotion, a success of sorts finally occurred. An embalmed body was frozen—but only after weeks of above-freezing storage, which was highly damaging to any prospect of reanimation. Relatives maintaining this preliminary suspension gave up after a few months, and the body was thawed and buried. A much better freezing was carried out in January 1967 by a team organized by a California businessman, Robert F. Nelson. In this first, true cryonic suspension, an elderly cancer patient in Glendale, California, was placed in dry ice shortly after death and transferred to liquid nitrogen a few days later. Nelson’s group, the Cryonics Society of California, would freeze several more people over the next few years. But his operation did not meet expenses; nine cryonics patients thawed and were lost, and when relatives sued, Nelson and an assistant were ordered to pay nearly $1 million in damages. Another operation, the Cryonics Society of New York, also folded, though without legal recriminations and despite the heroic efforts of its principals, Curtis Henderson and Saul Kent. Bitter though they were, these failures inspired greater and more careful efforts.

Alcor Foundation was started in 1972 by Fred and Linda Chamberlain after they broke with Nelson’s group. In coming years it would establish a strict funding policy so that suspensions no longer depended on the financial backing of relatives and would also pioneer head-only freezing. (The rationale is that technology that could repair a brain and resuscitate frozen tissue could probably also recreate the missing body from DNA and other clues. Human heads or “neuros” are less expensive to maintain, and none to date has been lost through thawing.)

Progress also brought a new level of effectiveness to the procedures used in cryonic suspension, which must go far beyond simple freezing to protect the tissues as far as possible from the damage of cooling to low temperature. Jerry Leaf and Michael Darwin pioneered better techniques of perfusion with higher concentrations of glycerol prior to freezing. Work by Leaf, Darwin, and Hugh Hixon of Alcor, and Drs. Paul Segall, Harold Waitz, and Hal Sternberg of rival Trans Time, demonstrated the reversibility of the early stages of such procedures. (This was a follow-up of similar work in the 1960s performed by noncryonicist Gerald Klebanoff.) Test animals, chilled to near the freezing point and left cold and apparently lifeless for hours (though not actually frozen), were revived without ill effects. Confidence increased that deep-frozen large organisms, including humans, could also eventually be recovered.

Then suddenly a crisis loomed over legal issues. In December 1987 Saul Kent had his eighty-three-year-old mother, Dora, frozen as a head-only. The woman, in fact, had died at Alcor’s facility in Riverside, California, which prompted a coroner’s investigation. When the frozen head was demanded for autopsy and could not be located, several Alcor officials were taken into custody but were later vindicated in court. A judge ruled that the head was not needed to decide the cause of death and there was no evidence of foul play. A few months after this there was an attempt by the California Health Department to have cryonics declared illegal—also eventually rebuffed in court. The legal challenges cost the small and privately funded Alcor dearly. But cryonics gained respectability both in and outside the state, and it was clear that some were willing to struggle very hard to keep the practice going and keep individual patients frozen.

The legal battle over Dora Kent involved a personal confrontation. I was one of the six Alcor personnel placed in handcuffs on January 7, 1988, and taken to the local police station. There we remained some hours until an attorney determined there was no proper legal ground to hold us—whereupon our restraints were unlocked and we were set free. (One of our number, Carlos Mondragon, alerted the media during the arrest and helped manage this crisis.) There would be anxious days, weeks, and months, however, before the matter would finally be resolved in Alcor’s favor. In general, cryonics has been fortunate to escape the fierce persecution that has often accompanied the more unusual, freethinking movements of the past. But this incident and the subsequent struggle over legality in California were sobering events. Cryonics, a heroic, rational attempt to save and extend the lives of human beings, was not well received in certain “mainstream” quarters. Opponents tried to stop it through legal sanctions rather than recognize its life-affirming potential. Thankfully, their efforts did not succeed.

Another legal battle of a different sort concerned the wish of one person to be frozen. Thomas Donaldson, a Ph.D. mathematician, was diagnosed with a brain tumor in 1988. The tumor, an astrocytoma, was a particularly virulent sort that is usually fatal within a few years. Donaldson had been active in cryonics for many years and wanted to be frozen before he sustained substantial brain damage, though not immediately—radiation treatments had brought at least a temporary remission. But the freezing procedure, when needed, would have to be started while he was still alive. By current legal criteria it would be deemed assisted suicide or perhaps homicide. Donaldson went to court. Unfortunately, narrow legal definitions prevailed and he did not get his wish. (Thankfully, the tumor stayed in remission and Donaldson is still alive and active at this writing; other cryonicists with brain malignancies have not been so lucky.) The case also generated much favorable publicity for cryonics and helped dramatize the plight of those who wish to choose, without interference, the circumstances of what others consider their death.

A tiny yet vigorous and growing cryonics movement now exists, and several organizations, most based in the United States, offer their services. Robert Ettinger was instrumental in starting one of these, Cryonics Institute, and remains active, as do others whose involvement stretches back decades, though some, like Jerry Leaf and (very recently) Paul Segall, have “fallen asleep” and been frozen. Rivalries and contention have sometimes been fierce, as might be expected among the strong-minded individualists that cryonicists typically are, and have split more than one organization, including the largest, Alcor. Still there is consensus that facing the common enemy—death—requires respect for others and a willingness to tolerate diverging views.

Research continues, though still privately funded due to continuing public lack of interest in anything so radical. The ambitious “Prometheus Project” was organized in 1996 by Paul Wakfer to unite the various factions in work toward a common goal, in this case a demonstrated technique for full, reversible suspended animation through low-temperature storage. The project faltered before any research could begin, but subsequent work at California-based Twenty-First Century Medicine, financed by Saul Kent and William Faloon and endorsed by “Prometheans” and others, has yielded significant results. Assisted by this effort, Alcor in 2000 pioneered the use of “vitrification” in cryonic suspensions, in which the damage from freezing is greatly reduced. Work on vitrification continues at Twenty-First Century Medicine, along with a parallel effort at Cryonics Institute in Michigan led by cryobiologist Dr. Yuri Pichugin.

James Bedford, the first person cryonically suspended, remains frozen, along with Dora Kent and approximately four-fifths of the one hundred or so who have been preserved at low temperature. Almost everyone, in fact, who was frozen after 1973 is still frozen today, and probably about a thousand are now signed up for the procedure.

Through cryonics a small part of Fedorov’s great project of resurrection may actually be completed in the relatively near future (thoughtful estimates allow anywhere from 30 to 150 years). It seems clear, to those of us who have accepted it, that cryonics offers a better approach to death than the conventional one of allowing or causing the remains to disintegrate. But as yet very few of the many thousands who die each day are frozen. Concern with the welfare of humanity demands that cryonics—or some form of biostasis—become universal, at least until the happy time that death is no longer a threat. Thus cryonics itself could become a “common task” to reorder society along the lines of peace and life rather than war and death. Though it would take a large investment of resources to maintain many millions of people in frozen storage, it does not appear beyond the productive capacities of the world, particularly if the less-expensive neuro option is used. (Lower-cost possibilities such as high-quality chemical preservation may also offer benefit.) The outcome of such a program could be far more beneficial to humanity than, for example, the diversion of resources into technologies of destruction, something that has occupied a fearful world for a very long time.

Along with cryonics are some related developments that help make its case more credible and offer support to those who might be interested. The work of the pioneering futurist writer and cryonics advocate F. M. Esfandiary should be mentioned. FM, as he liked to be known, was a novelist of some note when he started a new series of futuristic, nonfiction books, advocating transformation of the human species to a higher, transhuman form. Technology would be instrumental, but must be guided by enlightenment. People must come to see themselves as not bound by old ties of race, nationality, or even family and marriage, but instead as enduring members of a global community. Titles in the series were Optimism One (1970), Up-Wingers (1973), Telespheres (1977), and Are You a Transhuman? (1989). To minimize traces of his own nationality (Iranian, though he abhorred all national boundaries), he legally changed his name to FM-2030 for the year in which he would celebrate his hundredth birthday. Though cancer claimed him in 2000, he rests in frozen sleep and may yet awaken to celebrate, if not his hundredth, at least his two-hundredth birthday!

Other efforts focused more on anticipated technological tools. Eric Drexler’s 1986 book, Engines of Creation, argued the case for nanotechnology. This atomic-scale manipulation violates no laws of physics and seems perfectly feasible, in principle, to many thoughtful people, though it has critics too. But it also has many potential applications, among which would be a kind of minute archaeology of a frozen organism. Damaged cells or subcellular structures should be repairable, missing parts replaceable, and the whole restorable to a functioning state, through swarms of tiny, intelligently controlled devices or other tools capable of acting at small scales of distance. A more technical book by Drexler, Nanosystems (1992), offers mathematical arguments for the feasibility of atomic-scale manipulators. An ambitious effort has since been undertaken by Robert Freitas to explore the prospects for curing diseases and extending human life span through developing nanotechnology. The first, massive volume of his projected, three-volume work, Nanomedicine, was published in 1999. Publication of part A of a second volume is now projected for July 2003. Meanwhile the case for nanotechnology is continually being strengthened by the progress being made, particularly with devices such as scanning probe microscopes that can track and position individual atoms and alter individual chemical bonds.

The Foresight Institute was organized by Drexler to promote nanotechnology and publish the latest developments. Other notable developments are cryonics-leaning organizations such as Extropy Institute and the Society for Venturism—both U.S.-based—and the Russian Vita Longa Society. There is also a proliferation of cryonics-related communication through the rapidly burgeoning electronic mail services, including the forum Cryonet. Philosopher and cryonicist Max More, who co-founded Extropy Institute, in 1995 completed a dissertation, The Diachronic Self, that explores issues of personhood and favors cryonics as a means for extending life. The First Immortal, a novel by Jim Halperin, realistically explores the idea of resurrecting people who were frozen, and shows how a coming age of immortality would make life happier and more meaningful.

Forever for All (2000), a philosophical treatise by R. Michael Perry, attempts to tie in cryonics with a larger, cosmological picture. A pattern-based or informational theory of personhood is developed that allows for survival through duplicates or copies. In principle, then, the dead could be resurrected scientifically by recreating or guessing the appropriate pattern or structure, even in the absence of original material or knowledge of its details. Serious difficulties, both philosophical and technical, would have to be confronted before a resurrection project such as that imagined by Fedorov could be attempted, but it is not ruled out in a more advanced future and is even, arguably, inevitable. The conventional wisdom in cryonics that people are “gone forever” unless well preserved at death is thus challenged, in one possible way, on nonmystical grounds. Cryonics nevertheless is strongly advocated on grounds of the expected benefits that would follow from the more straightforward, “historical” resurrection that it should make possible. The wise and well-disposed will choose it, and to such as these will belong the future.