What if senolytics fail?

There is a school of thought within the life extension movement that favors prioritizing the promotion of cryonics over anti-aging efforts. There are a number of arguments for this. A technical argument has been put forward in Thomas Donaldson’s seminal article “Why Cryonics Will Probably Help You More Than Anti-aging.”

The most rigorous test to determine whether an anti-aging therapy works entails giving it to a group of people and determining whether these people live longer (without any detrimental side effects). The timescales involved do not permit rapid progress in the field. Aiming for outright rejuvenation might be a better strategy because it allows for more short-term objective metrics to be used. Some of these metrics are common sense (athletic performance, skin appearance, cognitive tests, etc.); others are more controversial (biochemical "biomarkers" of aging).

Wherever one comes down in this debate, it cannot be denied that cryobiological research can be pursued in a more precise, time-efficient manner. For example, if you want to determine whether a vitrification solution resists freezing when it is cooled to cryogenic temperatures, you need no more than a day to perform the experiment and document the results. This vitrification solution can then be introduced to an organ to determine whether the organ can be vitrified and recovered without ice formation.

This is not just conjecture. Since the mid-20th century a small number of dedicated cryobiologists have solved the problem of designing cryoprotectants that do not freeze at realistic cooling and warming rates. Major progress has been made in mitigating the toxicity and chilling injury associated with those cryoprotectants as well. It is important to keep this in mind when cryonics advocates are taken to task for not making as much progress as the people in the anti-aging field.

Another advantage of the field of cryobiology is that most of its findings hold across all popular mammalian animal models. Phenomena such as cryoprotectant toxicity and cryoprotectant-induced brain shrinking are observed in both small and large animal models. In aging research, however, the important role of evolution and genetics makes translating results from small animal models to humans a lot trickier. After all, an evolutionary perspective on aging needs to explain different lifespans across animals and species (and even within them). An intervention that prolongs life in a small animal may confer only minor health benefits in humans (as appears to be the case with caloric restriction).

On a conceptual level the major figures in the life extension advocacy field cannot even agree on what aging is (put Aubrey de Grey, Michael Rose, and Joshua Mitteldorf in one room and see the sparks fly!) and the field is not immune to succumbing to one fad after another (while believing that this time it is for real). Part of this problem is related to the lack of objective, short-term measures to determine the effectiveness of an anti-aging treatment in humans. If it were possible to assess the effectiveness of an anti-aging therapy in a quick and unambiguous manner, one theory of aging might be more easily favored over another.

Recent developments in the field of biomarkers of aging and "aging clocks" have given hope to those who believe that it will now be easier and more time-efficient to determine the effectiveness of an anti-aging intervention. As of this writing, there are several different biomarkers of aging and there is no consensus on whether these measures capture all the important aspects of aging. In fact, whether one clock is favored over another is itself reflective of one's perspective on what aging is, which brings us back to the fundamental disagreements that continue to divide biogerontologists. One thing these biomarkers of aging will not be able to tell us is whether an intervention is effective and safe in the long run, or whether the maximum human life span would be altered by a specific intervention.
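To make the "aging clock" idea concrete, the sketch below shows how such clocks are commonly structured: a penalized linear regression from biomarker measurements (for example, CpG methylation levels) to chronological age. This is a minimal illustration in Python on synthetic data, standing in for the general technique rather than any specific published clock.

```python
# Minimal sketch of an "aging clock": a penalized linear model that predicts
# age from biomarker measurements. All data here are synthetic; real clocks
# are trained on thousands of biological samples.
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_markers = 500, 200
ages = rng.uniform(20, 90, n_samples)

# Simulate markers that drift linearly with age, plus measurement noise.
weights = rng.normal(0.0, 0.01, n_markers)
X = ages[:, None] * weights[None, :] + rng.normal(0.0, 0.05, (n_samples, n_markers))

X_train, X_test, y_train, y_test = train_test_split(X, ages, random_state=0)
clock = ElasticNet(alpha=0.001, max_iter=10_000).fit(X_train, y_train)

# "Age acceleration" is the gap between predicted and chronological age,
# the quantity an anti-aging intervention would hope to reduce.
error = np.abs(clock.predict(X_test) - y_test).mean()
print(f"mean absolute error: {error:.1f} years")
```

Note that even a clock with low error against chronological age says nothing, by itself, about an intervention's long-term safety or its effect on maximum lifespan, which is precisely the limitation discussed above.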

It cannot be emphasized enough that, as of this writing, not one single anti-aging biotechnology has been demonstrated to extend the maximum human lifespan, let alone produce unambiguous evidence of rejuvenation. This should have a sobering effect on dispassionate observers of the field, but it is no exaggeration to claim that many movers and shakers in the field are not dispassionate and are actually prone to embracing the next big thing, which often generates (predictable) cycles of great enthusiasm and disillusionment.

The current big thing in the anti-aging field is the identification and validation of senolytics. Since the clearing of senescent cells is one of the pillars of the SENS program, the success of this approach will have important consequences for the "aging as damage accumulation" school of aging. Billions are flowing into this field in anticipation of successful biomedical applications. So far, the results in small animal models look modestly encouraging and human trials have shown mixed results. The failure of a major phase II study for knee osteoarthritis is not encouraging, and no doubt supporters will claim that systemic administration of senolytics is the way to go. Or that this was the wrong kind of senolytic. Or that the dosage and administration frequency were not right. Or that senolytics are necessary but not sufficient to produce meaningful anti-aging results, etc. Not to speak of the possibility that senescent cells can also play a positive role (like the much-dreaded "free radicals" of older anti-aging efforts).

At some point it would behoove the life extension community to seek a better balance between the funding of anti-aging therapies and the funding of (applied) cryonics research. Many wealthy people prefer to fund anti-aging research because it captures their hope that they will not have to die at all. Anti-aging therapies also offer a more attractive investment potential, which is often mistaken for evidence that the field is further advanced than biopreservation technologies. And let us not ignore the obvious point that for many very old people the rejuvenation approach will not come in time.

Given enough time, all people will suffer a fatal accident, major trauma, or a type of (infectious) disease for which there is no treatment available (yet). For this reason alone, a comprehensive life extension plan should include biopreservation arrangements to ensure long-term survival.

What if senolytics fail? I suspect this would produce a major disillusionment within the growing anti-aging biotechnology field and the SENS program in particular. A prudent approach would be to work from the premise that many of these therapies won't work, or will have only modest effects, and also invest in an evidence-based cryonics infrastructure so that, in principle, all people can access rejuvenation technologies regardless of health condition or age. One of the attractive features of medical time travel is that it can transport today's people to a time when rejuvenation biotechnologies are fact, not hope.

[In part 2 of this series, we will delve deeper into the field of biogerontology, its complexities, and how to prevent wasteful research spending….]

Who’s Leaving Whom?

It is well established that cryonics can be a formidable source of division within families. A classic example is the claim that a person who makes cryonics arrangements has reduced the amount of money available to spend on other goods and services—and will ultimately leave less money behind after passing away. We may think that such a perspective leaves little room for financial autonomy and tolerance in a family, but experience confirms that many families operate exactly like that.

A related non-financial argument is that a person who makes cryonics arrangements is “selfish” by going it alone and leaving his family to die. Naturally, this argument can be turned on its head. A friend of mine once stated that, given the interest of her boyfriend in cryonics, the decision not to make cryonics arrangements herself would be akin to a decision to (eventually) abandon him. From the perspective of a cryonics advocate this argument can be further strengthened. If one believes that a cryonics patient is not dead, the decision not to make cryonics arrangements would be akin to walking away from someone who is critically ill (or in a coma).

In the examples so far we have faced a situation in which one person responds to the decision of another person. In many cases, however, the decision whether to make cryonics arrangements is the subject of joint deliberation. If we approach the subject from the perspective of not wanting to abandon a loved one there are a number of good reasons to decide in favor of a family making cryonics arrangements.

First of all, the decision not to make cryonics arrangements will lead to a predictable outcome: death (at least for the foreseeable future). And death is not a joint experience but the cessation of a family as a living entity. Why would a family voluntarily put a predictable expiration date on its existence?

Secondly, family members usually do not die at the same time. This applies not only to children but to couples as well. Couples think that the best they can do is to stick together "till death do us part." In principle, cryonics can break with this tradition by placing one person in cryopreservation (and eventually both of them). While the relationship of the "survivor" to the cryonics patient is not identical to both being alive, it is a whole lot better than throwing them in a hole or burning them because today's medicine is not able to sustain them.

But what if we consider a whole family making cryonics arrangements and some make it while others do not? This is indeed a heart-wrenching scenario, but these kinds of things happen in mainstream life, too. Survivors usually do not respond by taking out the whole family but mourn, remember, and pick up the pieces. A more dispassionate response is to say that some family members surviving is still preferable in that the surviving person's situation is improved (compared to being clinically dead) without worsening the situation of the non-survivors (who are now non-existent). It is also important to emphasize here that survival is not an external event that "just happens." We can do a lot to improve the probability that a whole family sticks together by executing the right paperwork and ensuring that younger family members will be able to take advantage of rejuvenation biotechnologies.

There are examples of individuals and families who made cryonics arrangements pretty much upon hearing about the idea. Most families take a little more time (or never get to it). One good piece of advice is to take out life insurance for the whole family while rates are still affordable (especially for very young children). For most families there are very good general reasons to take out life insurance, such as providing financial stability for the surviving partners and children. So getting life insurance is a good idea while the conversation about the subject continues. It is not trivial to make last-minute cryonics arrangements, but it is impossible to get life insurance for a person who is dying or already dead and most people cannot pay for cryonics in cash.

Originally published as a column in Cryonics magazine, May, 2014

Multiple Sclerosis and Human Enhancement

Multiple sclerosis is a disease that raises a lot of interesting questions for people interested in biogerontology, human enhancement, and even cryonics. It raises questions about immunosenescence and draws attention to possible immune improvements for biological human enhancement. Biotechnologies to induce myelin repair may even be useful for the repair of cryopreserved brains. Before I discuss multiple sclerosis from these perspectives, let us take a closer look at this medical condition.

Multiple sclerosis (MS) is an inflammatory autoimmune disorder of the central nervous system that results in axonal degeneration in the brain and spinal cord. In simple terms, multiple sclerosis is a disease wherein the body’s immune system attacks and damages the myelin sheath, the fatty tissue that surrounds axons in the central nervous system. The myelin sheath is important because it facilitates the conduction of electrical signals along neural pathways. Like electrical wires, neuronal axons require insulation to ensure that they are able to transmit a signal accurately and at high speeds. It is these millions of nerves that carry messages from the brain to other parts of the body and vice versa.

More specifically, MS involves the loss of oligodendrocytes, the cells responsible for creating and maintaining the myelin sheath. This results in a thinning or complete loss of myelin (i.e., demyelination) and, as the disease advances, the breakdown of the axons of neurons. A repair process, called remyelination, takes place in early phases of the disease, but the oligodendrocytes are unable to completely rebuild the cell’s myelin sheath. Repeated attacks lead to successively less effective remyelinations, until a scar-like plaque is built up around the damaged axons.

The name multiple sclerosis refers to the scars (sclerae—better known as plaques or lesions) that form in the nervous system. These scars most commonly affect the white matter in the optic nerve, brain stem, basal ganglia, and spinal cord or white matter tracts close to the lateral ventricles of the brain. The peripheral nervous system is rarely involved. These lesions are the origin of the symptoms during an MS “attack.”

In addition to immune-mediated loss of myelin, which is thought to be carried out by T lymphocytes, B lymphocytes, and macrophages, another characteristic feature of MS is inflammation driven by T cells, a kind of lymphocyte that plays an important role in the body's defenses. In MS, T cells enter the brain via disruptions in the blood-brain barrier. The T cells recognize myelin as foreign and attack it, which is why these cells are also called "autoreactive lymphocytes."

The attack on myelin starts inflammatory processes which trigger other immune cells and the release of soluble factors like cytokines and antibodies. Further breakdown of the blood-brain barrier in turn causes a number of other damaging effects such as swelling, activation of macrophages, and further activation of cytokines and other destructive proteins. These inflammatory factors can lead to or enhance the loss of myelin, or they may cause the axon to break down completely.

Because multiple sclerosis is not selective for specific neurons, and can progress through the brain and spinal cord at random, each patient’s symptoms may vary considerably. When a patient experiences an “attack” of increased disease activity, the impairment of neuronal communication can manifest as a broad spectrum of symptoms affecting sensory processing, locomotion, and cognition.

Some of the most common symptoms include: numbness and/or tingling of the limbs, like pins and needles; extreme and constant fatigue; slurring or stuttering; dragging of feet; vision problems, especially blurred vision; loss of coordination; inability to walk without veering and bumping into things; weakness; tremors; pain, especially in the legs; dizziness; and insomnia. There are many other symptoms, as well, such as loss of bowel or bladder control, the inability to process thoughts (which leads to confusion), and passing out. Some MS patients lose their vision and many lose their ability to walk. The symptoms are not necessarily the same for all patients and, in fact, an individual MS patient does not always have the same symptoms from day to day or even from minute to minute.

One of the most prevalent symptoms of MS is extreme and chronic fatigue. Assessment of fatigue in MS is difficult because it may be multifactorial, caused by immunologic abnormalities as well as other conditions that contribute to fatigue such as depression and disordered sleep (Braley and Chervin, 2010). Pharmacologic treatments such as amantadine and modafinil have shown favorable results for subjective measures of fatigue. Both drugs are well tolerated and have a mild side-effect profile (Life Extension Foundation, 2013).

It is estimated that multiple sclerosis affects approximately 85 out of every 100,000 people (Apatoff, 2002). The number of known patients is about 400,000 in the United States and about 2.5 million worldwide (Braley & Chervin, 2010). In recent years, there has been an increase in the number of identified multiple sclerosis patients, with about 50 percent more women reporting the disease. Indeed, between two and three times as many women have MS as men. Most patients are diagnosed between the ages of 20 and 50, but MS can strike at any age (National Multiple Sclerosis Society, 2013).

Incidence of multiple sclerosis varies by geographic region and certain demographic groups (Apatoff, 2002; Midgard, 2001). There is evidence that worldwide distribution of MS may be linked to latitude (Midgard, 2001). In the U.S., for instance, there is a lower rate of MS in the South than in other regions (Apatoff, 2002). Data regarding race shows 54 percent of MS patients are white, 25 percent are black and 19 percent are classified as other (Apatoff, 2002).

There are four disease courses identified in MS:

Relapsing-Remitting: Patients have clearly defined acute attacks or flare-ups that are referred to as relapses. During the relapse, the patient experiences worsening of neurologic function—the body or mind will not function properly. The relapse is followed by either partial or total recovery, called remissions, when symptoms are alleviated. About 85 percent of MS patients fall into this category (National Multiple Sclerosis Society, 2013).

Primary-Progressive: The disease slowly and consistently gets worse with no relapses or remissions. Progression of the disease occurs over time and the patient may experience temporary slight improvements of functioning. About 10 percent of MS patients fall into this category (National Multiple Sclerosis Society, 2013).

Secondary-Progressive: Patient appears to have relapsing-remitting MS, but over time the disease becomes steadily worse. There may or may not be plateaus, flare-ups, or remissions. About half the people originally diagnosed with relapsing-remitting MS will move into this category within 10 years (National Multiple Sclerosis Society, 2013).

Progressive-Relapsing: Quick disease progression with few, if any, remissions. About 5 percent of MS patients fall into this category at diagnosis (National Multiple Sclerosis Society, 2013).

The cause(s) of multiple sclerosis remain unknown although research suggests that both genetic and environmental factors contribute to the development of the disease (National Multiple Sclerosis Society, 2013; Compston and Coles, 2002). The current prevailing theory is that MS is a complex multifactorial disease based on a genetic susceptibility but requiring an environmental trigger, and which causes tissue damage through inflammatory/immune mechanisms. Widely varying environmental factors have been found to be associated with the disease, ranging from infectious agents to Vitamin D deficiency and smoking. The debate these days revolves primarily around whether immune pathogenesis is primary, or acts secondarily to some other trigger (Braley & Chervin, 2010).

Risk factors for multiple sclerosis include genetics and family history, though it is believed that up to 75% of MS risk must be attributable to non-genetic or environmental factors. Infection is one of the more widely suspected non-genetic risk factors. A commonly held theory is that viruses involved in the development of autoimmune diseases could mimic the proteins found on nerves, making those nerves a target for antibodies. The potential roles of several viruses have been investigated, including herpes simplex virus (HSV), rubella, measles, mumps, and Epstein-Barr virus (EBV). The strongest correlation between a virus and MS exists with EBV—virtually 100% of patients who have MS are seropositive for EBV (the rate in the general public is about 90%)—but potential causality remains strongly debated (Ludwin and Jacobson, 2011).

It is important to keep in mind that infectious agents such as viruses may, in fact, have nothing to do with causing MS. The association of a virus with MS is based on increased antibody response and may be an epiphenomenon of a dysregulated global immune response. "Proving" causality will require consistent molecular findings as well as consistent results from well-controlled clinical trials of virus-specific antiviral therapies (yet to be developed). In the end, any theory concerning causality in MS should also account for the strong association with other environmental factors such as Vitamin D deficiency and smoking. Indeed, a landmark study found that, compared to those with the highest levels of Vitamin D, those with the lowest blood levels were 62% more likely to develop MS. Additionally, a literature review evaluating more than 3000 MS cases and 45,000 controls indicates that smoking increases the risk of developing MS by approximately 50% (Life Extension Foundation, 2013).
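For readers unfamiliar with epidemiological shorthand, figures like "62% more likely" and "increases the risk by approximately 50%" are relative risks. The short sketch below shows the underlying arithmetic; the cohort counts are invented for illustration and are not taken from the studies cited above.

```python
# Illustrative arithmetic only: the cohort counts below are hypothetical,
# not figures from the studies cited in the text.
def relative_risk(cases_exposed, n_exposed, cases_unexposed, n_unexposed):
    """Risk in the exposed group divided by risk in the unexposed group."""
    return (cases_exposed / n_exposed) / (cases_unexposed / n_unexposed)

# e.g., lowest vs. highest vitamin D levels in a hypothetical cohort:
rr = relative_risk(cases_exposed=81, n_exposed=10_000,
                   cases_unexposed=50, n_unexposed=10_000)
print(f"relative risk: {rr:.2f}")  # 1.62, i.e., "62% more likely"
```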

Recently, researchers have pinpointed a specific toxin they believe may be responsible for the onset of MS. Epsilon toxin—a byproduct of the bacterium Clostridium perfringens—is able to permeate the blood-brain barrier and has been demonstrated to kill oligodendrocytes and meningeal cells. Loss of oligodendrocytes and meningeal inflammation are both part of the MS disease process, and may be triggered by exposure to epsilon toxin.

The fact that females are more susceptible to inflammatory autoimmune diseases, including multiple sclerosis, points to the potential role of hormones in the etiology of multiple sclerosis. Interestingly, the course of disease is affected by the fluctuation of steroid hormones during the female menstrual cycle and female MS patients generally experience clinical improvements during pregnancy (Life Extension Foundation, 2013). Additionally, pregnancy appears to be protective against the development of MS. A study in 2012 demonstrated that women who have been pregnant two or more times had a significantly reduced risk of developing MS, while women who have had five or more pregnancies had one-twentieth the risk of developing MS compared to women who were never pregnant. (The increase in MS prevalence over the last few decades could reflect the fact that women are having fewer children.) A growing body of evidence supports the therapeutic potential of hormones (both testosterone and estrogens) in animal models of multiple sclerosis, but more research is needed to understand the pathways and mechanisms underlying the beneficial effects of sex hormones on MS pathology (Gold and Voskuhl, 2009).

No single test gives a definitive diagnosis for MS, and variable symptoms and disease course make early diagnosis a challenge. Most diagnoses are presumptive and are based on the clinical symptoms seen in an acute attack. Supporting evidence of these presumptions is then sought, usually from a combination of magnetic resonance imaging (MRI) of the brain, testing the cerebrospinal fluid (CSF) for antibodies, measuring the efficiency of nerve impulse conduction, and monitoring symptoms over time.

As there is still much work to be done in understanding the nature of multiple sclerosis, a cure has yet to be discovered. Conventional medical treatment typically focuses on strategies to treat acute attacks, to slow the progression of the disease, and to treat symptoms. Corticosteroids such as methylprednisolone are the first line of defense against acute MS attacks and are administered in high doses to suppress the immune system and decrease the production of proinflammatory factors. Plasma exchange is also used to physically remove antibodies and proinflammatory factors from the blood.

The use of beta interferons is a longstanding MS treatment strategy, originally envisioned as an antiviral compound. Beta interferons reduce inflammation and slow disease progression, but the mechanism of action is poorly understood. Other immunosuppressant drugs such as mitoxantrone and fingolimod also slow disease progression, but are not used as first-line treatments due to their severe side effects. More recently, researchers at Oregon Health & Science University have noted that an antioxidant called MitoQ has been shown to significantly reverse symptoms in a mouse model of MS (Mao, Manczak, Shirendeb, and Reddy, 2013).

Besides pharmacological treatments, MS patients may benefit from therapies (such as physical and speech therapy) and from an optimized nutritional protocol. Supplementation with Vitamin D, omega-3 and -6 fatty acids, Vitamin E, lipoic acid, Vitamin B12, and Coenzyme Q10 appears to be of particular potential benefit (Life Extension Foundation, 2013). Until a definitive cause for MS can be identified and a cure developed, such strategies, including hormone therapy, offer possible ways to improve quality of life over the course of disease progression.

Unlike Alzheimer's disease, there does not appear to be a Mendelian variant of MS that will invariably produce the disease in people who have the gene. A somewhat puzzling variable is that MS tends to occur predominantly between the ages of 20 and 50. This appears to exclude approaching MS as a form of immunosenescence. After all, if MS were a function of the aging immune system, we would see progressively more cases of MS as people get older (or in AIDS patients), ultimately involving many very old people. More likely, MS is a non-age-related form of immune system dysfunction that is triggered by environmental factors (such as a viral infection). While many discussions about the role of viruses in debilitating diseases like Alzheimer's and MS still suffer from an incomplete understanding of cause and effect, it seems reasonable to conclude that enhancement of the human immune system could greatly reduce disease and improve the quality of life, even in healthy humans.

One potential treatment for MS is to induce remyelination (or inhibit processes that interfere with efficient remyelination). Stem cells can be administered to produce oligodendrocyte precursor cells, which in turn produce the oligodendrocyte glial cells responsible for remyelination of axons. While the myelin sheaths of these remyelinated axons are not as thick as the myelin sheaths formed during development, remyelination can improve conduction velocity and prevent the destruction of axons. While the dominant repair strategies envisioned for cryonics involve molecular nanotechnologies that can build any biochemical structure that physical law permits, it is encouraging to know that specific stem cell therapies may be available to repair and restore myelin function in cryonics patients, as damage to myelin should be expected as a result of (prolonged) ischemia and cryoprotectant toxicity.

An interesting possibility is that remyelination therapies may also be used for human enhancement if these therapies can be tweaked to improve conduction velocity in humans or to induce certain desirable physiological responses by varying the composition and strength of the myelin sheath in various parts of the central nervous system.

References

Apatoff, Brian R. (2002). MS on the rise in the US. Neurology Alert 20(7), 55(2).

Braley, Tiffany J., Chervin, Ronald D. (2010). Fatigue in Multiple Sclerosis: Mechanisms, evaluation, and treatment. Sleep 33(8), 1061-1067.

Compston, Alastair, and Coles, Alasdair (2002). Multiple sclerosis. The Lancet 359(9313), 1221-1231.

Gold, Stefan M., and Voskuhl, Rhonda R. (2009). Estrogen and testosterone therapies in multiple sclerosis. Progress in Brain Research 175, 239-251.

Life Extension Foundation (2013). Multiple Sclerosis, in: Disease Prevention and Treatment, 5th edition, 947-956.

Ludwin, S.K., and Jacobson, S. (2011). Epstein-Barr virus and MS: Causality or association? The International MS Journal 17(2), 39-43.

Mao, Peizhong, Manczak, Maria, Shirendeb, Ulziibat P., and Reddy, P. Hemachandra (2013). MitoQ, a mitochondria-targeted antioxidant, delays disease progression and alleviates pathogenesis in an experimental autoimmune encephalomyelitis mouse model of multiple sclerosis. Biochimica et Biophysica Acta Molecular Basis of Disease 1832(12), 2322-2331.

Midgard, R. (2001). Epidemiology of multiple sclerosis: an overview. Journal of Neurology, Neurosurgery and Psychiatry 71(3), 422.

Originally published as an article (in the Cooler Minds Prevail series) in Cryonics magazine, February, 2014

Though She Isn’t Really Ill, There’s a Little Yellow Pill…

Humans have been ingesting mind- and mood-altering substances for millennia, but it has only rather recently become possible to begin to elucidate drug mechanisms of action and to use this information, along with our burgeoning knowledge of neuroscience, to design drugs intended to have a specific effect. And though most people think of pharmaceuticals as "medicine," it has become increasingly popular to discuss the possibilities for the use of drugs in enhancement, or improvement of "human form or functioning beyond what is necessary to sustain or restore good health" (E.T. Juengst; in Parens, 1998, p. 29).

Some (transhumanists) believe that enhancement may not only be possible, but that it may even be a moral duty. Others (bioconservatives) fear that enhancement may cause us to lose sight of what it means to be human altogether. It is not the intention of this article to advocate enhancement or to denounce it. Instead, let's review some of the drugs (and/or classes of drugs) that have been identified as the most promising cognitive or mood enhancers. Many of the drugs we will cover can be read about in further depth in "Botox for the brain: enhancement of cognition, mood and pro-social behavior and blunting of unwanted memories" (Jongh, R., et al., Neuroscience and Biobehavioral Reviews 32 (2008): 760-776).

When considering potential cognitive enhancer drugs, it is most important to keep in mind that, to date, no "magic bullets" appear to exist. That is, there are no drugs exhibiting such specificity as to have only the primary, desired effect. Indeed, a general principle of trade-offs (particularly in the form of side effects) appears to hold when it comes to drug administration for any purpose, whether treatment or enhancement. Such facts may constitute barriers to the practical use of pharmacological enhancers and should be taken into consideration when discussing the ethics of enhancement.

Some currently available cognitive enhancers include donepezil, modafinil, dopamine agonists, guanfacine, and methylphenidate. There are also efforts underway to develop memory-enhancing drugs, and we will discuss a few of the mechanisms by which they are proposed to act. Besides cognitive enhancement, the enhancement of mood and prosocial behavior in normal individuals may also be effected pharmacologically, most usually by antidepressants or oxytocin. Let's briefly cover the evidence for the efficacy of each of these in enhancing cognition and/or mood before embarking on a discussion of the general principles and ethical concerns of enhancement.

One of the most widely cited cognitive enhancement drugs is donepezil (Aricept®), an acetylcholinesterase inhibitor. In 2002, Yesavage et al. reported improved retention of training in healthy pilots tested in a flight simulator. In this study, after training in a flight simulator, half of the 18 subjects took 5 mg of donepezil for 30 days and the other half were given a placebo. The subjects returned to the lab to perform two test flights on day 30. The donepezil group was found to perform similarly to their initial test flight, while the placebo group's performance declined. These results were interpreted as an improvement in the ability to retain a practiced skill. However, it seems possible that the better performance of the donepezil group was due instead to improved attention or working memory during the test flights on day 30.

Another experiment by Gron et al. (2005) looked at the effects of donepezil (5 mg/day for 30 days) on performance of healthy male subjects on a variety of neuropsychological tests probing attention, executive function, visual and verbal short-term and working memory, semantic memory, and verbal and visual episodic memory. They reported a selective enhancement of episodic memory performance, and suggested that the improved performance in Yesavage et al.’s study is not due to enhanced visual attention, but to increased episodic memory performance.

Ultimately, there is scarce evidence that donepezil improves retention of training. Better designed experiments need to be conducted before we can come to any firm conclusions regarding its efficacy as a cognitive enhancer.

The wake-promoting agent modafinil (Provigil®) is another currently available drug that is purported to have cognitive enhancing effects. Provigil® is indicated for the treatment of excessive daytime sleepiness and is often prescribed to those with narcolepsy, obstructive sleep apnea, and shift work sleep disorder. Its mechanisms of action are unclear, but it has been proposed that modafinil increases hypothalamic histamine release, thereby promoting wakefulness by indirect activation of the histaminergic system. However, some suggest that modafinil works by inhibiting GABA release in the cerebral cortex.

In normal, healthy subjects, modafinil (100-200 mg) appears to be an effective countermeasure for sleep loss. In several studies, it sustained alertness and performance of sleep-deprived subjects (up to 54.5 hours of sleep deprivation) and has also been found to improve subjective attention and alertness, spatial planning, stop signal reaction time, digit span, and visual pattern recognition memory. However, at least one study (Randall et al., 2003) reported "increased psychological anxiety and aggressive mood" and failed to find an effect on more complex forms of memory, suggesting that modafinil enhances performance only in very specific, simple tasks.

The dopamine agonists d-amphetamine, bromocriptine, and pergolide have all been shown to improve cognition in healthy volunteers, specifically working memory and executive function. Historically, amphetamines have been used by the military during World War II and the Korean War, and more recently as a treatment for ADHD (Adderall®). But usage statistics suggest that it is commonly used for enhancement by normal, healthy people—particularly college students.

Interestingly, the effect of dopaminergic augmentation appears to follow an inverted-U relationship between endogenous dopamine levels and working memory performance. Several studies have provided evidence for this by demonstrating that individuals with a low working-memory capacity show greater improvements after taking a dopamine receptor agonist, while high-span subjects either do not benefit at all or show a decline in performance.

Guanfacine (Intuniv®) is an α2 adrenoceptor agonist, also indicated for treatment of ADHD symptoms in children, but one that works by increasing norepinephrine levels in the brain. In healthy subjects, guanfacine has been shown to improve visuospatial memory (Jakala et al., 1999a; Jakala et al., 1999b), but the beneficial effects were accompanied by sedative and hypotensive effects (i.e., side effects). Other studies have failed to replicate these cognitive enhancing effects, perhaps due to differences in dosages and/or subject selection.

Methylphenidate (Ritalin®) is a well-known stimulant that works by blocking the reuptake of dopamine and norepinephrine. In healthy subjects, it has been found to enhance spatial working-memory performance. Interestingly, as with dopamine agonists, an inverted-U relationship was seen, with subjects with lower baseline working memory capacity showing the greatest improvement after methylphenidate administration.

Future targets for enhancing cognition generally focus on enhancing plasticity by targeting glutamate receptors (responsible for the induction of long-term potentiation) or by increasing CREB (known to strengthen synapses). Drugs targeting AMPA receptors, NMDA receptors, or the expression of CREB have all shown some promise in cognitive enhancement in animal studies, but few experiments have been carried out to determine effectiveness in normal, healthy humans.

Beyond cognitive enhancement, there is also the potential for enhancement of mood and pro-social behavior. Antidepressants are the first drugs that come to mind when discussing the pharmacological manipulation of mood, including selective serotonin reuptake inhibitors (SSRIs). Used for the treatment of mood disorders such as depression, SSRIs are not indicated for normal people of stable mood. However, some studies have shown that administration of SSRIs to healthy volunteers resulted in a general decrease of negative affect (such as sadness and anxiety) and an increase in social affiliation in a cooperative task. Such decreases in negative affect also appeared to induce a positive bias in information processing, resulting in decreased perception of fear and anger from facial expression cues.

Another potential use for pharmacological agents in otherwise healthy humans would be to blunt unwanted memories by preventing their consolidation. This may be accomplished by post-training disruption of noradrenergic transmission (as with the β-adrenergic receptor antagonist propranolol). Propranolol has been shown to impair the long-term memory of emotionally arousing stories (but not emotionally neutral stories) by blocking the enhancing effect of arousal on memory (Cahill et al., 1994). In a particularly interesting study making use of patients admitted to the emergency department, post-trauma administration of propranolol reduced physiologic responses during mental imagery of the event 3 months later (Pitman et al., 2002). Further investigations have supported the memory blunting effects of propranolol, possibly by blocking the reconsolidation of traumatic memories.

GENERAL PRINCIPLES

Reviewing these drugs and their effects leads us to some general principles of cognitive and mood enhancement. The first is that many drugs have an inverted U-shaped dose-response curve, where low doses improve and high doses impair performance. This is potentially problematic for the practical use of cognition enhancers in healthy individuals, especially when doses that are most effective in facilitating one behavior simultaneously exert null or detrimental effects on other behaviors.

Second, a drug's effect can be "baseline dependent," where low-performing individuals experience greater benefit from the drug while higher-performing individuals do not see such benefits (which might simply reflect a ceiling effect), or may, in fact, see a deterioration in performance (which points to an inverted U-model). In the case of an inverted U-model, low-performing individuals are found on the up slope of the inverted U and thus benefit from the drug, while high-performing individuals are located near the peak of the inverted U already and, in effect, experience an "overdose" of neurotransmitter that leads to a decline in performance.
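A toy model makes this baseline dependence concrete. Suppose performance is a quadratic (inverted-U) function of "effective dose," meaning endogenous baseline plus administered drug. All numbers in the Python sketch below are arbitrary illustrations, not fits to any study.

```python
# Toy inverted-U model: performance peaks at an optimal "effective dose"
# (endogenous baseline + administered drug); all constants are arbitrary.
def performance(baseline, dose, optimal=1.0, peak=100.0, k=60.0):
    effective = baseline + dose
    return peak - k * (effective - optimal) ** 2

for baseline in (0.4, 0.9):  # low- vs. high-baseline individual
    before = performance(baseline, dose=0.0)
    after = performance(baseline, dose=0.3)
    print(f"baseline {baseline}: {before:.1f} -> {after:.1f}")

# baseline 0.4: 78.4 -> 94.6  (up-slope individual improves)
# baseline 0.9: 99.4 -> 97.6  (near-peak individual is, in effect, overdosed)
```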

Trade-offs exist in the realm of cognitive enhancing drugs as well. As mentioned, unwanted “side effects” are often experienced with drug administration, ranging from mild physiological symptoms such as sweating to more concerning issues like increased agitation, anxiety, and/or depression.

More specific trade-offs may come in the form of improvement of one cognitive ability at the expense of another. Some examples of this include the enhancement of long-term memory but deterioration of working memory with the use of drugs that activate the cAMP/protein kinase A (PKA) signaling pathway. Another trade-off could occur between the stability versus the flexibility of long-term memory, as in the case of certain cannabinoid receptor antagonists which appear to lead to more robust long-term memories, but which also disrupt the ability of new information to modify those memories. Similarly, a trade-off may exist between stability and flexibility of working memory. Obviously, pharmacological manipulations that increase cognitive stability at the cost of a decreased capacity to flexibly alter behavior are potentially problematic in that one generally does not wish to have difficulty in responding appropriately to change.

Lastly, there is a trade-off involving the relationship between cognition and mood. Many mood-enhancing drugs, such as alcohol and even antidepressants, impair cognitive functioning to varying degrees. Cognition-enhancing drugs may also impair emotional functions. Because cognition and emotion are intricately regulated through interconnected brain pathways, inducing change in one area may have effects in the other. Much more research remains to be performed to elucidate these interactions before we can come to any firm conclusions.

ETHICAL CONCERNS

Again, though it is not the place of this article to advocate or denounce the use of drugs for human enhancement, obviously there are considerable ethical concerns when discussing the administration of drugs to otherwise healthy human beings. First and foremost, safety is of paramount importance. The risks and side effects of drug use, including physical and psychological dependence and long-term effects, should be considered and weighed heavily against any potential benefits.

Societal pressure to take cognitive enhancing drugs is another ethical concern, especially in light of the fact that many may not actually produce benefits to the degree desired or expected. In the same vein, the use of enhancers may give some a competitive advantage, thus leading to concerns regarding fairness and equality (as we already see in the case of physical performance-enhancing drugs such as steroids). Additionally, it may be necessary, but very difficult, to make a distinction between enhancement and therapy in order to define the proper goals of medicine, to determine health-care cost reimbursement, and to “discriminate between morally right and morally problematic or suspicious interventions” (Parens, 1998). Of particular importance will be determining how to deal with drugs that are already used off-label for enhancement. Should they be provided by physicians under certain conditions? Or should they be regulated in the private commercial domain?

There is an interesting argument that using enhancers might change one's authentic identity—that enhancing mood or behavior will lead to a personality that is not really one's own (i.e., inauthenticity), or even dehumanization—while others argue that such drugs can help users to "become who they really are," thereby strengthening their identity and authenticity. Lastly, according to the President's Council on Bioethics, enhancement may "threaten our sense of human dignity and what is naturally human" (The President's Council, 2003). According to the Council, "the use of memory blunters is morally problematic because it might cause a loss of empathy if we would habitually 'erase' our negative experiences, and because it would violate a duty to remember and to bear witness of crimes and atrocities." On the other hand, many people believe that we are morally bound to transcend humans' basic biological limits and to control the human condition. But even they must ask: what is the meaning of trust and relationships if we are able to manipulate them?

These are all questions without easy answers. It may be some time yet before the ethical considerations of human cognitive and mood enhancement really come to a head, given the apparently limited benefits of currently available drugs. But we should not avoid dealing with these issues in the meantime; for there will come a day when significant enhancement, whether via drugs or technological means, will be possible and available. And though various factions may disagree about the morality of enhancement, one thing is for sure: we have a moral obligation to be prepared to handle the consequences of enhancement, both positive and negative.

Originally published as an article (in the Cooler Minds Prevail series) in Cryonics magazine, December, 2013

An End to the Virus

Breakthroughs in medicine have increased substantially over the last hundred years, and most would agree that the introduction of antibiotics in 1942 has been one of the largest milestones in the history of medicine thus far. The success in treating bacterial infection has only accentuated the glaring lack of progress in developing effective therapeutics for those other enemies of the immune system, viruses. But Dr. Todd Rider and his team at MIT have dropped a bombshell with their announcement of a new broad-spectrum antiviral therapeutic, DRACO, which appears not only to cure the common cold, but potentially to halt or prevent infections by all known viruses.

Before talking specifically about this exciting news, let us first review viral biology and why viral infections have been so difficult to treat.

As you may recall from your early education, a virus particle, or virion, consists of DNA or RNA surrounded only by a protein coat (i.e., naked virus) or, occasionally, a protein coat and a lipid membrane (i.e., enveloped virus). Viruses have no organelles or metabolism and do not reproduce on their own, so they cannot function without using the cellular machinery of a host (bacteria, plant, or animal).

Viruses can be found all throughout our environment and are easily picked up and transferred to areas where they may enter our bodies, usually through the nose, mouth, or breaks in the skin. Once inside the host, the virus particle finds a host cell to infect so it can reproduce.

There are two ways that viruses reproduce. The first is by attaching to the host cell and entering it, or by injecting viral DNA/RNA into the cell. This causes the host cell to make copies of the viral genome and to transcribe and translate it into viral proteins. The host cell assembles new viruses and releases them when the cell breaks apart and dies, or it buds the new viruses off, which preserves the host cell. This approach is called the lytic cycle.

The second way that viruses reproduce is to hide within the host cell's own genome. A viral enzyme called reverse transcriptase makes a segment of DNA from the viral RNA using host materials. The DNA segment gets incorporated into the host cell's DNA. There, the viral DNA lies dormant and is reproduced along with the host cell. When some environmental cue occurs, the viral DNA takes over, makes viral RNA and proteins, and uses the host cell machinery to assemble new viruses, which then bud off. This approach is called the lysogenic cycle. Viruses that replicate through reverse transcriptase are called retroviruses and include HIV; herpes viruses establish similar latent infections, although they are not retroviruses.

Once free from the host cell, the new viruses can attack other cells and produce thousands more virus particles, spreading quickly throughout the body. The immune system responds quickly by producing proteins that interfere with viral replication, pyrogenic chemicals that raise body temperature, and signals that induce cell death (apoptosis). In some cases simply continuing the natural immune response is enough to eventually halt viral infection. But the virus kills many host cells in the meantime, leading to symptoms ranging from the characteristic runny nose and sore throat of a cold (rhinovirus) to the muscle aches and coughing associated with the flu (influenza virus).

Any virus can be deadly, especially to hosts with a weakened immune system, such as the elderly, small children, and persons with AIDS (though death is actually often due to a secondary bacterial infection). And any viral infection will cause pain and suffering, making treatment a very worthwhile goal. So far, the most successful approach to stopping viral infections has been prevention through the ubiquitous use of vaccines. The vaccine—either a weakened form of a particular virus or a mimic of one—stimulates the immune system to produce antibodies specific to that virus, thereby preventing infection when the virus is encountered in the environment. In another approach, antiviral medications are administered post-infection and work by targeting some of the specific ways that viruses reproduce.

However, viruses are very difficult to defeat. They vary enormously in genetic composition and physical conformation, making it difficult to develop a treatment that works for more than one specific virus. The immense number of viral types in nature makes even their classification a monumental job, as there is enormous structural diversity among viruses. Viruses have been evolving much longer than any cells have even existed, and they have evolved methods to avoid detection and to overcome attempts to block replication. So, while we have made some progress in individual battles, those pesky viruses have definitely been winning the war.

Which is why the announcement of a broad spectrum antiviral therapeutic agent is such huge news. In their paper, Rider et al. describe a drug that is able to identify cells infected by any type of virus and which is then able to specifically kill only the infected cells to terminate the infection. The drug, named DRACO (which stands for Double-stranded RNA (dsRNA) Activated Caspase Oligomerizer), was tested against 15 viruses including rhinoviruses, H1N1 influenza, polio virus, and several types of hemorrhagic fever. And it was effective against every virus it was pitted against.

Dr. Rider looked closely at living cells' own defense mechanisms in order to design DRACO. First, he observed that all known viruses make long strings of double-stranded RNA (dsRNA) during replication inside of a host cell, and that long dsRNA is not found in human or other cells. As part of the natural immune response, human cells have proteins that latch onto dsRNA and start a biochemical cascade that prevents viral replication. But many viruses have evolved to overcome this response quite easily. So Rider combined dsRNA detection with a more potent weapon: apoptosis, or cell suicide.

Basically, the DRACO consists of two ends. One end identifies dsRNA and the other end induces cells to undergo apoptosis. When the DRACO binds to dsRNA it signals the other end of the DRACO to initiate cell suicide, thus killing the infected cell and terminating the infection. Beautifully, the DRACO also carries a protein that allows it to cross cell membranes and enter any human or animal cell. But if no dsRNA is present, it simply does nothing, leaving the cell unharmed.
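In engineering terms, DRACO behaves like an AND gate: it enters every cell, but acts only where the viral signature is present. The following Python sketch is a conceptual model of that decision logic only, not of the underlying biochemistry.

```python
# Conceptual model of DRACO's two-domain logic; not the biochemistry.
class Cell:
    def __init__(self, infected: bool):
        self.infected = infected
        self.alive = True

    def contains_long_dsRNA(self) -> bool:
        # All known viruses produce long dsRNA during replication;
        # healthy human cells do not.
        return self.infected

def draco(cell: Cell) -> None:
    # The transduction tag lets DRACO enter any cell, but the apoptosis
    # end fires only when the dsRNA-detecting end is engaged.
    if cell.contains_long_dsRNA():
        cell.alive = False  # infected cell self-destructs, ending the infection

tissue = [Cell(infected=True), Cell(infected=False), Cell(infected=True)]
for c in tissue:
    draco(c)
print([c.alive for c in tissue])  # [False, True, False]: only infected cells die
```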

An interesting question is whether any viruses are actually beneficial and whether wiping all viruses out of an organismal system may have negative consequences (as happens when antibiotic treatment eradicates both invading pathogenic bacteria and non-pathogenic flora, often leading to symptoms such as digestive upset). After his recent presentation at the 6th Strategies for Engineered Negligible Senescence (SENS) conference in September 2013, Dr. Rider fielded this question and stated quite adamantly that there are no known beneficial, symbiotic, or non-harmful viruses. This point is further emphasized in a recently published interview in which he is asked whether DRACO-triggered cell death could lead to a lesion in a tissue or organ. Rider responds that "Virtually all viruses will kill the host cell on the way out. Of the handful that don't, your own immune system will try to kill those infected cells. So we're not really killing any more cells with our approach than we already have been. It's just that we're killing them at an early enough stage before they infect and ultimately kill more cells. So, if anything, this limits the amount of cell death."

So far, DRACO has been tested in cellular culture and in mouse models against a variety of very different virus types. Rider hopes to license DRACO to a pharmaceutical company so that it can be assessed in larger animal trials and, ultimately, human trials. Unfortunately, it may take a decade or more to complete this process and make the drug available for human therapeutic purposes, and that’s only if there is enough interest to do so. Amazingly, the DRACO project was started over 11 years ago and has barely survived during that period due to lack of interest and funding. Even now, after the DRACOs have been successfully engineered, produced, and tested, no one has yet reached out to Rider about taking them beyond the basic research stage. Let us hope that those of us who do find this work unbelievably exciting can make enough noise that Rider’s work continues to the benefit of all mankind.

Originally published as an article (in the Cooler Minds Prevail series) in Cryonics magazine, November, 2013

No More Couch Potato

In my review of The SharpBrains Guide to Brain Fitness a couple of months ago, the importance of certain lifestyle choices—particularly physical exercise—to maintain and enhance brain health was emphasized at length. Intuitively, we all know that physical activity is good for us. The metaphorical "couch potato" is assumed to be a person in poor health, precisely because of his or her lack of movement (and, of course, lazily consumed snacks and mind-numbing television). But even those of us who admonish the couch potato are moving our bodies a lot less these days due to an increase in the number of jobs requiring long periods of sitting. And current research is clear: all that sitting is taking a toll on our health.

So we know we need to get up and get moving. But what kind of exercise is best? So far, cardiovascular, or aerobic, exercise has received most of the attention in the literature. Because it is light-to-moderate in intensity and long in duration, aerobic exercise increases heart rate and circulation for extended periods, which is presumed to trigger biochemical changes in the brain that spur neuroplasticity—the production of new connections between neurons and even of new neurons themselves. It appears that the best regimen of aerobic exercise incorporates, at a minimum, three 30 to 60 minute sessions per week. In short, plenty of research has found that myriad positive physical and cognitive health benefits are correlated with aerobic exercise.

But what about non-aerobic exercise, such as strength training? The truth is that very little is known about the effects of non-aerobic exercise on cognitive health. What few studies exist show a positive effect of strength training on cognitive health, but the findings are definitely less conclusive than the plethora of evidence supporting aerobic exercise.

However, a lack of research should not be interpreted as negative results. I think non-aerobic exercise has received less research attention because, well, it is harder and appears less accessible than aerobic exercise. It is probably easier to get research participants to commit to a straightforward exercise regimen that doesn’t involve a lot of explanation or study to figure out. Let’s face it: pushing pedals on a stationary bike requires less mental effort than figuring out how to perform weight-bearing exercises with good form.

At worst, we may ultimately discover that non-aerobic exercise has no cognitive benefits. But let’s not throw the baby out with the bathwater. Because strength training does, in fact, promote a number of physical effects that are of great overall benefit to health, especially to the aging individual. Indeed, one would be remiss to omit strength training from any exercise regimen designed to promote healthy aging and a long, physically fit life.

The primary, and most obvious, effect of strength training is that of muscle development, or hypertrophy. Muscles function to produce force and motion and skeletal muscles are responsible for maintaining and changing posture, locomotion, and balance. Anyone who wishes to look and feel strong, physically capable, and well-balanced would do well to develop the appropriate muscles to reach these goals. Muscle mass declines with age, so it is smart to build a reserve of muscle in a relatively youthful state and to maintain it with regular workouts for as long as possible. Doing so will stave off the functional decline known as frailty, a recognized geriatric syndrome associated with weakness, slowing, decreased energy, lower activity, and unintended weight loss.

Those who know me know that I am very, very thin. At 5 feet 9 inches, it has always been a struggle to maintain my weight above 90 lbs.—a full 40 lbs. underweight for a woman of my height. This is almost certainly due, in large part, to genetics (my parents are both rail-thin), and no amount of eating has ever worked to put on additional pounds. Over the years, I grew more concerned about what being underweight meant in terms of disease risks as I age. In particular, dual energy x-ray absorptiometry (DEXA) scans for bone mineral density at ages 27 and 33 showed accelerated bone loss beyond what is normal for my age. I was on a trajectory for a diagnosis of osteoporosis by my mid-40s.

Besides ensuring adequate calcium intake, I knew that the best prescription for slowing down bone loss is to perform weight-bearing exercises. Strength training causes the muscles to pull on the bone, resulting in increased bone strength. Strength training also increases muscle strength and flexibility, which reduces the likelihood of falling—the number-one risk factor for hip fracture.

So I dusted off my long-unused gym pass and started strength training 3 to 4 times a week. I was too weak to even lift weights in the beginning, so I started with body weight exercises and gradually progressed to weight machines. Weight machines allow you to build strength and to gain an understanding of how an exercise works a particular muscle or group of muscles. Many machines also have a limited range of motion within which to perform the exercise, providing some guidance on how to perform the movement. As I made improvements in strength, I began reading about strength training exercises online and downloaded some apps to help me in the gym.

For a basic “how-to,” nothing beats a video. There are plenty of exercise demonstration videos on YouTube.com and several other sites, but I prefer the definitive (and straight-to-the-point) visual aids provided by Bodybuilding.com. They offer short instructional videos for just about every strength training exercise in existence. The videos also download quickly and play easily on a mobile device, in case you need a refresher in the gym.

There are a lot of great apps out there, too. My favorites so far include PerfectBody (and associated apps by the same developer), GymPact, and Fitocracy. PerfectBody provides weekly workout routines, complete with illustrated descriptions of exercises and the ability to track your progress by documenting weight lifted and number of repetitions (reps) for each exercise. It is an all-in-one fitness program for learning foundational exercises and building strength and confidence in the gym.

If you have a hard time committing to a workout schedule, GymPact may help. One of the latest in a series of apps that make you put your money where your mouth is, GymPact has you agree to go to the gym a minimum number of times per week in order to earn monetary rewards for doing so. The catch is that you are charged money if you fail to meet your pact (which helps to pay all those committed gym-goers who didn’t renege on their promises). For many, the thought of losing money can provide quite the incentive to get your tail to the gym.

Now that you’ve got exercise examples, progress tracking, and motivation to actually get to the gym, how about some fun? Fitocracy is an app that turns exercise into a game, letting you track your exercise in return for points and “level ups” like a video game. There are challenges to meet and quests to conquer, adding to the competitive game-play element. But there’s also a nice social aspect, with friends and groups enabling people to “prop” one another and to provide support and advice.

Once you start pumping iron, you may quickly realize a need for nutrition adequate to meet your new muscle-building goals. As we all know, protein is the most important nutrient for building muscle. And while I will not attempt to provide advice regarding the appropriate nutrient ratio for the calories you consume each day, I can tell you that a commonly cited rule of thumb for supporting muscle growth is to get about 1 gram of protein per pound of body weight per day.
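
To make that rule of thumb concrete, here is a minimal sketch in Python. The body weight and the four-meal split are hypothetical examples for illustration, not recommendations:

    # Daily protein target using the "1 gram per pound of body weight"
    # rule of thumb mentioned above. Inputs are hypothetical examples.
    def daily_protein_target(body_weight_lb, grams_per_lb=1.0):
        """Return a daily protein target in grams."""
        return body_weight_lb * grams_per_lb

    weight_lb = 130                              # hypothetical lifter
    target_g = daily_protein_target(weight_lb)   # 130 g/day
    per_meal_g = target_g / 4                    # ~33 g per meal over 4 meals
    print(f"{target_g:.0f} g/day, about {per_meal_g:.0f} g per meal")

Splitting the total across several meals or shakes is one practical way to actually reach a daily number that large.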

Adequate protein consumption is necessary even if you are not strength training, and it becomes even more important as you age. Reduced appetite and food intake, impaired nutrient absorption, and age-related medical and social changes often result in malnourishment. An insufficient intake of protein, in particular, can lead to loss of muscle mass, reduced strength, and other deficits that contribute to frailty.

It seems that whey protein provides the ultimate benefits in this arena. Whey, which is derived from milk, is a high-quality protein supplement rich in branched-chain amino acids (BCAAs), which stimulate protein synthesis and inhibit protein breakdown, helping to prevent age-related muscle wasting (i.e., sarcopenia). Besides muscle support, a growing number of studies indicate other positive, anti-aging effects of whey, such as antioxidant enhancement, anti-hypertensive and hypoglycemic effects, and the promotion of bone formation and suppression of bone resorption. Life Extension Foundation recently reported that these effects mimic the benefits of calorie restriction without a reduction of food intake, playing roles in hormone secretion and action, intracellular signaling, and regulation of gene transcription and translation.

There are many whey protein powder supplements on the market in a variety of formulations and flavors. Whey protein isolate is quickly absorbed and incorporated into muscles, making it a good post-workout option, whereas whey protein concentrate is absorbed and incorporated more slowly, making it ideal for consumption just before bedtime. A whey protein powder may consist of isolate only, concentrate only, or both. Choose what best meets your needs and purposes.

Flavor is an important factor to consider, as well. Most major brands offer a variety of flavors such as vanilla, chocolate, strawberry, and some exotic options. Unflavored powders are sometimes available and are a great neutral protein base for mixing into (green) smoothies or other recipes. Some whey protein powders may actually include sugars to “improve” taste, so make sure to read the ingredients. Even many zero-carb powders are still quite sweet. Many brands offer sample-size packets, which can be very helpful in determining whether or not you like a particular flavor or overall taste prior to buying an entire container.

Lastly, consider the sources of whey protein powder ingredients carefully. Not all whey is created equal: many commercial brands derive their ingredients from dubious sources or from animals treated with hormones and living in less-than-stellar conditions. But there are many great products out there, including Life Extension’s New Zealand Whey Protein Concentrate, which is derived from grass-fed, free-range cows living healthy lives in New Zealand and not treated with recombinant bovine growth hormone (rBST). If you have reservations about whey protein, there are also alternative protein powders derived from plants or egg white.

In summary, while the jury is still out regarding the cognitive benefits of non-aerobic exercise, such exercise is still a very important part of an overall plan to support health and longevity. Adequate nutritional support in the form of whey protein supplementation is generally indicated for its many health benefits, and it is integral to muscle-building efforts. At the very least, strength training should complement brain-boosting aerobic exercise and will help to stave off bone loss and frailty as you age. So erase any preconceived notions you may have had about bodybuilding and start lifting today!

Originally published as an article (in the Cooler Minds Prevail series) in Cryonics magazine, October, 2013

Brain Fitness

Book Review: The SharpBrains Guide to Brain Fitness: How to Optimize Brain Health and Performance at Any Age by Alvaro Fernandez

Of all the organs in the human body, a cryonicist should be most concerned about the health and integrity of his or her brain. Thousands of books have been written about physical health and fitness, but very few address the topic of how to keep the brain fit and healthy. Happily, interest in brain fitness, once relegated to academics and gerontologists, is now taking root across America and the world.

The importance of lifelong learning and mental stimulation as a component of healthy aging has long been recognized and touted as a way to stay mentally alert and to stave off dementia in old age. As with physical exercise, “use it or lose it” appears to apply to our brains too. And now that scientists are learning more about neuroplasticity and how brains change as a result of aging, they have begun to test the effects of various factors on brain health and cognitive ability across the lifespan.

Unfortunately, like much health-related research, the results reported by the media have often been convoluted, confusing, and even contradictory. Products developed by overzealous entrepreneurs make outlandish claims and frequently don’t deliver the purported results. Consumers and professionals alike are left wondering what works and what doesn’t when it comes to maintaining our brains in optimal working condition.

To aid all those navigating the murky waters of brain fitness, enter SharpBrains—a company dedicated to tracking news, research, technology, and trends in brain health and to disseminating information about the applications of brain science innovation. In so doing, they “maintain an annual state-of-the-market consumer report series, publish consumer guides to inform decision-making, produce an annual global and virtual professional conference,” and maintain SharpBrains.com, a leading educational blog and website with over 100,000 monthly readers.

Most recently, SharpBrains has published a book on brain fitness called The SharpBrains Guide to Brain Fitness: How to Optimize Brain Health and Performance at Any Age. A compilation and condensation of information accumulated over the lifespan of the company, The SharpBrains Guide to Brain Fitness emphasizes credible research and goes to great lengths to provide the most up-to-date research results in specific areas of brain fitness, followed by interviews with scientists doing work in those fields. The goal of the guide is to help the reader begin to “cultivate a new mindset and master a new toolkit that allow us [to] appreciate and take full advantage of our brain’s incredible properties…[by] providing the information and understanding to make sound and personally relevant decisions about how to optimize your own brain health and performance.”

The Guide begins by emphasizing that the brain’s many neuronal networks serve distinct functions including various types of memory, language, emotional regulation, attention, and planning. Plasticity of the brain is defined as its lifelong capacity to change and reorganize itself in response to the stimulation of learning and experience—the foundation upon which “brain training” to improve cognitive performance at any age, and to maintain brain health into old age, is predicated.

The difficulty of making sense of the scientific findings on brain health and neuroplasticity is discussed at length, with the finger of blame pointed squarely at the media for reporting only fragments of the research, and often not the most meaningful results. The authors stress that “it is critical to complement popular media sources with independent resources, and above all with one’s own informed judgment.”

The following chapters go on to review what is known today about how physical exercise, nutrition, mental challenge, social engagement, and stress management can positively affect brain health. Along the way they provide dozens of relevant research results (as well as the design of each study) to support their recommendations. Reporting on all of those experiments is beyond the scope of this review, so if you are interested in examining them (and you should be!) please obtain a copy of the Guide for yourself or from your local library.

Physical exercise is discussed first because of the very strong evidence that exercise (especially aerobic, or “cardio,” exercise) slows the atrophy of the brain associated with aging, actually increasing the brain’s volume of neurons (i.e., “gray matter”) and connections between neurons (i.e., “white matter”). While much of the initial research supporting the effects of exercise on the brain came from animal studies, the authors report that “several brain imaging studies have now shown that physical exercise is accompanied by increased brain volume in humans.”

Staying physically fit improves cognition across all age groups, with particularly large benefits for so-called “executive” functions such as planning, working memory, and inhibition. A 2010 meta-analysis by the NIH also concluded that physical exercise is a key factor in postponing cognitive decline and/or dementia, while other studies have found physical exercise to lower the risk of developing Parkinson’s disease, as well.

But don’t think that just any moving around will do the trick. When it comes to providing brain benefits, a clear distinction is drawn between physical activity and physical exercise. Only exercise will trigger the biochemical changes in the brain that spur neurogenesis and support neuroplasticity. It doesn’t need to be particularly strenuous, but to be most beneficial it should raise your heart rate and increase your breathing rate.

Of course, adequate nutrition is also imperative in obtaining and maintaining optimal brain health. The SharpBrains Guide to Brain Fitness primarily highlights the well-known benefits of the Mediterranean diet, which consists of a high intake of vegetables, fruit, cereals, and unsaturated fats, a low intake of dairy products, meat, and saturated fats, a moderate intake of fish, and regular but moderate alcohol consumption. But I think it is safe to say that the jury is still out on the best diet for the brain, as evidenced by the recent popularity of the Paleo diet among life extensionists. And, of course, ethnicity and genetics are important, too. The authors do stress the importance of omega-3 fatty acids and antioxidants obtained from dietary sources, stating firmly that “to date, no supplement has conclusively been shown to improve cognitive functioning, slow down cognitive decline, or postpone Alzheimer’s disease symptoms beyond placebo effect.” This includes herbal supplements such as Ginkgo biloba and St. John’s wort.

Beyond what we normally do to keep our bodies healthy, the Guide also discusses the relative effectiveness of different forms of “mental exercise.” Perhaps you’ve heard that doing crossword or Sudoku puzzles will keep you sharp and alert into old age, or that speaking multiple languages is associated with decreased risk of Alzheimer’s disease. The good news is that these things are true—to a degree. The part that is often left out is that it’s the challenge of these activities that is important. As with physical activity vs. physical exercise, mental exercise refers to the subset of mental activities that are effortful and challenging.

Puzzles and games may be challenging at first, but they (and other mental exercises) can quickly become routine and unchallenging. In order to reap the most benefit from mental exercise, the goal is to be exposed to novelty and increasing levels of challenge. Variety is important for stimulating all aspects of cognitive ability and performance, so excessive specialization is not the best strategy for maintaining long-term brain health. If you are an artist, try your hand at strategy-based games. If you’re an economist, try an artistic activity. Get out of your comfort zone in order to stimulate skills that you rarely use otherwise.

The SharpBrains Guide states that “lifelong participation in cognitively engaging activities results in delayed cognitive decline in healthy individuals and in spending less time living with dementia in people diagnosed with Alzheimer’s disease.” This is hypothesized to be because doing so builds up one’s “cognitive reserve”—literally an extra reservoir of neurons and neuronal connections—which may be utilized so that a person continues to function normally even in the face of underlying Alzheimer’s or other brain pathology. This observation raises another important point on which neuroscientists and physiologists do not yet fully agree. Will we all eventually get dementia if we live long enough without credible brain rejuvenation biotechnologies? This is a topic I would like to return to in a future installment of Cooler Minds Prevail.

Social engagement also appears to provide brain benefits. The NIH meta-analysis mentioned earlier concluded that higher social engagement in mid- to late life is associated with higher cognitive functioning and reduced risk of cognitive decline. Brain imaging studies indicate an effect of social stimulation on the volume of the amygdala, a structure that plays a major role in our emotional responses and which is closely connected to the hippocampus, which is important for memory.

Yet again, not all activity is equal. When it comes to social stimulation, “you can expect to accrue more benefits within groups that have a purpose (such as a book club or a spiritual group) compared to casual social interactions (such as having a drink with a friend to relax after work).” To keep socially engaged across the lifespan, seek out interactions that naturally involve novelty, variety, and challenge such as volunteering and participating in social groups.

“The lifelong demands on any person have changed more rapidly in the last thousand years than our genes and brains have,” The SharpBrains Guide explains in the intro to the chapter on stress management. The result? It has become much more difficult to regulate stress and emotions. It is great that we have such amazing and complex brains, but humans are among the few animals that can get stressed by their own thoughts. And while there are some (potentially) beneficial effects of short bursts of stress, high and sustained levels of stress can have a number of negative consequences. Those of note include: increased levels of blood cortisol, which can lead to sugar imbalances, high blood pressure, loss of muscle tissue and bone density, lowered immunity, and damage to the brain; reduced levels of certain neurotransmitters, such as serotonin and dopamine, which has been linked to depression; and a hampered ability to make the changes needed to reduce the stress, resulting in General Adaptation Syndrome (aka “burnout”).

Research-based lifestyle solutions to combat stress include exercise, relaxation, socialization, humor and laughter, and positive thinking. In particular, targeted, capacity-building techniques such as biofeedback and meditation are recommended to manage stress and build resilience. Mindfulness-Based Stress Reduction (MBSR) programs have provided evidence that meditative techniques can help manage stress, and research shows that MBSR can lead to decreases in the density of an area of the amygdala that correlate with reductions in reported stress.

So it appears that multiple approaches are necessary to develop a highly fit brain capable of adapting to new situations and challenges throughout life. “Consequently,” The SharpBrains Guide to Brain Fitness states, “we expect cross-training the brain to soon become as mainstream as cross-training the body is today, going beyond unstructured mental activity in order to maximize specific brain functions.”

There is growing evidence that brain training can work, but in evaluating what “works” we are mostly looking at two things: how successful the training program is (i.e., does it actually improve the skill(s) being trained?) and the likelihood of transfer from training to daily life. Building on an analysis of documented examples of brain training techniques that “work” or “transfer,” SharpBrains suggests the following five conditions need to be met for brain training to be likely to translate into meaningful real world improvements (condensed excerpt):

  1. Training must engage and exercise a core brain-based capacity or neural circuit identified to be relevant to real-life outcomes.
  2. The training must target a performance bottleneck.
  3. A minimum “dose” of 15 hours total per targeted brain function, performed over 8 weeks or less, is necessary for real improvement [see the arithmetic sketch after this list].
  4. Training must be adaptive to performance, require effortful attention, and increase in difficulty.
  5. Over the long-term, the key is continued practice for continued benefits.
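
Condition 3 is the only quantitative one, and the arithmetic is worth making explicit. Here is a minimal sketch in Python, where the 30-minute session length is a hypothetical choice rather than a SharpBrains figure:

    # Minimum brain-training "dose" per condition 3 above: 15 hours per
    # targeted function within at most 8 weeks. Session length is hypothetical.
    total_hours = 15
    max_weeks = 8
    session_minutes = 30

    hours_per_week = total_hours / max_weeks                   # ~1.9 hours/week
    sessions_per_week = hours_per_week * 60 / session_minutes  # ~3.8 sessions/week
    print(f"{hours_per_week:.1f} h/week, about {sessions_per_week:.1f} "
          f"{session_minutes}-minute sessions per week")

In other words, the minimum dose works out to roughly two hours per week per targeted function, or three to four half-hour sessions.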

Meditation, biofeedback, and/or cognitive therapy, used in concert with cognitive training to optimize targeted brain functions, appear to be winning combinations for facilitating transfer from training to real-life benefits. Top brain training software programs, based on SharpBrains’ analysis and a survey of their users, include Lumosity, Brain games, brainHQ, Cogmed, and emWave.

In the end, brain fitness needs are unique to each individual and brain fitness claims should be evaluated skeptically. SharpBrains recommends asking several questions when evaluating brain fitness claims, particularly whether there is clear and credible evidence of the program’s success documented in peer-reviewed scientific papers published in mainstream scientific journals that analyze the effects of the specific product.

Of course, your own individual experience with the product is ultimately the most important evaluation of all. If you are ready to take the plunge into the emerging brain fitness market, The SharpBrains Guide to Brain Fitness is a good place to start, and I’m sure they’d appreciate your feedback as this field continues to develop.

Originally published as an article (in the Cooler Minds Prevail series) in Cryonics magazine, August, 2013

HIV, Immunosenescence, and Accelerated Aging

After a few articles considering Alzheimer disease from several angles, I would like to switch gears this month and talk more generally about the interaction between the immune system and aging.

In his 2012 paper[1], Caleb E. Finch documents the evolution of life expectancy over the course of human history. The life expectancy at birth of our shared ape ancestor 6 million years ago is hypothesized to approximate that of a chimpanzee, 15 years. The first Homo species appeared 1-2 million years ago and had a life expectancy of ~20 years, while H. sapiens came onto the scene ~100,000 years ago and could expect about 30 years of life. But starting around 200 years ago, concurrent with industrialization, human life expectancy jumped rapidly, to somewhere between 70 and 80 years today.

As many readers are likely aware, the huge recent increases in life expectancy are commonly attributed to improvements in hygiene, nutrition, and medicine during the nineteenth and twentieth centuries that reduced mortality from infections at all ages. Finch hypothesizes, generally, that early-age mortality over the course of human history is primarily due to (acute) infection, while old-age mortality is primarily due to (chronic) inflammation. Further analysis of mortality rates over the last several hundred years leads him to hypothesize further that aging has been slowed in proportion to the reduced exposure to infections in early life. These hypotheses are supported by twentieth-century examples that strongly demonstrate the influence of the early-life environment on adult health, such as the effects of prenatal and postnatal developmental influences (e.g., nutrition, exposure to infection) on adult chronic metabolic and vascular disorders as well as on physical traits and mental characteristics. This leads Finch to suggest “broadening the concept of ‘developmental origins’ to include three groups of factors: nutritional deficits, chronic stress from socioeconomic factors, and direct and indirect damage from infections.”

Finch also considers the effects of inflammation and diet on human evolution, proposing several environmental and foraging factors that may have been important in the genetic basis for evolving lower basal mortality through interactions with chronic inflammation, in particular: dietary fat and caloric content; infections from pathogens ingested from carrion and from exposure to excreta; and noninfectious inflammagens such as those in aerosols and in cooked foods. He hypothesizes that exposure to these proinflammatory factors, which one would expect to shorten life expectancy, actually resulted in humans evolving lower mortality and longer lifespans in response to highly inflammatory environments.

A means for this, he argues, was the development of the apoE4 genotype. Noting that the apoE4 allele favors advantageous fat accumulation and is also associated with enhanced inflammatory responses, Finch argues that heightened inflammatory response and more efficient fat storage would have been adaptive in a pro-inflammatory environment and during times of uncertain nutrition. As has been discussed in prior articles in Cooler Minds Prevail, the apoE alleles also influence diverse chronic non-infectious degenerative diseases and lifespan. “Thus,” Finch concludes, “the apoE allele system has multiple influences relevant to evolution of brain development, metabolic storage, host defense, and longevity.”

With the general relationship between inflammation and the evolution of human aging and life expectancy in mind, let us now consider immune system involvement in more detail, and the relationship between HIV and immunosenescence more specifically.

Immunosenescence refers to the age-associated deterioration of the immune system. As an organism ages it gradually becomes deficient in its ability to respond to infections and experiences a decline in long-term immune memory. This is due to a number of specific biological changes, such as diminished self-renewal capacity of hematopoietic stem cells, a decline in the total number of phagocytes, impairment of Natural Killer (NK) and dendritic cells, and a reduction in the B-cell population. There is also a decline in the production of new naïve lymphocytes and in the functional competence of memory cell populations. As a result, advanced age is associated with increased frequency and severity of pathological health problems as well as increased morbidity due to an impaired ability to respond to infections, diseases, and disorders.

It is not hard to imagine that an increased viral load leading to a chronic inflammatory response may accelerate aging and immunosenescence. Evidence for this has been accumulating rapidly since the advent of antiretroviral therapies for the treatment of HIV infection. An unforeseen consequence of these successful therapies is that HIV patients are living longer, but a striking number of them appear to be getting older faster, in particular showing early signs of dementia usually seen in the elderly. In one study, slightly more than 10% of older patients (average age 56.7 years) with well-controlled HIV infection had cerebrospinal fluid (CSF) marker profiles consistent with Alzheimer disease[2] – more than 10 times the prevalence in the general population at the same age. HIV patients also register higher rates of insulin resistance and cholesterol imbalances, suffer elevated rates of melanoma and kidney cancers, and experience seven times the rate of other non-HIV-related cancers. And ultimately, long-term treated HIV-infected individuals die at an earlier age than HIV-uninfected individuals[3].

Recent research is beginning to explore and unravel the interplay between HIV infection and other environmental factors (such as co-infection with other viruses) in the acceleration of the aging process of the immune system, leading to immunosenescence. In the setting of HIV infection, the immune response is associated with abnormally high levels of activation, leading to a cascade of continued viral spread and cell death, and accelerating the physiologic steps associated with immunosenescence. Despite clear improvements associated with effective antiretroviral therapy, some subjects show persistent alterations in T cell homeostasis, especially constraints on T cell recovery, which are further exacerbated in the setting of co-infection and increasing age.

Unsurprisingly, it has been observed that markers of immunosenescence might predict morbidity and mortality in HIV-infected adults as well as in the general population. In both HIV infection and aging, immunosenescence is marked by an increased proportion of CD28-, CD57+ memory CD8+ T cells with reduced capacity to produce interleukin 2 (IL-2), increased production of interleukin 6 (IL-6), resistance to apoptosis, and shortened telomeres. Levels of markers of inflammation are elevated in HIV-infected patients, and elevations in markers such as high-sensitivity C-reactive protein, D-dimer, and IL-6 have been associated with increased risk of cardiovascular disease, opportunistic conditions, and all-cause mortality[4].

But even as we are beginning to identify markers that appear to be associated with risk of poor outcome in HIV infection, it is still unclear how patients should be treated on the basis of this information. To that end, several trials are underway to evaluate the effects of modulation of immune activation and inflammation in HIV infection. At the same time, clinicians at the forefront of advancing knowledge and clinical care are performing research aimed at optimizing care for aging HIV patients.

The implications for such research may be far-reaching. In fact, many HIV clinicians and researchers think that HIV may be key to understanding aging in general. Dr. Eric Verdin states, “I think in treated, HIV-infected patients the primary driver of disease is immunological. The study of individuals who are HIV-positive is likely to teach us things that are really new and important, not only about HIV infection, but also about normal aging.”

Dr. Steven Deeks stresses the collaborative efforts of experts across fields. “I think there is a high potential for tremendous progress in understanding HIV if we can assemble a team of experts from the world of HIV immunology and the world of gerontology,” he says. “Each field can dramatically inform the other. I believe HIV is a well described, well studied, distinct disease that can be used as a model by the larger community to look at issues of aging.”

References

[1] Finch, C (2012). Evolution of the Human Lifespan, Past, Present, and Future: Phases in the Evolution of Human Life Expectancy in Relation to the Inflammatory Load. Proceedings of the American Philosophical Society, 156:1, 9-44.

[2] Mascolini, M (2013). Over 10% in Older HIV Group Fit Alzheimer’s Biomarker Risk Profile. Conference Reports for NATAP: 20th Conference on Retroviruses and Opportunistic Infections, March 3-6, 2013.

[3] Aberg, JA (2012). Aging, Inflammation, and HIV Infection. Topics in Antiviral Medicine, 20:3, 101-105.

[4] Deeks, S, Verdin, E, and McCune, JM (2012). Immunosenescence and HIV. Current Opinion in Immunology, 24: 1-6.

Originally published as an article (in the Cooler Minds Prevail series) in Cryonics magazine, June, 2013

Deficiencies in the SENS Approach to Rejuvenation

This article was originally published in Cryonics Magazine, 2011 Issue #1

I am an ardent supporter of Dr. Aubrey de Grey and his work to advance rejuvenation science. The man is priceless and unique in his concepts, brilliance, dedication, organizational abilities, and networking skill. His impact on anti-aging science has been powerful. I have attended all four of the conferences he has organized at Cambridge University in England. For the February 2006 issue of LIFE EXTENSION magazine I interviewed Dr. de Grey, and for the December 2007 issue of LIFE EXTENSION I wrote a review of ENDING AGING, the book he co-authored with Michael Rae.

Dr. de Grey asserts that aging is the result of seven kinds of damage – and that technologies that repair all seven types of damage will result in rejuvenation. His seven-fold program for damage repair is called SENS: “Strategies for Engineered Negligible Senescence”. Dr. de Grey asserts that repairing aging damage is a more effective approach than attempting to slow or prevent aging, and I agree with him. Being an ardent supporter of SENS has not stopped me from simultaneously being a critic of aspects of his program that I think are flawed or deficient. I will attempt to outline some of my criticisms in simple language, assuming that my readers have some knowledge of basic science.

Two SENS strategies cannot justly be described as damage-repair, in my opinion. To protect mitochondrial DNA from free radical damage he wants to make copies of mitochondrial DNA in the nucleus – and import the resulting proteins back into the mitochondria. I would call this an attempt to slow or prevent aging – it cannot be called repair.

Similarly, SENS aims to eliminate cancer by deleting the genes that contribute to cancer, specifically the telomerase and ALT (Alternative Lengthening of Telomeres) genes. I am not convinced that this is the best way to eliminate cancer, and I do not believe that deleting cancer-producing genes can properly be called damage-repair.

My criticism of this procrustean attempt to force two strategies into a model purporting to be concerned only with damage and repair is minor, however, compared to a more fundamental concern: that a significant form of aging damage may be ignored by SENS. I have written a review expressing this concern, entitled “Nuclear DNA Damage as a Direct Cause of Aging,” which was published in the June 2009 issue of the peer-reviewed journal Rejuvenation Research, [note 1] a journal of which Dr. de Grey is Editor-in-Chief. A PDF of my review is available in the life extension section of my website BENBEST.COM. Those interested in all the citations for claims I will make in this essay are encouraged to read my review. In this essay, I limit my citations to only a few critical articles.

There are many types of DNA damage, but for the purposes of this essay I will focus on breakage of both DNA strands – resulting in a gap in a chromosome. There are two mechanisms for repairing double-strand DNA breaks: Homologous Recombination (HR) and Non-Homologous End-Joining (NHEJ). HR usually results in perfect repair, but HR can only operate when cells are dividing. NHEJ is the more frequent form of double-strand break repair, but it is error-prone. NHEJ is the only DNA repair mechanism available for non-dividing cells. Even in cells that divide, 75% of double-strand breaks are repaired by NHEJ. [note 2]

It is hard to believe that it could be a coincidence that the most notorious “accelerated aging” diseases are due to defective DNA repair. The two most prominent of these diseases are Werner’s syndrome (“adult progeria”) and Hutchinson-Gilford syndrome (“childhood progeria”), both of which are caused by defective nuclear DNA repair, mainly HR. In both diseases the “aging phenotype” is apparently due to high levels of apoptosis and cellular senescence. Apoptosis (“cell suicide”) and cellular senescence (cessation of cell division) are both mechanisms that are induced in cells experiencing nuclear DNA damage that the cell is unable to repair. It is not surprising that victims suffering massive depletion of properly functioning cells should exhibit “accelerated aging”. Mice that are genetically altered to show increased apoptosis and cellular senescence also show an “accelerated aging phenotype”.

Elimination of senescent cells and stem-cell replenishment of cells depleted in tissues by this elimination – as well as depleted by apoptosis – are part of SENS. But these strategies are only applicable to cells that divide – not to non-dividing cells such as neurons. Cryonicists are acutely aware that organs – and even whole bodies – can be replaced, but brains (neurons, axons, dendrites, and synapses, particularly) must be preserved if we are not to lose memory and personal identity. The ability of future medicine to replace all organs and tissues other than the brain would render most of SENS unnecessary – except for the brain.

There is considerable evidence of a significant role for DNA damage in brain aging. There are nearly twice as many double-strand nuclear DNA breaks in the cerebral cortex of adult (180-day-old) rats as in young (4-day-old) rats – and old (over 780 days) rats have more than twice the double-strand breaks of adult rats. [note 3] Adult rats show a 28% decrease in NHEJ activity in cerebral cortex neurons compared to neonatal rats – and old rats show a 40% decrease. [note 4] Declining NHEJ activity with age is at least partially due to ATP decline and to cellular damage that SENS is intended to fix. But even if NHEJ activity did not decline with age, nuclear DNA damage in neurons would still increase, at least in part because NHEJ is so error-prone.
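
To see qualitatively why error-prone repair matters for non-dividing cells, consider a toy Monte Carlo model of a single neuron accumulating double-strand breaks over a lifetime. This sketch is mine, not from Best’s review: the break rate, the NHEJ error probability, and the rate of capacity decline are all hypothetical parameters, chosen only to illustrate the shape of the argument.

    # Toy model: a non-dividing neuron suffers double-strand breaks (DSBs),
    # each handled by error-prone NHEJ whose capacity declines with age.
    # All rates are hypothetical, for illustration only.
    import random

    def simulate_neuron(years, breaks_per_year=10, nhej_error_rate=0.05,
                        capacity_decline_per_year=0.005, seed=0):
        rng = random.Random(seed)
        misrepaired = unrepaired = 0
        for year in range(years):
            capacity = max(0.0, 1.0 - capacity_decline_per_year * year)
            for _ in range(breaks_per_year):
                if rng.random() < capacity:            # NHEJ attempts repair
                    if rng.random() < nhej_error_rate:
                        misrepaired += 1               # error-prone rejoining
                else:
                    unrepaired += 1                    # capacity exceeded
        return misrepaired, unrepaired

    for age in (10, 40, 80):
        mis, unrep = simulate_neuron(age)
        print(f"age {age}: {mis} misrepaired, {unrep} unrepaired breaks")

Even with full repair capacity at the start, misrepaired breaks accumulate steadily, and any decline in capacity adds a growing pool of frankly unrepaired damage on top; this is the same qualitative pattern the rat data above suggest.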

Nuclear DNA damage typically leads to mutation or DNA repair – or apoptosis or cellular senescence when DNA repair fails (a mechanism that is believed to have evolved for protection against cancer). But not all DNA damage is repaired, and NHEJ repair is often defective. Accumulating DNA damage and mutation can lead to increasingly dysfunctional cells.

Cancer is due to nuclear DNA damage, mutations, and epimutations. Dr. de Grey has written that “only cancer matters” for mutation and epimutation to nuclear DNA. His mutation terminology does not even acknowledge DNA damage. He has assumed that damaged DNA either is or becomes a mutation. He has assumed that DNA damage that does not become a mutation is either repaired – or leads to apoptosis or cellular senescence.

Dr. de Grey has made the claim that evolution has required such strong defenses against cancer that residual mutation (and, implicitly, DNA damage) is negligible. But cancer incidence increases exponentially with age up to age 80, so it is likely that the residual mutation burden increases exponentially with age as well.

As recently as the 1980s it was widely believed that normal aging is associated with extensive neuron loss. Now it is established that functional decline in the aging brain is associated with increased neural dysfunction rather than neurodegeneration. [note 5] This neural dysfunction may or may not be mostly due to cellular damage that SENS is intended to fix – including causes of declining NHEJ activity. How much neuron dysfunction associated with aging is due to accumulating mutations or unrepairable nuclear DNA damage is unknown. SENS assumes without proof that nuclear DNA damage and mutation is negligible as a cause of aging (apart from cancer, apoptosis, and cellular senescence). This may be right or it may be wrong. I believe that without definitive proof, nothing should be assumed, and active investigation to determine the facts should not be neglected.

I believe the situation is not hopeless if nuclear DNA damage proves to be a significant cause of brain aging. Future molecular technologies for detection and repair of nuclear DNA damage could be significantly better than natural DNA repair enzymes. And, to simplify the required effort, the DNA repair technologies could be restricted to genes that are actively transcribed in neurons, rather than needing to repair the whole genome.

Notes

1: Best BP. Nuclear DNA damage as a direct cause of aging. Rejuvenation Res. 2009 Jun;12(3):199-208.

2: Mao Z, Bozzella M, Seluanov A, Gorbunova V. Comparison of nonhomologous end joining and homologous recombination in human cells. DNA Repair (Amst). 2008 Oct 1;7(10):1765-71.

3: Mandavilli BS, Rao KS. Neurons in the cerebral cortex are most susceptible to DNA-damage in aging rat brain. Biochem Mol Biol Int. 1996 Oct;40(3):507-14.

4: Vyjayanti VN, Rao KS. DNA double strand break repair in brain: reduced NHEJ activity in aging rat neurons. Neurosci Lett. 2006 Jan 23;393(1):18-22.

5: Morrison JH, Hof PR. Life and death of neurons in the aging brain. Science. 1997 Oct 17;278(5337):412-9.

Alzheimer Disease in 2020

Any terminal illness is a terrible thing; but to a cryonics member, a brain-destroying neurodegenerative disease is the worst contemporary medical “death sentence” one can receive. There are several flavors of neurodegenerative disorders, many of which primarily affect the patient’s movement, strength, coordination, or peripheral nervous system. And there are numerous contributory mechanisms in the causation of neurodegeneration, including prion infection and toxin-related disease. But the most common – and the most feared – neurodegenerative disease is one that affects not movement, but cognition.

Of course, I am speaking of Alzheimer disease (AD). Originally described in a 51-year-old woman by the Bavarian psychiatrist Alois Alzheimer in 1906, AD has increasingly been recognized by neuropathologists as the most common basis for late-life cognitive failure. Culminating in neuronal dystrophy and death that lead to the progressive loss of memory and other cognitive functions (i.e., dementia), and affecting individuals of both sexes and of all races and ethnic groups at a rate of occurrence in the U.S. ranging from approximately 1.3% (age 65-74) to 45% (age 85-93), it is easy to see why AD has generated so much intense scientific interest in recent years.

In the recently published work “The Biology of Alzheimer Disease” (2012), most of what is known about AD today is described in detail in the various chapters covering topics such as the neuropsychological profile and neuropathological alterations in AD, biomarkers of AD, the biochemistry and cell biology of the various proteins involved in AD, animal models of AD, the role of inflammation in AD, the genetics of AD, and treatment strategies. The editors’ selection of contributions has resulted in the most up-to-date compendium on Alzheimer disease to date.

The book culminates in a chapter called Alzheimer Disease in 2020, where the editors extol “the remarkable advances in unraveling the biological underpinnings of Alzheimer disease…during the last 25 years,” and yet also recognize that “we have made only the smallest of dents in the development of truly disease-modifying treatments.” So what can we reasonably expect over the course of the next 7 years or so? Will we bang our heads against the wall of discovery, or will there be enormous breakthroughs in identification and treatment of AD?

Though a definitive diagnosis of AD is only possible upon postmortem histopathological examination of the brain, a thorough review of the book leads me to believe that the greatest progress currently being made is in developing assays to diagnose AD at earlier stages. It is now known that neuropathological changes associated with AD may begin decades before symptoms manifest. This, coupled with the uncertainty inherent in a clinical diagnosis of AD, has driven a search for diagnostic markers. Two particular approaches have shown the most promise: brain imaging and the identification of fluid biomarkers of AD.

Historically, imaging was used only to exclude potentially surgically treatable causes of cognitive decline. Over the last few decades, imaging has moved from this minor role to a central position of diagnostic value with ever-increasing specificity. The ability to differentiate AD from alternative or contributory pathologies is of significant value now, but the need for an earlier and more certain diagnosis will only increase as disease-modifying therapies are identified. This will be particularly true if these therapies work best (or only) when initiated at the preclinical stage. Improvements in imaging have also greatly increased our understanding of the biology and progression of AD temporally and spatially. Importantly, the clinical correlations of these changes and their relationships to other biomarkers and to prognosis can be studied.

The primary modalities that have contributed to progress in AD imaging are structural magnetic resonance imaging (MRI), functional MRI, fluorodeoxyglucose (FDG) positron emission tomography (PET), and amyloid PET. Structural MRI, which is used to image the structure of the brain, has obvious utility in visualizing the progressive cerebral atrophy characteristic of AD. Such images can be used as a marker of disease progression and as a means of measuring effective treatments (which would slow the rate of atrophy). Functional MRI, on the other hand, measures changes in the blood-oxygen-level-dependent (BOLD) MR signal. This signal, which can be acquired during cognitive tasks, may provide the clinician with a tool to compare brain activity across conditions in order to assess and detect early brain dysfunction related to AD and to monitor therapeutic response over relatively short time periods.

FDG PET primarily indicates brain metabolism and synaptic activity by measuring uptake of the glucose analog fluorodeoxyglucose (which can be detected by PET after labeling with fluorine-18). A large body of FDG-PET work has identified an endophenotype of AD – that is, a signature set of regions that are typically hypometabolic in AD patients. FDG hypometabolism parallels cognitive function along the trajectory of normal, preclinical, prodromal, and established AD. Over the course of three decades of investigation, FDG PET has emerged as a robust marker of brain dysfunction in AD. Imaging of β-amyloid (Aβ) – the peptide that makes up the plaques found in the brains of AD patients – is accomplished via amyloid PET to determine brain Aβ content. Historically, this assessment has only been possible upon postmortem examination, so the utility of amyloid imaging lies in moving it from the pathology laboratory to the clinic. Because amyloid deposition begins early on, however, amyloid PET is not useful as a marker of disease progression.

The well-known hallmarks of AD, the plaques and neurofibrillary tangles first described by Alois Alzheimer in 1906, were discovered in 1985 to be composed primarily of β-amyloid and hyperphosphorylated tau protein, respectively. Advances in our knowledge of Aβ generation and tau protein homeostasis have led to substantial research into disease-modifying drugs aimed at decreasing overall plaque and tangle load in an effort to halt neurodegeneration. Such treatments will likely be most effective if started early in the disease process, making sensitive and accurate fluid biomarkers of Aβ and tau especially important.

Outside of imaging, progress in AD diagnostics stems primarily from the assessment of fluid biomarkers of AD. These biomarkers are generally procured from the cerebrospinal fluid (CSF) and blood plasma and include total tau (T-tau), phosphorylated tau (P-tau), and the 42-amino-acid form of β-amyloid (Aβ42). These core biomarkers reflect AD pathology and have high diagnostic accuracy, which is especially useful in diagnosing AD in prodromal and mild cognitive impairment cases.

Because the CSF is in direct contact with the extracellular space of the brain, biochemical changes in the brain can be detected in the CSF. Assays to detect Aβ42 led to the discovery that Aβ42 in AD is decreased to approximately 50% of control levels, making the measurement of Aβ42 a useful clinical tool. Measurements of T-tau (around 300% of control in AD patients) and P-tau biomarkers (a marked increase in AD patients) in combination with Aβ42, however, provide an even more powerful diagnostic assay.
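
As a purely illustrative sketch of how these three measurements might combine into a single diagnostic signal, consider the following; the cutoffs are invented for illustration, since clinical assays rely on validated, laboratory-specific reference ranges:

    # Hypothetical combination of the three core CSF biomarkers into one flag.
    # Cutoffs are invented for illustration; real assays use validated,
    # laboratory-specific reference ranges.
    def csf_profile_suggests_ad(abeta42_pct_of_control, t_tau_pct_of_control,
                                p_tau_elevated):
        low_abeta42 = abeta42_pct_of_control <= 60   # AD: ~50% of control
        high_t_tau = t_tau_pct_of_control >= 200     # AD: ~300% of control
        return low_abeta42 and high_t_tau and p_tau_elevated

    # Values in the ranges the text describes for AD patients:
    print(csf_profile_suggests_ad(50, 300, True))    # -> True
    print(csf_profile_suggests_ad(95, 110, False))   # -> False (control-like)

Requiring the markers to agree reflects the point above: the combination makes for a more powerful assay than any single measurement alone.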

Fluid biomarkers for AD other than Aβ and tau have been posited, but positive results have been difficult to replicate. Novel biomarkers with the most promise include the amyloid precursor protein fragments sAPPβ and sAPPα, β-site APP cleaving enzyme-1 (BACE1), Aβ oligomers, and other Aβ isoforms. Additionally, neuronal and synaptic proteins as well as various inflammatory molecules and markers of oxidative stress may prove valuable as CSF biomarkers. Studies of plasma biomarkers, such as those investigating plasma Aβ, have yielded contradictory results, but promising novel blood biomarkers for AD may be found in certain signaling and inflammatory proteins.

Taken together, progress in brain imaging and identification of fluid biomarkers hold great promise in improved diagnosis of AD cases. When combined with expected drug therapies we may be able to delay the onset of neurodegeneration and associated cognitive impairment significantly. In the meantime, early diagnosis is helpful in stratifying AD cases, monitoring potential treatments for safety, and monitoring the biochemical effect of drugs. For cryonicists, early diagnosis can help guide treatment and end-of-life care decisions in order to optimize cryopreservation of the brain.

So – back to the original question. What can we predict about the AD landscape in 2020?

Besides continued progress in early diagnosis through brain imaging and fluid biomarkers, the authors anticipate that advances in whole-genome and exome sequencing will lead to a better understanding of all of the genes that contribute to overall genetic risk of AD. Additionally, the next few years should bring an improved ability to detect the proteins that aggregate in AD, to distinguish their different assembly forms, and to correlate the various conformations with cellular, synaptic, and brain-network dysfunction. Lastly, we will continue to improve our understanding of the cell biology of neurodegeneration, of cell-cell interactions, and of inflammation, providing new insights into what is and is not important in AD pathogenesis and how it differs across individuals; this, in turn, will lead to improved clinical trials and treatment strategies.

Originally published as an article (in the Cooler Minds Prevail series) in Cryonics magazine, April, 2013