
Big file of abstract thought – Paper 4

Animal models

The visionary physicians Rudolf Virchow and William Osler advanced the ‘One Health’ concept,
according to which human wellness and animal wellness are linked. In line with this paradigm,
comparative medicine relies
on animal models of disease to enable translational research by providing living systems in which the
effect of discrete molecular manipulations can be examined in complex physiological contexts.
However, even the best models are, by definition, approximations and may vary in their fidelity to
the induction and progression of human disease.
Evolutionary systems biology is an emerging discipline that aims to delineate how evolution jointly
shapes genomes, molecular networks and phenotypes. Humans and mice derive from a common
mammalian ancestor but have evolved independently in distinct biospheres over ~90 million years.
According to a survey of over 7,000 clinical trials performed in the past decade, 90% of treatment
regimens fail to progress from phase I to approval4. We suggest that a number of these failures are
due to an ‘illusion of similarity’ that masks the inability to predict to what extent animal models
inform human medicine. Consider, for example, inflammatory bowel diseases (IBDs), including
Crohn’s disease (CD) and ulcerative colitis (UC), which arise due to exaggerated host responses to
the microbiota in genetically susceptible hosts.
In most mouse models of IBD, disease manifests in the colon and resembles UC more than CD,
which classically affects the ileum. Examination of the inflammation in models of IBD reveals that it
has several similarities to the inflammation of human disease, including both innate immune
responses and adaptive immune responses. For example, expression of the proinflammatory
cytokine TNF is increased in diseased humans and most animal models of IBD, and the use of
antibody to TNF is often an effective therapeutic strategy in both species. However, these similarities
distract from key differences. For example, the fibrotic lesions that affect approximately one third of
human patients affected with CD for over 10 years have been difficult to mimic in animal models.
The molecular distinctions between UC and CD were initially based on cytokine profiles in animal
models, which led to the notion that the TH1 subset of helper T cells is associated with CD and the
TH2 subset is associated with UC. Subsequently, human genome-wide association studies
demonstrated that the immunopathogenesis is more complex. Significant mutations in TH2 cell–related genes are not readily associated with disease in humans, while many other loci, including
genes encoding components of the IL-17 pathway, are linked to both CD and UC5. The gene
encoding IL-17A is conserved between human and rodents, and its expression is increased in the
colon of humans with IBD and in many mouse models of colitis. Some studies have linked IL-17 to
the pathogenesis of colitis in mice, but other reports suggest it is protective6, a paradoxical
observation that perhaps was overlooked until treatment with antibody to IL-17 was shown to
exacerbate IBD in humans6. Thus, an apparent similarity in the expression of a gene in animal
models and human disease provided tragically limited insight into the complexities of biology and
disease.

Aging
Grey hair may sprout when the immune system is activated by infection, a study in mice suggests.
Hair gets its colour from pigment-making cells called melanocytes that are found at the base of each
hair. Scientists know that a protein called MITF controls many of the functions of melanocytes,
including pigment production.
Melissa Harris at the University of Alabama at Birmingham and her colleagues engineered mice
whose melanocyte precursors expressed lower levels of MITF. The altered cells not only caused the
mice to turn grey prematurely, but also activated genes involved in the body’s immune response.
When the researchers artificially stimulated the immune systems of a different line of mice that are
predisposed to go grey, this caused a loss of both melanocyte precursors and melanocytes, giving
the mice more grey fur.
Although it’s not clear how the immune system affects hair colour, the finding might explain why
some people reputedly go grey after viral infections.
Hematopoietic stem cells (HSCs) generate all known hematopoietic lineages and self-renew, but
HSC function declines during aging, which is characterized by impaired lymphopoiesis and
enhanced myelopoiesis. These aging-associated changes correlate with an increasing risk of
myeloproliferative diseases and impaired immune function. Recent studies showed that only a
subpopulation of HSCs retains a high capacity to undergo lymphoid differentiation, and this
subpopulation is lost during aging, contributing to age-related impairments in lymphopoiesis.
On the basis of the observations that DNA damage and telomere dysfunction can impair stem cell
function, it is possible that accumulation of DNA damage results in the loss of subpopulations of
HSCs that are required for the maintenance of lymphopoiesis and immune functions during aging.

- Rougheye rockfish (Sebastes aleutianus) – 205 years
- Olm (Proteus anguinus) – 102 years
- Painted turtle (Chrysemys picta) – 61 years
- Blanding's turtle (Emydoidea blandingii) – 77 years
- Eastern box turtle (Terrapene carolina) – 138 years
- Red sea urchin (Strongylocentrotus franciscanus) – 200 years
- Ocean quahog clam (Arctica islandica) – 507 years
- Great Basin bristlecone pine (Pinus longaeva) – 4,713 years
- Greenland shark (Somniosus microcephalus) – 400 years
Research suggests that lobsters may not slow down, weaken, or lose fertility with age, and that older
lobsters may be more fertile than younger lobsters. This does not however make them immortal in
the traditional sense, as they are significantly more likely to die at a shell moult the older they get (as
detailed below).
Their longevity may be due to telomerase, an enzyme that repairs long repetitive sections of DNA
sequences at the ends of chromosomes, referred to as telomeres. Telomerase is expressed by most
vertebrates during embryonic stages but is generally absent from adult stages of life. However,
unlike vertebrates, lobsters express telomerase as adults throughout most tissues, which has been
suggested to be related to their longevity. Contrary to popular belief, lobsters are not immortal.
Lobsters grow by moulting which requires a lot of energy, and the larger the shell the more energy is
required. Eventually, the lobster will die from exhaustion during a moult. Older lobsters are also
known to stop moulting, which means that the shell will eventually become damaged, infected, or
fall apart, and they die. The European lobster has an average life span of 31 years for males and 54
years for females.
Although the premise that biological aging can be halted or reversed by foreseeable technology
remains controversial,[27] research into developing possible therapeutic interventions is
underway.[28] Among the principal drivers of international collaboration in such research is the
SENS Research Foundation, a non-profit organization that advocates a number of what it claims are
plausible research pathways that might lead to engineered negligible senescence in humans.[29][30]
In 2015, Elizabeth Parrish, CEO of BioViva, treated herself using gene therapy, with the goal of not
just halting but reversing aging.[31] She has since reported feeling more energetic, but long-term
study of the treatment is ongoing.
For several decades,[32] researchers have also pursued various forms of suspended animation as a
means by which to indefinitely extend mammalian lifespan. Some scientists have voiced
support[33] for the feasibility of the cryopreservation of humans, known as cryonics. Cryonics is
predicated on the concept that some people considered clinically dead by today's medicolegal
standards are not actually dead according to information-theoretic death and can, in principle, be
resuscitated given sufficient technological advances.[34] The goal of current cryonics procedures is
tissue vitrification, a technique first used to reversibly cryopreserve a viable whole organ in
2005.[35][36]

Similar proposals involving suspended animation include chemical brain preservation. The non-profit
Brain Preservation Foundation offers a cash prize valued at over $100,000 for demonstrations of
techniques that would allow for high-fidelity, long-term storage of a mammalian brain.[37]
In 2016, scientists at the Buck Institute for Research on Aging and the Mayo Clinic employed genetic
and pharmacological approaches to ablate pro-aging senescent cells, extending healthy lifespan of
mice by over 25%. The startup Unity Biotechnology is further developing this strategy in human
clinical trials.[38]
In early 2017, Harvard scientists headed by biologist David Sinclair announced that they had tested
a compound that raises NAD+ levels in mice, successfully reversing aspects of the cellular aging
process and protecting DNA from future damage.[39] "The old mouse and young mouse cells are
indistinguishable," Sinclair was quoted as saying. Human trials were expected to begin within about
six months at Brigham and Women's Hospital in Boston.
One of the most recognized consequences of aging is a decline in immune function. While elderly
individuals are by no means immunodeficient, they often do not respond efficiently to novel or
previously encountered antigens. This is illustrated by increased vulnerability of individuals 70 years
of age and older to influenza (1), a situation that is exacerbated by their poor response to
vaccination (2–4).
Although the number of naive B and T cells that migrate from primary to secondary lymphoid organs
is reduced by aging, B and T cell development does not cease entirely. Indeed, some functional
thymic tissue remains even in elderly humans (44). The continued production of lymphocytes, albeit
limited, and the presence of relatively normal numbers of lymphocytes in organs such as the spleen
raise the question of why functional immunity declines in the elderly. The answer is that the
composition and quality of the mature lymphocyte pool is profoundly altered by aging.
For example, an increase in the number of memory T cells is now a well-recognized feature of aging.
These cells, which are generated following the initial encounter with antigen, persist long after the
initial challenge has cleared and provide a source of effectors that can respond rapidly upon antigen
re-exposure. Exposure to multiple pathogens over time results in a diverse immune repertoire that
includes an increased pool of protective memory cells. However, chronic stimulation with persistent
viral infections such as CMV can exhaust the naive pool of cells and result in an oligoclonal memory
cell expansion. This phenomenon is thought to be a major factor contributing to the accumulation of
CD8+ memory cells in the elderly (45), although antigen-independent expansion of CD8+ T cells may
also occur (46). These oligoclonal expanded CD8+ memory T cells are further distinguished in humans
by loss of expression of the CD28 co-stimulatory molecule (47, 48) and impaired immune function.
The accumulation of CD8+CD28– T cells and CMV seropositivity are components of the immune risk
profile, which has been proposed as a predictor of mortality in individuals 65 and older (49), but
precisely how the combination of these events results in increased death is not clear (50).
The immune risk profile also includes B cells with impaired function. B cell number in mice is
unchanged by aging, but in human peripheral blood their absolute number is reduced (51). This
decline is likely due to decreased numbers of IgM+ memory and switched memory B cells, because
the total number of naive B cells remains unchanged by aging (51, 52). Human and murine B cells
also exhibit impaired class switch recombination, which has been attributed to decreased induction
of activation-induced cytidine deaminase (AID) enzyme (53).
Inflammaging (64), a condition in which there is an accumulation of inflammatory mediators in
tissues, has been associated with aging. The source of these inflammatory factors has been
proposed to be cells that have acquired a senescence-associated secretory phenotype (SASP) (65).
The SASP could be acquired by cells once they have aged, or it may occur gradually in various
populations over time as they acquire DNA lesions that in turn trigger the increased production of
inflammatory mediators such as IL-6 (66, 67). Regardless of how the shift occurs from a salutary to
an inflammatory milieu, the end result may be a negative effect on the ability of naive lymphocytes
from the bone marrow or thymus to lodge in an organ such as the spleen, as well as on the function
of mature lymphocytes already resident in that tissue. In the latter case, for example, CD8+ T cells that
accumulate in aged individuals proliferate poorly (68, 69), and, as discussed in more detail in the
following section, CD4+ T cells exhibit reduced T cell receptor signaling intensity and altered
production of various cytokines following antigen binding (70, 71).

Immunometabolism

Failures in Cancer Research


The World Health Organization estimates that worldwide, new cancer cases will rise by 70% in the
next two decades. In concert, treatment costs are skyrocketing and could reach US$156 billion by
2020 in the United States alone, according to the US National Cancer Institute (NCI). A modest decline in US
cancer mortality rates has been attributed to prevention, such as lower smoking rates, rather than
better treatment. Yet, more than 150,000 papers on cancer have been published each year since
2013.
This month, application deadlines closed for several programmes in the US$1.8-billion Cancer
Moonshot authorized by the US Congress in 2016. The extra funds to study cancer are badly needed,
but we do not have a sufficient fundamental understanding of the disease for these investments to
make a near-term difference in treatment.
Comparison of the cancer initiative to former US president John F. Kennedy’s lunar challenge is
misleading. When, in 1961, Kennedy declared the goal of landing on the Moon, we understood
gravity well enough to be reasonably confident that if we built rockets powerful enough, we could
do it. We could predict distant planetary orbits with startling precision. Getting an astronaut to a
nearby satellite was an engineering feat. No new basic principles needed to be discovered.
This is not true for cancer. The deepest puzzle we must solve is how groups of cells behave, which
networking theories developed in the physical sciences are well equipped to address. Cancer can
move from a localized tumour to remote locations — a process called metastasis. Once that
happens, individuals with cancer have a poor prognosis. Metastasis drives the costs of treatment
skyward, but these therapies are, tragically, largely futile. Without a better way to explain and treat
metastases, new clinical methods will do little to improve the situation.
New anticancer agents are approved by regulatory agencies on the basis of proven efficacy and
safety, which usually (but not always) need to be demonstrated in phase III randomized controlled
trials (RCT; ref. 1). It is estimated that current development of a single new drug costs
pharmaceutical companies up to $2.6 billion (2). Before a particular new anticancer agent enters
evaluation in an RCT, its mechanism of action, antitumor activity, and safety should be demonstrated
in preclinical models and in early-phase clinical trials. The overall failure rate of oncologic RCTs is
about 60% and is higher than in other medical disciplines (3).

Nobel Prize Winners – Importance of History


Pandemics
In February 2003, during a family visit to mainland China, a young girl from Hong Kong died of an
unidentified respiratory illness. After returning to Hong Kong, both her father and brother were
hospitalized with severe respiratory disease, which proved fatal to the father. When H5N1 (avian)
influenza virus was isolated from both patients, the World Health Organization (WHO) went to
pandemic alert status (1). At about the same time, there were rumors of rampant influenza-like
disease in China. Influenza experts feared that H5N1 influenza virus had acquired the ominous
capacity to pass from human to human. That outbreak is now known to have been SARS, caused by a
novel coronavirus.
In March 2003, another alarming situation arose on the other side of the world. A highly pathogenic
H7N7 avian influenza outbreak had recently erupted in the poultry industry of the Netherlands (2),
and workers involved in the slaughter of infected flocks contracted viral conjunctivitis. The H7N7
virus isolated from these patients had several disquieting features: Not only could it replicate in the
human conjunctiva, but there was also evidence of human-to-human spread. Nearby herds of swine
(which are often implicated in the adaptation of influenza viruses to humans) also showed serologic
evidence of exposure (2). When a veterinarian died of respiratory infection (2–5), WHO again
acknowledged the presence of a severe threat (6).
Luckily, the worst-case scenarios did not come about in either of the 2003 avian influenza virus
scares.

 A major challenge in controlling influenza is the sheer magnitude of the animal reservoirs. It is not
logistically possible to prepare reagents and vaccines against all strains of influenza encountered in
animal reservoirs, and therefore, virus subtypes must be prioritized for pandemic vaccine and
reagent preparation. Preliminary findings have identified the H2, H5, H6, H7, and H9 subtypes of
influenza A as those most likely to be transmitted to humans. [Influenza viruses are typed according
to their hemagglutinin (H) and neuraminidase (N) surface glycoproteins.] The influenza A subtypes
currently circulating in humans, H1 and H3, continue to experience antigenic drift. That is, their
antigenic surface glycoproteins are continually modified, allowing them to escape the population's
immunity to the previous strain and thus to continue causing annual outbreaks.
H2 influenza viruses are included in the high-risk category because they were the causative agent of
the 1957 “Asian flu” pandemic and were the only influenza A subtype circulating in humans between
1957 and 1968. Counterparts of the 1957 H2N2 pandemic virus continue to circulate in wild and
domestic duck reservoirs. Under the right conditions (which are still not completely understood),
H2N2 viruses could again be transmitted to and spread among humans; no one now under the
age of 30 years has immunity to this virus.
The H5 subtype has threatened to emerge as a human pandemic pathogen since 1997, when it killed 6 of
18 infected humans. Before that event, the receptor specificity of avian influenza viruses was
thought to prevent their direct transmission to humans. Transmission from aquatic birds to humans
was hypothesized to require infection of an intermediate host, such as the pig, that has both human-
specific (α2-6 sialic acid) and avian-specific (α2-3 sialic acid) receptors on its respiratory epithelium.
The 1997 H5N1 event demonstrated that domestic poultry species may also act as intermediate
hosts. H5N1 viruses continue to emerge and evolve despite heroic measures taken to break their
evolutionary cycle in the live poultry markets of Hong Kong: the elimination of live ducks and geese
(the original source), the elimination of quail (the source of the internal genes of H5N1/97), and the
institution of monthly “clean days,” when all 1000-plus retail markets are emptied and cleaned.
The third virus subtype on the most wanted list is H7. The H7 and H5 viruses have a unique ability to
evolve into a form highly virulent to chickens and turkeys by acquiring additional amino acids at the
hemagglutinin (HA) cleavage site (HA cleavage is required for viral infectivity) (13). The highly
pathogenic H7N7 influenza viruses that were lethal to poultry infected the eyes of more than 80
humans and killed one person.

Since the 1970s, influenza vaccines have been made by exploiting the tendency of the segmented
influenza genome to reassort (20). This natural process has been used to produce vaccine strains
that simultaneously contain gene segments that allow them to grow well in eggs and gene segments
that produce the desired antigenicity. Natural reassortment is allowed to occur in embryonated
chicken eggs, and reassortants with the desired characteristics are selected. These recombinant
vaccine strains contain the hemagglutinin and neuraminidase genes of the target virus (encoding
glycoproteins that induce neutralizing antibodies); their remaining six gene segments come from
A/Puerto Rico/8/34 (H1N1), which replicates well in eggs and is safe for use in humans (21). These
“6+2” reassortants are then grown in large quantities in embryonated chicken eggs, inactivated,
disrupted into subunits, and formulated for use as vaccines. Although this process creates an
effective and safe influenza vaccine, it is too time-consuming and too dependent on a steady supply
of eggs to be reliable in the face of a pandemic emergency.

It is known that physical linkage of TLR ligands and vaccine antigens significantly enhances the
immunopotency of the linked antigens. We have used this approach to generate novel influenza
vaccines that fuse the globular head domain of the protective hemagglutinin (HA) antigen with the
potent TLR5 ligand, flagellin. These fusion proteins are efficiently expressed in standard E.
coli fermentation systems and the HA moiety can be faithfully refolded to take on the native
conformation of the globular head. In mouse models of influenza infection, the vaccines elicit robust
antibody responses that mitigate disease and protect mice from lethal challenge. These
immunologically potent vaccines can be efficiently manufactured to support pandemic response,
pre-pandemic and seasonal vaccines.
Oxford Economics has suggested that the cost of a global pandemic, including spillover across
industry sectors, could be as great as $3.5tn – an impact far greater than the magnitude of the great
financial crisis of 2008.

Publication Pressure
Two research prizes signal a shifting culture. One, announced earlier this month by the European
College of Neuropsychopharmacology, offers a €10,000 (US$11,800) award for negative results in
preclinical neuroscience: careful experiments that do not confirm an accepted hypothesis or
previous result. The other, from the international Organization for Human Brain Mapping, is entering
its second year. It awards US$2,000 for the best replication study — successful or not — with
implications for human neuroimaging. Winners, to be announced next year, are chosen for both the
quality of the study and the importance of the finding being scrutinized.
The sorts of information most likely to stay in the shadows come from the negative results and
replication studies that these two prizes put into the limelight. Indeed, in many fields, independent
replication can be an advance in itself. Biomarkers and drugs, for example, must be tested in
different patient populations from those of the initial studies to show that they work broadly and
reliably — or, more importantly, that they don’t. Working out why can inform clinical approaches
and elucidate the underlying biology.
Then there is all the time wasted when many scientists attempt the same thing. A researcher might
not need to explore a particular hypothesis if others have spent months carefully doing so. But in
today’s science system, those who toil but find no evidence for a hypothesis, or find similar evidence
to others, have scant means or reason to publicize their efforts.

What these prizes attempt to do is counteract risks (including the risk of wasted effort and potential
backlash from original researchers) and boost rewards for negative results and replications. They
provide a line on winners’ CVs that committees can see and value. The hope is that such recognition
will encourage researchers to present work that would otherwise languish on their hard drives.

Numerous biases are believed to affect the scientific literature, but their actual prevalence across
disciplines is unknown. To gain a comprehensive picture of the potential imprint of bias in science,
we probed for the most commonly postulated bias-related patterns and risk factors, in a large
random sample of meta-analyses taken from all disciplines. The magnitude of these biases varied
widely across fields and was overall relatively small. However, we consistently observed a significant
risk of small, early, and highly cited studies to overestimate effects and of studies not published in
peer-reviewed journals to underestimate them. We also found at least partial confirmation of
previous evidence suggesting that US studies and early studies might report more extreme effects,
although these effects were smaller and more heterogeneously distributed across meta-analyses
and disciplines. Authors publishing at high rates and receiving many citations were, overall, not at
greater risk of bias. However, effect sizes were likely to be overestimated by early-career
researchers, those working in small or long-distance collaborations, and those responsible for
scientific misconduct, supporting hypotheses that connect bias to situational factors, lack of mutual
control, and individual integrity. Some of these patterns and risk factors might have modestly
increased in intensity over time, particularly in the social sciences. Our findings suggest that, besides
routine caution that published small, highly cited, and earlier studies may yield inflated results,
the feasibility and costs of interventions to attenuate biases in the literature might need to
be discussed on a discipline-specific and topic-specific basis.

The growing competition and “publish or perish” culture in academia might conflict with the
objectivity and integrity of research, because it forces scientists to produce “publishable” results at
all costs. Papers are less likely to be published and to be cited if they report “negative” results
(results that fail to support the tested hypothesis). Therefore, if publication pressures increase
scientific bias, the frequency of “positive” results in the literature should be higher in the more
competitive and “productive” academic environments. This study verified this hypothesis by
measuring the frequency of positive results in a large random sample of papers with a corresponding
author based in the US. Across all disciplines, papers were more likely to support a tested hypothesis
if their corresponding authors were working in states that, according to NSF data, produced more
academic papers per capita. The size of this effect increased when controlling for state's per capita
R&D expenditure and for study characteristics that previous research showed to correlate with the
frequency of positive results, including discipline and methodology. Although the confounding effect
of institutions' prestige could not be excluded (researchers in the more productive universities could
be the most clever and successful in their experiments), these results support the hypothesis that
competitive academic environments increase not only scientists' productivity but also their bias. The
same phenomenon might be observed in other countries where academic competition and
pressures to publish are high.

Basic Research
The importance of basic research has been highlighted this year by Yoshinori Ohsumi receiving the
Nobel Prize in Physiology or Medicine for his work on the process of autophagy. Autophagy—literally
“self-eating”—is a fundamental cellular process that degrades and recycles cellular components.
During autophagy, fatty vesicles called autophagosomes form around internal components of a cell
and then fuse with a lysosome, an acidic cellular compartment that breaks down its contents (the
fused structure is called an autolysosome). Malfunctioning autophagy has been found to play roles in
many conditions such as Parkinson’s, cancer, type II diabetes and metabolic disorders. Autophagy
was first identified in the 1960s, but the mechanisms and key genes controlling it remained
nebulous for another 30 years. It wasn’t until the 1990s that Ohsumi and colleagues performed
“mutagenesis screens” in baker’s yeast that uncovered essential components of this process.
Mutagenesis screens are used to discover genes involved in different biological processes: First,
researchers induce random mutations in a model organism, then they look for individuals that are
defective in the process they’re interested in, and finally, they identify the genes that were mutated
in the affected individuals. These genes are likely to be important for the process under study, and
this is exactly what Ohsumi did to elucidate the process of autophagy.
Dr. Katherine Rogers’ commentary on this award speaks to the impact of mutagenesis screens on
our understanding of biology and how basic research can have broad applications. Dr. Rogers
explains that mutagenesis screens have been crucial in identifying the key genes that regulate many
different biological processes, which is an important first step in understanding how these processes
work. In turn, improved understanding of basic biological processes often has far-reaching and
unpredictable implications.
Concerning work in basic research, Dr. Rogers says that “great progress in applied science often
comes from people studying relatively obscure things that don’t have evident intrinsic value to
medicine/technology/etc. (GFP, PCR, lasers, and CRISPR/Cas9 all came out of basic research where
nobody was trying to cure a disease).” These technologies have been developed as powerful tools
that have changed science, medicine, agriculture, and crime scene investigations, to name a few.
Ultimately, Ohsumi’s work uncovered how the proteins involved in autophagy work together to
create a cellular recycling and degradation system. This work, along with countless others, speaks to
the importance of basic research and how it shapes our world.

Vaccine Uptake
Seasonal-flu vaccines can prevent millions of infections even if they don’t closely match the strains of
virus in circulation.
Flu vaccines are reformulated each year in an attempt to keep up with the fast-evolving influenza
virus. The 2017–18 vaccine was a particularly poor match to circulating flu strains, and early
evidence suggests that this led to a spike in hospitalizations and deaths.
Burton Singer at the University of Florida in Gainesville and his colleagues developed a model to
predict the impact of low-efficacy flu vaccines. They found that if 43% of US residents received a
vaccine that protected only 20% of recipients, it could still avert some 21 million infections, nearly
130,000 hospitalizations and almost 62,000 deaths.
Vaccinating large numbers of school-age children achieves the most benefit from low-efficacy
vaccines. But the model suggested that elderly individuals should also be a priority when the vaccine
is particularly ineffective.

A study in Health Affairs covering 73 Gavi-supported countries over the 2011–2020 period shows
that, for every US$1 spent on immunisation, US$18 are saved in healthcare costs, lost wages and
lost productivity due to illness. If we take into account the broader benefits of people living longer,
healthier lives, the return on investment rises to US$48 per US$1 spent.
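The return-on-investment figures above reduce to simple multiplication; a minimal sketch follows, where the spend amount is a hypothetical example rather than a number from the study.

```python
# Sketch of the Health Affairs ROI figures: US$18 saved per US$1 spent
# in narrow terms, US$48 when broader benefits are included.
NARROW_ROI = 18   # healthcare costs, lost wages, lost productivity
BROAD_ROI = 48    # plus the value of longer, healthier lives

def immunisation_savings(spend_usd, roi):
    """Total savings implied by a given spend at a given ROI multiple."""
    return spend_usd * roi

spend = 1_000_000  # hypothetical US$1 million immunisation budget
print(immunisation_savings(spend, NARROW_ROI))  # 18000000
print(immunisation_savings(spend, BROAD_ROI))   # 48000000
```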

Every year, the US spends 47 billion dollars keeping a fleet of nuclear submarines permanently
patrolling the oceans to protect us from a threat that almost certainly will never happen. And yet,
we spend virtually nothing to prevent something as tangible and evolutionarily certain as epidemic
infectious diseases – the WHO's epidemic preparedness budget is $394 million. And make no mistake
about it -- it's not a question of "if," but "when." These bugs are going to continue to evolve and
they're going to threaten the world. And vaccines are our best bet.

Today, about 80% of infants living in the world’s 73 poorest countries receive routine immunizations,
a measure currently assessed by whether they have been given a full course of a vaccine regime to
prevent diphtheria, pertussis and tetanus. In 2000, only about 60% received such protection. That
progress is great, but achieving 100% coverage will require better insight into which children are
missing out.
Putting the child at the centre of tracking efforts is not as simple as it sounds. Tens of millions of
children have no formal record of their existence — especially those living in remote, impoverished
or vulnerable communities. This global identity crisis is so important that it has its own indicator
(number 16.9) under the United Nations’ Sustainable Development Goals (SDGs) intended to ensure
that everyone has a legal identity by 2030.

Introduction: immunization is a strong pillar of community health. Attainment of the desired immunization coverage always depends on a range of determinants, normally grouped into three broad categories: immunization-system-based, client-based and service-provider-based. The objective of this study is to explore the determinants of immunization services uptake in developing countries. This study reports the magnitude of system-based, provider-based and client-based determinants of immunization uptake in developing countries.
 
Methods: a systematic documentary review was the method used for this study. Literature searches were made using Research4Life, HINARI and other online publication sources to identify relevant research articles. Twenty-six articles were reviewed.
 
Results: seventeen key determinants were identified, with frequencies in brackets: caregivers' social status (25); caregivers' knowledge on immunization (22); access to immunization services and information (20); health workers' knowledge, attitude and practice (12); social influence and support (11); quality of immunization services (10); alternative strategies for hard-to-reach populations (9); caregivers' perceptions about immunization (7); gender (7); and caregivers' beliefs and attitude towards immunization (6). Overall, 62.3% of the key determinants were client-based; 29.5% were immunization-system-based; 8.2% were provider-based.
 
Conclusion: the majority of immunization services uptake determinants are client-based. Therefore, immunization interventions in developing countries should focus mainly on social and behaviour change communication.
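A sketch of how category shares like the reported 62.3/29.5/8.2% split can be computed from per-determinant frequency counts. The determinants, frequencies and category assignments below are a hypothetical subset for illustration, not the study's actual coding.

```python
from collections import defaultdict

# Hypothetical (frequency, category) coding of a few determinants --
# the actual coding of all 17 determinants is in the reviewed study.
determinants = {
    "caregivers' social status": (25, "client"),
    "caregivers' knowledge on immunization": (22, "client"),
    "access to services and information": (20, "system"),
    "health workers' knowledge/attitude/practice": (12, "provider"),
    "quality of immunization services": (10, "system"),
}

# Sum frequencies per category, then express each as a share of the total.
totals = defaultdict(int)
for freq, category in determinants.values():
    totals[category] += freq

grand_total = sum(totals.values())
for category, freq in totals.items():
    print(f"{category}-based: {100 * freq / grand_total:.1f}%")
```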

Neglected Tropical Diseases


More than 1 billion people—one-sixth of the world's population—suffer from one or more
Neglected Tropical Diseases (NTDs).
NTDs are a group of infectious diseases that are the source of tremendous suffering because of their
disfiguring, debilitating, and sometimes deadly impact. They are called neglected because they have
been largely wiped out in the more developed parts of the world and persist only in the poorest,
most marginalized communities and conflict areas.
Social stigma is a major consequence of NTDs. In addition to causing physical and emotional
suffering, these devastating diseases hamper a person's ability to work, keep children out of school,
and prevent families and communities from thriving.
General Fast Facts

 One hundred percent of low-income countries are affected by at least five neglected tropical
diseases simultaneously
 Worldwide, 149 countries and territories are affected by at least one neglected tropical disease
(NTD)
 NTDs are a major cause of disease burden, resulting in approximately 57 million years of life lost
due to premature disability and death
 Individuals are often afflicted with more than one parasite or infection
 Treatment cost for most NTD mass drug administration programs is estimated at less than US
fifty cents per person per year

Guinea Worm Disease

 Since the Guinea Worm Eradication Program began, the annual number of cases of Guinea
worm disease has fallen from an estimated 3.5 million in 20 countries* in 1986 to 25 cases in
three countries in 2016, a decrease of more than 99%. In 2016, only three countries had ongoing
transmission of Guinea worm disease: Chad, Ethiopia and South Sudan.

* On July 9, 2011, the Republic of South Sudan was formed.  Prior to this date, South Sudan was
part of Sudan and GWD cases in South Sudan were reported under Sudan.  Therefore, the GWEP
began with 20 Guinea worm-endemic countries, not 21.
Lymphatic filariasis (LF)

 The Global Programme to Eliminate Lymphatic Filariasis has called for the elimination of LF by
2020
 In August 2007, the World Health Organization certified that the People's Republic of China was
the first endemic country to have successfully eliminated lymphatic filariasis as a public health
problem, followed by the Republic of Korea in March 2008
 Interruption of LF transmission has been successful in Costa Rica, Suriname, and Trinidad and
Tobago
 Within the first eight years of the worldwide elimination program, 1.9 billion treatments for LF
were delivered to more than 570 million people in 48 countries
 6.6 million newborns are now protected from becoming infected with LF
 The economic benefit of the first seven years of the program is estimated at US $24 billion. The
full economic benefit could exceed US $55 billion

Onchocerciasis

 Onchocerciasis, and the debilitating blindness it causes, has been eliminated as a public health
problem from ten West African countries
 Onchocerciasis has been eliminated in 11 of the 13 major areas where the infection was being
transmitted in the Americas

Schistosomiasis

 700 million people are at risk of acquiring schistosomiasis
 207 million people suffer from schistosomiasis
 Approximately 90% of those infected with schistosomiasis live in Africa
 From 2006 to 2009, the number of people treated for schistosomiasis in Africa doubled.
However, less than 7% of the affected population was treated
 Treatment for schistosomiasis is ongoing for 27 million school children in Africa

Trachoma

 Trachoma is endemic in 57 countries, with 40 million people in need of treatment
 Blinding trachoma has been eliminated from Iran, Mexico, Morocco, Oman and the United
States; additional countries are expected to eliminate this devastating disease

Obesity
Obesity and metabolic syndrome are significant public health concerns because of their high global
prevalence and association with an increased risk for developing chronic diseases (1–3). The
prevalence of obesity has increased over the past few decades. More than one-third of adults and
17% of children and adolescents in the United States are now obese (4). Obesity has been deemed
the leading cause of preventable death (5) and has become a global economic and health burden
(6,7).
Obesity is the result of a disruption of energy balance that leads to weight gain and metabolic disturbances that cause tissue stress and dysfunction (8). Clinical manifestations of these underlying disturbances often present as the parameters of metabolic syndrome (MetS), a condition characterized by a clustering of 3 or more of the following components: central adiposity, elevated blood glucose, elevated plasma TGs, elevated blood pressure, and low plasma HDL-cholesterol (2). In addition to these
qualifying parameters, obesity and MetS are associated with endothelial dysfunction, atherogenic
dyslipidemia, insulin resistance, and chronic low-grade inflammation (9). In line with national obesity
trends in the United States, it has been estimated that ~34% of adults have MetS (10,11). The high
prevalence of MetS is significant, as classification with MetS increases an individual's risk of
cardiovascular disease and type 2 diabetes mellitus by 2- and 5-fold, respectively (2).
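The "3 or more of 5 components" clustering rule can be sketched as a simple count. The cut-off values below approximate commonly used criteria and are illustrative assumptions, not an authoritative clinical definition.

```python
# Sketch of the MetS "3 or more of 5 components" rule (ref. 2 in the
# text). Cut-off values are illustrative approximations only.

def metabolic_syndrome_components(waist_cm, glucose_mg_dl, tg_mg_dl,
                                  systolic_bp, hdl_mg_dl):
    """Count how many of the five MetS components are present."""
    components = [
        waist_cm > 102,        # central adiposity (assumed male cut-off)
        glucose_mg_dl >= 100,  # elevated fasting blood glucose
        tg_mg_dl >= 150,       # elevated plasma triglycerides
        systolic_bp >= 130,    # elevated blood pressure
        hdl_mg_dl < 40,        # low HDL-cholesterol (assumed male cut-off)
    ]
    return sum(components)

def has_mets(*measurements):
    """MetS is classified when 3 or more components cluster."""
    return metabolic_syndrome_components(*measurements) >= 3

print(has_mets(110, 105, 160, 125, 45))  # 3 of 5 components -> True
```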
Researchers have elucidated an important role for immune cells in the physiological dysfunction
associated with obesity and MetS, in addition to the pathogenesis and development of subsequent
chronic diseases (8,12). Metabolic disturbances lead to immune activation in tissues such as adipose
tissue, liver, pancreas, and the vasculature, and individuals often present with elevated plasma
markers of chronic low-grade inflammation (8,13–15). In addition to immune cells playing a role in
the perpetuation of chronic disease, it has further been established that obesity negatively affects
immunity, as evidenced by higher rates of vaccine failure and complications from infection (16,17).
The detrimental effects of obesity on immunity are associated with alterations in lymphoid tissue
architecture and integrity and shifts in leukocyte populations and inflammatory phenotypes
(12,18,19). These effects may not only complicate and further perpetuate immune-mediated
metabolic dysfunction and disease risk, but may also increase the risk for other infectious and
chronic diseases (13,17,20,21). An overview of the relation between obesity, metabolic syndrome,
and immunity is depicted in Figure 1. Because the role of immune cells in the pathogenesis of
metabolic disease has been extensively studied, this review focuses on the effects of obesity and
MetS parameters on lymphoid tissues, the distribution of leukocyte subsets and phenotypes, and
immunity against foreign pathogens.
Gene Therapy

The importance of Academic and Industrial Research


There are many advantages of collaboration, for both industry and academia. For academics, these include career opportunities, research funding, awareness of industry trends, and inspiration from application-derived discussions. For industry, these involve access to extended networks, thinking
outside the box, training, ability to find new talent to hire and access to specialised, world-leading
resources. Making contacts and exchange of knowledge are just some of the advantages for both
partners.
Collaborations can also have important potential societal and economic benefits. Progress in drug discovery is slow due to the high costs of researching and developing new drugs, as well as the subsequent extensive clinical trials. Greater research collaboration, and funding that facilitates it, can enable quicker discovery of novel drugs. Partnerships can also create jobs for consultants and lab technicians, as well as PhD studentships and many other roles.
University research plays an important role in industrial innovation (Cohen et al., 2002; Mansfield,
1991; Salter and Martin, 2001). A considerable body of research has investigated the mechanisms by
which this occurs, notably transfer of intellectual property (IP) and academic entrepreneurship (Phan
and Siegel, 2006; Rothaermel et al., 2007). Researchers have also analysed the impact of industry
involvement on universities. While some emphasize the academic benefits of industrial involvement
for universities, others fear that growing involvement might have detrimental effects on core academic activities (Feller, 2005; Krimsky, 2003; Slaughter and Leslie, 1997). In light of the current
trend to promote faculty engagement with industry (Mowery and Sampat, 2004), this issue is of
considerable significance for science and technology policy. Of particular interest is how interaction
with industry affects the development of the body of open science. If increasing industry involvement were found to be detrimental to the accumulation of openly accessible knowledge,
policies aimed at promoting it would risk sacrificing the long-term benefits of scientific inquiry for
short-term industrial benefits (Dosi et al., 2006; Pavitt, 2001).
Previous research has investigated this question by assessing faculty-industry involvement primarily
using measures such as patenting, licensing or participation in spin-off companies. While valuable in
its own right, this research does not tell us how different ways of interacting with industry affect the
research output of academics. This aspect would seem important in light of recent evidence on the
multi-channel nature of university-industry relationships (Perkmann and Walsh, 2007). Collaborative
forms of interaction, such as collaborative research, contract research and consulting, are seen by
industry as more important and valuable than IP transfer, such as licensing (Cohen et al., 2002;
Faulkner and Senker, 1994; Meyer-Krahmer and Schmoch, 1998). Similarly, collaborative forms of industry engagement are more widespread among academics than patenting and academic entrepreneurship (D'Este and Patel, 2007).
In this article, we investigate how collaborative university-industry interactions impact on academic
research. We deploy an inductive, qualitative research approach because the primary purpose of our
analysis is to understand the effects of industry involvement in different circumstances while
retaining a relative openness towards possible results. Specifically, we are able to consider both the
indirect and the direct effects of industry engagement on academic publishing.
Our findings indicate that joint research with industry often results in academic publications while
this is less true for relationships with more applied objectives, such as contract research and
consulting. However, the latter relationships tend to involve far closer collaboration between
academic researchers and industry partners. Close collaboration facilitates interactive learning which
in turn indirectly benefits scientific production by generating new ideas and motivating new research
projects. Conceptually, our learning-centred interpretation of university-industry relations questions
the ‘convergence’ between academic and industrial worlds hypothesized in the recent literature
(Owen-Smith, 2003). Convergence is implicit in the scenarios of ‘commercialization’ where
academics are seen as economic entrepreneurs (Etzkowitz, 2003), as well as ‘manipulation’ where
the academic system is portrayed as being captured by corporate interests (Noble, 1977; Slaughter
and Leslie, 1997). By contrast, our analysis sheds light on the conditions under which collaboration is
compatible with maintaining the distinct logics of both academia and industry.

DNA Database

Next Generation Sequencing (NGS) is a term used to describe DNA sequencing technologies whereby
multiple pieces of DNA are sequenced in parallel. This allows large sections of the human genome to
be sequenced rapidly. The name is a catch-all phrase that refers to high-throughput sequencing rather than the previous Sanger sequencing technology, which was much slower. NGS is also known
as Massive Parallel Sequencing and the terms are often used interchangeably. Within this document
the term NGS refers to technologies that provide more wide-ranging information than the standard
DNA short tandem repeat (STR) profiling techniques that measure the number of repeats at a
specific region of non-coding DNA within an autosomal chromosome.
NGS sequencing technologies have developed rapidly over the past decade while the costs
associated with sequencing have declined. Whilst need and utility, and not merely the availability
and affordability of NGS technologies, should be the driver for their introduction into criminal
investigations, declining costs increase the feasibility of their introduction. It is therefore timely that
the ethical issues associated with the application of NGS in criminal investigations are considered. In
this document, the Ethics Group (EG) provides an outline of the NGS technologies that are likely to
become available in the next 10 years and a map (albeit not yet an in-depth discussion) of the ethical
challenges associated with the application of these technologies for forensic purposes.
As with the application of all new technologies for forensic investigation purposes, all practices
involving NGS technologies require ethical consideration, if possible prior to introduction. Ethical
concerns identified by the EG and wider stakeholders should be considered in conjunction with
arguments put forward by members of the wider public. Moreover, it is important to bear in mind
that there may well be ethical issues associated with not introducing NGS technologies if those
technologies are available and can impact on preventing and solving crimes and eliminating
individuals from investigations and suspicion. Furthermore, the right “to enjoy the benefits of scientific progress and its applications”, which is stipulated in Article 15 of the United Nations International Covenant on Economic, Social and Cultural Rights (1976), could be seen to prescribe an (at least ethical) obligation to consider very seriously the potential benefits of technology use in the context of criminal investigation.
If SNP data correlating with known (not externally visible) phenotypes are held in national
databases, then governments could query the databases to assess if associations for aggressive
behaviour or criminally relevant traits or phenotypes are evident. When research in this field
advances, profiles of ‘risky’ individuals, even in the absence of (re-)offending, could then be retained
for longer periods than those of others. Similarly, if SNP data were divulged to third parties (such as
employers or insurance companies), discrimination on the basis of supposed genetic risks could
ensue. At present, most countries have legislation in place that prohibits the use of forensic DNA samples or profiles for any purpose other than forensic identification.viii
Moreover, SNP analysis could yield more in-depth information about the possible distant genetic
relatedness between individuals whose profiles are stored in the database. While the approach,
known as ‘familial searching’ or ‘genetic proximity testing’ in the context of STR profiles, has helped
to solve several cases in the US and the UK, its efficacy could improve further with the use of SNP
testing. This, however, would also exacerbate the concerns voiced in connection with the approach.
Furthermore, genetic proximity testing could reinforce views about the alleged prevalence of criminality in certain families; reveal to relatives that a genetic relative has a profile on the database; or even reveal a genetic link (or lack thereof) between individuals unaware of it. For example, this might reveal paternity information that the parties involved had not asked for and that potentially could disrupt social and familial structures (Haimes et al. 2006ix and Greely et al. 2006x). Finally, it has been argued that the use of genetic proximity testing reinforces existing demographic disparities in the criminal justice system, in which arrests and convictions differ widely based on race, ethnicity, geographic location, and social class (Greely et al. 2006xi, Bieber et al. 2006xii and Kim et al. 2011xiii).
Whilst some panels of SNPs and other markers have been shown to provide accurate predictions of what someone looks like, or where they are likely to have originated from, concerns have been voiced with regard to making inferences about externally visible characteristics (EVCs) and other traits from DNA material. All forensic interpretation is made using a probabilistic approach, and it is vital that any prediction is made with a full understanding of the likely error rates, that the tests are fully validated with blind testing, and that results are presented for intelligence use only, in order to avoid over-interpretation of the data. It has been argued that because of its probabilistic nature, phenotypic profiling is especially prone to misinterpretation (Cho & Sankar 2004xiv). It has also been argued that, insofar as ‘predictions’ of EVCs are concerned, it is of utmost importance that criminal investigators keep in mind that the perpetrator may look very different from what the test result suggests (i.e. he may have blue eyes despite the test ‘predicting’ brown eyes with 68 per cent likelihood), and that intelligence-led mass screening should not be based on EVC predictions (M’Charek et al. 2012xv).
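The caution about probabilistic ‘predictions’ can be made concrete with a minimal sketch: even when one phenotype is clearly the most likely, the residual probability that the person looks different can remain large. Only the 68% figure comes from the text; the split of the remaining probability is assumed for illustration.

```python
# Sketch of why a 'most likely' EVC prediction must not be over-read.
# Probabilities below the 0.68 figure are assumed for illustration,
# not output of any real phenotyping model.

eye_colour_probs = {"brown": 0.68, "blue": 0.22, "intermediate": 0.10}

# Most probable phenotype, and the chance the true phenotype differs.
best = max(eye_colour_probs, key=eye_colour_probs.get)
p_wrong = 1.0 - eye_colour_probs[best]

print(f"predicted: {best}; chance the perpetrator differs: {p_wrong:.0%}")
```

Even here, roughly one in three perpetrators would not have the ‘predicted’ eye colour, which is why such results are suited to intelligence use only.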
There are already examples where EVCs have been used to produce ‘photofits’ of potential suspects
that may be erroneous (and examples of facial comparisons provided by companies using this
approach oddly have managed to predict the same hair style). Such an approach, currently based on
very limited information and which certainly does not build in an ageing effect, can be very
dangerous, although the separate predictions can be useful intelligence. Despite that, an EVC approach used in a historic murder case in Spain, solved by traditional STR typing, accurately predicted the individual's North African ancestry, dark eyes and dark hair, but the skin tone of the individual arrested was not as predicted. We already know that skin tone predictions are more prone to error, and it may be better to report only the predictions that can be made with greater certainty (‘not pale skin’, for example), along with the certainty of each prediction.
Also in the realm of STR profiles, new challenges lie ahead. The EG has already looked into the issue
of identifying male lineages as a result of new Y-chromosomal markers (Y STR markers) being
proposed for use in DNA profiling in England and Wales. The EG advised in favour of the use of Y-STR information in connection with serious crimes without opening the door to routine/speculative searches of genealogical links between males.6 The use of NGS, and newer STR approaches, will provide this genetic information by default, however, and so its governance must be considered.
But currently available autosomal (i.e. non-sex-chromosome) STR markers also allow 1st and 2nd degree relationships to be ascertained if necessary, and NGS platforms will make it possible to go further, without a separate workflow, to reveal relationships that most people would be unaware of, which could result in suggestions of a ‘criminal gene’.
The use of metabiomic approaches on human samples will also pose ethical challenges. Metabiomics is a sensitive technique that produces large volumes of data, some of which will have the potential to highlight sensitive information, such as the likelihood of certain diseases. For instance, a few studies have looked into gut microbiota in the faeces of animals to determine links between gut microbiota and obesity. This moves on from looking at which microbes are present in a sample to which ones are active and the active processes that are occurring within a human body.
A way to address many of the ethical concerns is to create analysis pipelines as a safeguard to
ensure that only forensically useful information is obtained rather than sensitive information. In the
future, discussions will be needed in order to determine whether information which has been
identified as forensically useful but also as sensitive can be included in forensic analyses. Due to the
uncertainty related to the capabilities of these methods and the vast areas where studies need to be
carried out, at present it is believed that such methods should be used as a theoretical investigative
tool rather than being used to produce active conclusions. This is, however, a developing area and
will need a watching oversight.

Familial searching
The EG was invited by the SB to respond to a consultation on a new policy that had been developed
to provide a framework for undertaking familial searching on the NDNAD. Familial searching is used
to identify potential suspects in a criminal investigation or unidentified bodies or victims. The
NDNAD is searched to identify individuals who could be biologically related to a person of interest.
The searches for biological relationships include parent/child relationships and siblings. The process
exploits the fact that members of a biological family share certain amounts of DNA.
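The allele-sharing idea behind familial searching can be sketched as a per-locus count. The locus names and profiles below are hypothetical, and real familial searches use likelihood ratios computed from validated allele-frequency data rather than raw counts like this.

```python
# Sketch of kinship scoring by counting shared STR alleles per locus.
# Profiles are hypothetical; real familial searching uses likelihood
# ratios over population allele frequencies, not raw shared counts.

def shared_alleles(profile_a, profile_b):
    """Count alleles shared at each locus (0, 1 or 2 per locus).
    A true parent and child share at least one allele at every locus."""
    score = 0
    for locus, alleles_a in profile_a.items():
        remaining = list(profile_b[locus])
        for allele in alleles_a:
            if allele in remaining:
                remaining.remove(allele)  # each allele can match only once
                score += 1
    return score

parent = {"D3S1358": (15, 16), "vWA": (17, 18), "FGA": (21, 24)}
child  = {"D3S1358": (15, 14), "vWA": (18, 18), "FGA": (24, 25)}

print(shared_alleles(parent, child))  # one shared allele at each locus
```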
Prior to the implementation of the familial searching policy, requests to undertake familial searches
were approved on a case-by-case basis by the SB. The EG reviewed the familial searching policy. It
decided that it would no longer be necessary for each application for familial searching to be
approved by the SB, as each application would be checked for compliance with the new policy. The
EG noted concerns about exceptional cases to undertake familial searches that did not meet the
requirements of the policy. The group suggested that the principles that would be applied when
assessing exceptional cases should be made explicit. If it was thought that the familial search request
was pushing the boundaries of the policy from an ethical viewpoint, then the EG and the Biometrics
Commissioner should be asked whether the search would be proportionate.
In its 2015 annual report the EG provided details of the Prüm treaty, which enables European Member States to exchange data rapidly on DNA, fingerprints and vehicle registration numbers belonging to persons suspected of involvement in terrorism, cross-border crime and illegal migration. A full Business and Implementation Case was undertaken by the UK in relation to
rejoining the Prüm treaty and the UK undertook a Prüm-style pilot to exchange DNA profiles with
other countries.

Heterogeneity in Cancer Treatments (Inter and Intra)


The causes of both inter- and intratumor heterogeneity in breast cancer are debated (80), partly because knowledge about the hierarchical relationship between different epithelial cells in the normal breast is still at the hypothetical stage, but also because the cells of origin and tumor progression paths of breast cancers are not yet defined.
Two hypothetical models explaining intertumor heterogeneity are frequently proposed (recently
reviewed by Visvader; ref. 81). The genetic model points to the same cell of origin but different
initiating events that will lead to different molecular subtypes. The other model points to each
subtype having different cells of origin. It is also acknowledged that a combinatory model might be
plausible as well, in which not only different cells of origin but also different initial events can explain
the diversity in molecular subtypes (82). The differences in the genome and the transcriptome
between luminal A and basal-like tumors indicate that these diseases have very distinct
pathogenesis. Several studies have also shown that genome-wide patterns of DNA methylation differ
between luminal and basal-like tumors, with similarities to CD24+ cells (luminal cells) and CD44+ cells (progenitor cells) (83–86). Although it is tempting to speculate that this is related to cell of origin,
recent work has pointed to luminal progenitors as the cell of origin for both basal-like and luminal
tumors (87, 88).
Tumor progression is an important basis to explain intratumor heterogeneity, and different models are plausible (89, 90). The clonal evolution model originally proposed by Nowell in 1976 suggests that tumors evolve by the expansion of one (monoclonal) or multiple (polyclonal) subpopulations to form the tumor mass (Figure 3A and ref. 91). In this egalitarian model, all clones have the potential for continued proliferation and Darwinian selection. In contrast, the cancer stem cell model suggests a hierarchical organization in which tumor heterogeneity is explained by several rare precursor cells, each giving rise to a different subpopulation within the tumor (Figure 3B). Another model for tumor progression, the mutator hypothesis, suggests that tumors evolve by the gradual and random accumulation of mutations as the tumor grows (Figure 3C), which suggests a vast degree of diversity in the tumor rather than clonal subpopulations (92). As illustrated in Figure 3D, different progression models can result in distinct spatial distribution of subpopulations, but whether such patterns are subtype specific is still unknown.
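The qualitative difference between these progression models can be caricatured in a toy simulation: under a mutator-style process most cells end up genetically distinct, whereas clonal expansion of a few subclones keeps diversity low. This is an illustrative sketch only; the functions and parameters are hypothetical and are not drawn from the cited models.

```python
import random

random.seed(1)  # make the toy run reproducible

def mutator_tumor(n_cells=200, mu=0.3):
    """Toy 'mutator hypothesis' growth: each division may add a private
    mutation, so genotypes proliferate (high intratumor diversity)."""
    cells = [frozenset()]  # start from one unmutated cell
    next_mutation_id = 0
    while len(cells) < n_cells:
        child = set(random.choice(cells))
        if random.random() < mu:           # random mutation at division
            child.add(next_mutation_id)
            next_mutation_id += 1
        cells.append(frozenset(child))
    return len(set(cells))                 # number of distinct genotypes

def clonal_tumor(n_cells=200, n_clones=3):
    """Toy clonal-evolution model: a few subclones expand, so the
    number of distinct genotypes stays at most n_clones."""
    return len(set(random.choices(range(n_clones), k=n_cells)))

print(mutator_tumor(), clonal_tumor())  # many genotypes vs. at most 3
```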

Variation between patients is often referred to as intertumor heterogeneity and is classically recognized through different morphology types, expression subtypes, or classes of genomic copy number patterns, among other differences. Variation within a single tumor, intratumor
number patterns, among other differences. Variation within a single tumor, intratumor
heterogeneity, has long been observed by histopathologists as sectors of different morphology or
staining behavior and has more recently been defined at the molecular level by the genetic
differences observed in tumor subpopulations and even among individual malignant cells.

Tumour heterogeneity refers to the existence of subpopulations of cells, with distinct genotypes and
phenotypes that may harbour divergent biological behaviours, within a primary tumour and its
metastases, or between tumours of the same histopathological subtype (intra- and inter-tumour,
respectively). With the advent of deep sequencing techniques, the extent and prevalence of intra-
and inter-tumour heterogeneity is increasingly acknowledged. There are features of intra-tumour
heterogeneity that form part of routine pathologic assessment, but its determination does not yet
form part of the clinical decision-making process. This mini-review aims to summarise the evidence
supporting the extent, causes and consequences of intra-tumour heterogeneity, and will suggest
how this knowledge may be integrated into future clinical practice and research efforts to optimise
patient care and clinical outcomes.
The issue of cancer heterogeneity, including the relationships between subpopulation within and
between tumour lesions, may have profound implications for drug therapy in cancer. Targeted
therapy, which attempts to exploit a tumour's dependence on a critical proliferation or survival
pathway, has significantly improved patient outcomes in a range of solid tumour types, but in the
majority of advanced disease cases, it is also apparent that targeted therapeutics do not help all
molecularly selected patients and even when clinical benefit is observed, it is often of limited
duration. If a tumour contains multiple branched events, depicting intra-tumour heterogeneity, then
even the targeting of a driver event may not significantly influence treatment outcome due to a low-
frequency subpopulation harbouring a resistance event in the tumour branches, leading to subclonal
selection and the acquisition of drug resistance, as observed for the low-frequency gatekeeper
mutation in EGFR in non-small cell lung cancer (NSCLC).

Drug resistance is arguably the most critical problem faced by oncologists, and as a result almost all
patients with metastatic solid tumours (with some notable exceptions such as seminoma) die of
their disease. There are many examples of drug resistance conferred by the emergence of subclones
harbouring specific somatic gene mutations. Imatinib-resistant mutations in the BCR-ABL fusion gene
have been identified in patients with chronic myeloid leukaemia; some of these mutations have
been shown to precede systemic treatment, and additionally, to co-exist with subclones carrying
different imatinib-resistant mutations in treatment-naïve patients (Shah et al, 2002). Intra-tumour
heterogeneity of drug resistance mechanisms occurs in gastrointestinal stromal tumours (GIST)
treated with imatinib or sunitinib; 9 of 11 patients with oncogenic KIT mutations developed
secondary drug-resistant mutations, 6 of whom had two or more different mutations in separate
metastases, and 3 of whom had 2 secondary KIT mutations in the same metastasis (Liegl et al, 2008).
The bewildering complexity evidenced by multiple distinct mutations occurring in separate or identical metastases begins to illuminate the vast somatic mutational reservoir present in these tumours.

Sleep
Over the last 15 years, research taking a systems approach to neuroimmunology has accumulated
surprisingly strong evidence that sleep enhances immune defence, in agreement with the popular
wisdom that ‘sleep helps healing’. Although the communication between sleep regulatory networks
in the central nervous system and the cells and tissues of the immune system is essentially
bidirectional, in this review we will focus on the role of sleep in the proper functioning of the immune
system. First, we will give a short overview of the signals that mediate the communication
between the nervous and immune systems and thus provide the basis for the influence of sleep on
immune processes. Because sleep is normally embedded in the circadian sleep–wake rhythm, we
will then review studies that examined immune changes associated with the sleep (or rest) phase of
this rhythm, without attempting to isolate the effects of sleep per se from those of the circadian rhythm.
Thereafter, we will concentrate on studies that aimed to disentangle the immune-supporting
effects of sleep from those of the circadian system. Results from these studies, many of them
comparing the effects of sleep during the normal rest phase with 24 h of continuous waking, support
the view that sleep is particularly important for initiating effective adaptive immune responses that
eventually produce long-lasting immunological memory. We will close with some remarks about the
detrimental effects of prolonged sleep loss on immune function, underscoring the importance of proper
sleep for general health.

Sleep and the circadian system exert a strong regulatory influence on immune functions.
Investigations of the normal sleep–wake cycle showed that immune parameters such as the numbers of
undifferentiated naïve T cells and the production of pro-inflammatory cytokines exhibit peaks during
early nocturnal sleep, whereas circulating numbers of immune cells with immediate effector
functions, such as cytotoxic natural killer cells, as well as anti-inflammatory cytokine activity, peak during
daytime wakefulness. Although it is difficult to entirely dissect the influence of sleep from that of the
circadian rhythm, comparisons of the effects of nocturnal sleep with those of 24-h periods of
wakefulness suggest that sleep facilitates the extravasation of T cells and their possible
redistribution to lymph nodes. Moreover, such studies revealed a selectively enhancing influence of
sleep on cytokines promoting the interaction between antigen presenting cells and T helper cells,
like interleukin-12. Sleep on the night after experimental vaccinations against hepatitis A produced a
strong and persistent increase in the number of antigen-specific Th cells and antibody titres.
Together these findings indicate a specific role of sleep in the formation of immunological memory.
This role appears to be associated in particular with the stage of slow wave sleep and the
accompanying pro-inflammatory endocrine milieu, characterized by high growth hormone and
prolactin levels and low cortisol and catecholamine concentrations.
Concept: Sleep supports the initiation of an adaptive immune response. The invading antigen is
taken up and processed by antigen presenting cells (APC) which present fragments of the antigen to
T helper (Th) cells, with the two kinds of cells forming an ‘immunological synapse’. The concomitant
release of interleukin (IL)-12 by APC induces a Th1 response that supports the function of antigen-
specific cytotoxic T cells and initiates the production of antibodies by B cells. This response finally
generates long-lasting immunological memory for the antigen. Sleep, in particular slow wave sleep
(SWS), and the circadian system act in concert to generate a pro-inflammatory hormonal milieu with
enhanced growth hormone and prolactin release as well as reduced levels of the anti-inflammatory
stress hormone cortisol. The hormonal changes in turn support the early steps in the generation of
an adaptive immune response in the lymph nodes. In analogy to neurobehavioural memory formed
in the central nervous system, the different phases of immunological memory might be divided into an
encoding, a consolidation and a recall phase. In both the central nervous system and the immune
system, sleep specifically supports the consolidation stage of the respective memory types.

Redundancy
Innate host defense pathways consist of microbial sensors, their signaling pathways, and the
antimicrobial effector mechanisms. Several classes of host defense pathways are currently known,
each comprising several pattern-recognition receptors that detect different types of pathogens.
These pathways interact with one another in a variety of ways that can be categorized into
cooperation, complementation, and compensation.
It is important to note that most pathogens can be detected by more than one microbial sensor.
Thus, bacterial pathogens can be recognized by several TLRs, NODs, phagocytic receptors, the
complement system, inflammasomes, and in some cases intracellular DNA sensors. Fungal
pathogens can be detected by TLRs, Dectins, the complement, and inflammasomes. Viral pathogens
can be detected by TLRs as well as intracellular RNA and DNA sensors, and in some cases, by
inflammasomes (see reviews in this issue). Thus, the innate immune system has a great deal of
apparent redundancy at the level of pathogen detection.
Viral infections lead to production of type-I IFNs (IFN-α and IFN-β), which induce expression of over
200 antiviral genes that can interfere with multiple stages of viral infection cycles and sensitize
infected cells to killing by cytotoxic NK cells and CD8+ T cells. In addition, type I IFNs promote
cytotoxic activity of NK and CD8+ T cells and induce an antiviral state in neighboring cells (Stetson
and Medzhitov, 2006). Importantly, all these responses can be induced by any of the viral pathogen
sensors as their signaling pathways converge on IRF3 and/or IRF7 activation and type I IFN production
(Honda and Taniguchi, 2006). An important difference exists between IFN induction by cell-intrinsic
sensors, such as RLRs and cytosolic DNA sensors, and cell-extrinsic mechanisms mediated by TLR3,
TLR7, TLR8, and TLR9. Cell-intrinsic sensors are ubiquitous and trigger IFN-β production in infected
cells, whereas the TLRs involved in viral recognition are expressed on specialized cells, such as
plasmacytoid dendritic cells (pDCs), which make large amounts of IFN-α in infected tissues. The
common strategy of host defense against viral pathogens is to interfere with viral replication and
spread and to kill infected cells, often with the help of cytotoxic lymphocytes (NK cells and
CD8+ T cells). This latter strategy is particularly useful in tissues with a high rate of renewal, such as
the epithelium. The cell types that cannot be easily replaced, such as neurons and cardiomyocytes,
may rely on alternative mechanisms of antiviral defense, which remain to be fully characterized.
Detection of bacterial, fungal, and protozoan pathogens by multiple receptors results in the
induction of antimicrobial peptides (e.g., defensins and cathelicidins) and enzymes (e.g., iNOS,
NADPH oxidase, lysozymes and proteases), as well as proteins involved in deprivation of iron
(NRAMP, lactoferrin, lipocalins) and tryptophan (IDO). Macrophages and neutrophils, acute phase
proteins, the complement system as well as surface epithelia producing mucins and antimicrobial
peptides, all contribute to host defense against bacterial, fungal, and protozoan infections.
Importantly, these effector mechanisms can be induced by multiple microbial sensors (TLRs, NODs,
and Dectins) through the NF-κB and MAP kinase signaling pathways or by cytokines induced
downstream of these pathways. The common strategy of host defense against the majority of
bacterial, fungal, and protozoan pathogens is their direct killing by antimicrobial effectors and the
generation of an uninhabitable microenvironment (low pH, nutrient deprivation).
Most pathogens can be recognized by multiple microbial sensors, which in turn can induce multiple
antimicrobial effector mechanisms. There are several reasons for the existence of diverse
recognition and effector responses. The existence of multiple pathogen detecting pathways allows
for a greater “coverage” of the microbial world, whereas diversity and redundancy of the effectors
accounts for robustness of host defenses in the face of continuous pathogen evolution. Clearly,
different sensors and effectors have evolved to detect and eliminate different classes of pathogens,
for example, RNA viruses versus tapeworms.

When a particular host defense pathway is disabled by mutations, as happens in experimental
animals and immunodeficient patients, infection susceptibility will depend on whether the defect
can be compensated for by the remaining pathways (Figure 2). If pathogen Px can be detected by
two sensors, A and B, that can both activate a protective effector response, inactivation of pathway
A (due to mutation or by pathogen evasion mechanisms) can be compensated by pathway B and the
host will remain resistant to pathogen Px (Figure 2A). However, the same host can be susceptible to
pathogen Py that is detected by pathway A only (Figure 2B). A clinical outcome of immunodeficiency
in pathway A would be susceptibility to pathogen Py but not Px, and the incorrect conclusion would
be that pathway A plays a role in defense against pathogen Py but not Px. The correct interpretation
of the same clinical observation, however, is that pathway B can compensate for a defect in A in
response to Px but not Py because Py does not activate pathway B.
In the second type of compensation, pathogen Px is detected by two pathways, A and B, that
activate distinct effector mechanisms: EM1 and EM2, respectively. When pathway A is inactivated,
the host will be protected from pathogen Px if EM2 is sufficient for protection (Figure 2C) and
susceptible to pathogen Py if EM2 is insufficient (Figure 2D). Patients with pathway A mutation
would be protected from pathogen Px and susceptible to pathogen Py. This clinical presentation may
lead to an incorrect conclusion that sensor A is irrelevant for protection from Px when in fact a
defect in A is compensated by EM2. If Px can evade recognition by sensor B, however (Figure 2C),
the outcome will be that sensor A is critical for protection from Px.
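The compensation logic of these scenarios can be captured in a toy model. The sensor wiring and effector-sufficiency flags below are hypothetical illustrations of the Figure 2 reasoning, not data from the text:

```python
# Toy model of innate-immune compensation (Figure 2 logic).
# A host is protected from a pathogen if at least one intact sensor
# pathway both detects it AND drives an effector response that is
# sufficient for protection on its own.

# Hypothetical wiring: each pathway lists what it detects and whether
# its effector mechanism alone suffices.
PATHWAYS = {
    "A": {"detects": {"Px", "Py"}, "sufficient": True},
    "B": {"detects": {"Px"}, "sufficient": True},
}

def protected(pathogen, knocked_out=()):
    """True if any remaining pathway detects the pathogen and its
    effector response is sufficient for protection."""
    return any(
        p["detects"] >= {pathogen} and p["sufficient"]
        for name, p in PATHWAYS.items()
        if name not in knocked_out
    )

assert protected("Py")                          # intact host resists Py
assert protected("Px", knocked_out=("A",))      # B compensates for Px
assert not protected("Py", knocked_out=("A",))  # only A detects Py
```

The last two assertions reproduce the interpretive trap described above: a pathway-A deficiency presents clinically as susceptibility to Py only, even though A also contributes to defense against Px.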
Paradoxically, one common consequence of immunodeficiency can be immunopathology. The
reason for this can be illustrated as follows: if there are two pathways (A and B in Figure 3A) that
can be activated in response to a given pathogen, inactivation of one of these pathways (pathway A
in Figure 3B) due to immunodeficiency will be compensated by the intact pathway B. However, the
intact pathway has to be hyperactivated to provide protection in the absence of pathway A. This is
because pathway B has to “work alone” when pathway A is deficient, and because pathway A
deficiency can result in (at least transient) increase in pathogen burden compared to the normal
situation when both pathways are intact. Thus, a decrease in antimicrobial peptide production by
mucosal epithelia may result in a compensatory increase in mucus production, which can affect
respiratory or digestive function.
Redundancy is a notion that is often invoked in the literature, when an expected phenotype is not
observed in the absence of a specific gene. It is sometimes implicitly appreciated that redundancy is
conditional on both the environment and the readout. For example, the left arm is redundant for a
chess player, but not for a piano player.
The evolution of redundancy has been a subject of debate for decades, as simple logic would suggest
that true redundancy would be evolutionarily unstable because there would be no selective pressure
to maintain it.

MDR
The cell membrane of methicillin-resistant Staphylococcus aureus (MRSA) is highly fluid, but it contains specialized sections that are more rigid.
These sections may serve as scaffolds for groups of proteins to work together.
The researchers found that the protein PBP2a, which gives MRSA its antibiotic resistance, collects in
these membrane ‘microdomains’. Treatment with statins interfered with microdomain lipids and
PBP2a activity. Mice infected with MRSA were much more likely to survive on a regimen of statins
and antibiotics than on antibiotics alone. 
Disruption of membrane microdomains could offer a new strategy for fighting multi-drug-resistant
infections with conventional antibiotics, the authors say.

The rise of antibiotic resistance has researchers trying new strategies for unlocking useful
compounds hidden in the genes of microbes — the source of most antibiotics used today. Tim Bugni
of the University of Wisconsin–Madison and his colleagues
grew Rhodococcus and Micromonospora bacteria, isolated from marine invertebrates, in the same
culture, in the hope that doing so would trigger the expression of antibiotic genes that are silent
when each microbe is grown singly. 
This yielded an antibiotic called keyicin, which is produced by a Micromonospora species but only in
the presence of Rhodococcus. Keyicin can inhibit the growth of ‘Gram-positive’ bacteria, including an
antibiotic-resistant strain of the pathogen Staphylococcus aureus. And, unlike other antibiotics of
similar structure, keyicin does not seem to damage DNA.

Metals
Metals are important in biochemistry, and the concentrations of many are highly regulated. This
paper introduces a thematic series, Metals in Biology, which includes Minireviews on three metals:
iron, copper, and selenium. Deficiencies and excesses of all three of these metals cause problems in
human health. The three Minireviews deal with regulation of iron homeostasis, the roles of copper
metabolism in cell regulation and disease, and the functions of selenoproteins.
The field of biochemistry is often considered one dealing with amino acids and proteins,
carbohydrates, lipids, and nucleic acids. In the first course a student learns that proteins act as
enzymes, carbohydrates are degraded to provide energy, lipids are used in membranes and also
provide energy, and nucleic acids are used to regulate cells. What might not be an immediate
thought is the role of metals, which may first cause one to think of inorganic chemistry. One
estimate is that 30% of enzymes use metals (this percentage is arbitrary and has not been evaluated
systematically, at least since the delineation of complete genomes). As one moves from the basic
biochemistry class to the laboratory, it becomes quickly apparent that metals are important. For
instance, the presence of a divalent cation (e.g. magnesium) is critical for DNA polymerases, and one
can stop a reaction in a millisecond by adding EDTA. Metals are used in many ways. Many are
important in electron transfer (e.g. iron), particularly when complexed in appropriate lattices. Metals
also facilitate enzyme catalysis; they act as "super" electrophiles and have varying polarizability to
modulate their properties in the contexts of various functions. Excess concentrations of many
metals, especially redox-active ones, are also a health issue, due to production of reactive oxygen
species and other problems. Therefore, free concentrations of most metals are highly regulated,
including the three subjects of this Thematic Series: iron, copper, and selenium. Both deficiencies
and overloads are associated with diseases in humans.
The first Minireview in this series, by Zhang and Enns, deals with recently identified proteins
involved in iron homeostasis, a complex process. Uptake involves transferrin, its receptor, and the
peptide hepcidin. Hepcidin regulation in the liver, in turn, involves several regulatory factors. These
pathways are relevant in states such as hypoxia and inflammation.

Iron is an essential nutrient required for a variety of biochemical processes. It is a vital component of the
heme in hemoglobin, myoglobin, and cytochromes and is also an essential cofactor for non-heme enzymes
such as ribonucleotide reductase, the limiting enzyme for DNA synthesis. When in excess, iron is toxic
because it generates superoxide anions and hydroxyl radicals that react readily with biological molecules,
including proteins, lipids, and DNA. As a result, humans possess elegant control mechanisms to maintain
iron homeostasis by coordinately regulating iron absorption, iron recycling, and mobilization of stored iron.
Disruption of these processes causes either iron-deficiency anemia or iron overload disorders. In this
minireview, we focus on the roles of recently identified proteins in the regulation of iron homeostasis.

The second Minireview, by Turski and Thiele, deals with another redox-active transition metal,
copper, which has some of the same biological issues as iron, e.g. oxidative stress associated with an
overload of free metal ions. The integral membrane protein Ctr1 plays a major role in the import
process. This protein also has a role in the uptake of a nonphysiological metal complex,
cisplatin, a drug used in cancer treatment. A number of copper metabolism issues in cancer therapy
are also discussed, along with biochemical issues in copper deficiency and future questions in the
field of copper metabolism.

Recent mapping of functional sequence elements in the human genome has led to the realization that
transcription is pervasive and that noncoding RNAs compose a significant portion of the transcriptome.
Some dominantly inherited neurological disorders are associated with the expansion of microsatellite
repeats in noncoding regions that result in the synthesis of pathogenic RNAs. Here, we review RNA gain-of-
function mechanisms underlying three of these microsatellite expansion disorders to illustrate how some
mutant RNAs cause disease.

The third Minireview involves an element not always immediately considered a metal,
selenium. Lu and Holmgren discuss selenoproteins and their functions. The synthesis of
selenoproteins is highly unusual in that UGA codons, normally read as stops, are utilized.

Selenium is an essential micronutrient for man and animals. The role of selenium has been attributed
largely to its presence in selenoproteins as the 21st amino acid, selenocysteine (Sec, U). Sec is encoded by
TGA in DNA. A unique mechanism is used to decode the UGA codon in mRNA to co-translationally
incorporate Sec into the growing polypeptide because there is no free pool of Sec. In the human genome, 25
genes for selenoproteins have been identified. Selenoproteins such as glutathione peroxidases, thioredoxin
reductases, and iodothyronine deiodinases are involved in redox reactions, and Sec is an active-site residue
essential for catalytic activity. Selenoproteins have biological functions in oxidoreductions, redox signaling,
antioxidant defense, thyroid hormone metabolism, and immune responses. They thus possess a strong
correlation with human diseases such as cancer, Keshan disease, virus infections, male infertility, and
abnormalities in immune responses and thyroid hormone function.
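The UGA-recoding mechanism described above can be sketched as a toy translation loop. The short codon table, the example mRNA and the single boolean SECIS flag are all illustrative simplifications; real Sec incorporation additionally requires a SECIS-binding protein, a dedicated tRNA and a specialized elongation factor.

```python
# Toy ribosome: translate codons one at a time, recoding UGA to
# selenocysteine (single-letter code U) only when a SECIS element is
# present in the mRNA; otherwise UGA terminates translation as a stop.

# Deliberately tiny codon table for the example ('*' marks stop codons).
CODON_TABLE = {"AUG": "M", "UGU": "C", "GGA": "G", "UGA": "*", "UAA": "*"}

def translate(codons, has_secis=False):
    peptide = []
    for codon in codons:
        aa = CODON_TABLE[codon]
        if codon == "UGA" and has_secis:
            aa = "U"      # UGA recoded to selenocysteine, the 21st amino acid
        if aa == "*":
            break         # a genuine stop codon terminates translation
        peptide.append(aa)
    return "".join(peptide)

mrna = ["AUG", "UGU", "UGA", "GGA", "UAA"]
assert translate(mrna) == "MC"                     # UGA read as stop
assert translate(mrna, has_secis=True) == "MCUG"   # UGA read as Sec
```

The two assertions show the crux of the mechanism: the same UGA codon yields either a truncated peptide or a full-length selenoprotein depending on the mRNA context.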

Humans have 25 selenoprotein genes, and the known functions include thioredoxin reductases,
glutathione peroxidases, and iodothyronine deiodinases. However, Lu and Holmgren point out that
the functions of many of the selenoproteins remain unknown. The biological consequences of
unusually high or low levels of selenoproteins are now beginning to be studied.

Lipid Rafts
Ten years ago, the lipid raft field was suffering from ambiguous methodology and imprecise
nomenclature.
New high-resolution imaging methods are now giving insights into raft dynamics. Together with
other studies, this has led to changes in our concept of rafts.
Rafts in plasma membranes can be characterized by three different states: dynamic nanoscale
assemblies, raft platforms stabilized by oligomerization and micrometre-scale phase separation.
Lipidomics is beginning to give comprehensive views of the lipid composition of raft domains.
Three examples of roles that rafts have in cellular function are: T cell signalling, HIV assembly and
membrane trafficking.
A key open issue for the field is how lipids interact with integral raft proteins.

Cell membranes contain hundreds of lipids in two asymmetric leaflets and a plethora of proteins.
For several decades, membrane research was dominated by the idea that proteins were the key
factors for membrane functionality, whereas lipids were regarded as a passive, fluid solvent.
Introducing the lipid raft concept in 1997, we postulated that sphingolipid–cholesterol–protein
assemblies could function in membrane trafficking and signalling. These assemblies, or rafts,
were thought to be characterized by their tight lipid packing, similar to the sterol-dependent,
liquid-ordered phase in model membranes. The novelty of the raft concept was that it brought
lipids back into the picture by giving them a function and by introducing chemical specificity into
the lateral heterogeneity of membranes.

T cell signalling. An early clue that rafts might affect T cell signalling was the observation that
antibody-mediated cross-linking of GPI-anchored proteins (which do not span the membrane)
could stimulate signalling. Later, DRM analysis showed that factors important for T cell signalling
were detergent-insoluble, whereas engineered palmitoylation-deficient proteins became
detergent-soluble and impaired T cell activation. Cholesterol depletion inhibited T cell activation,
whereas co-patching experiments using cholera toxin induced part of the T cell-activation
programme and led to microscopically observable domains containing essential T cell-activation
proteins. Taken together, these data suggested that lipid rafts are involved in T cell signalling.

An important issue in T cell signalling is how TCRs connect to 10–100 cognate pMHC molecules
among a total cell surface pool of 10^4–10^5 MHC molecules on an APC. The kinetics of binding
between TCRs and the cognate pMHC have been evaluated using engineered proteins in solution,
and only weak affinities and dissociation rates were seen. However, when binding was measured
between a TCR in a T cell plasma membrane and the cognate pMHC integrated into another
membrane, accelerated kinetics and a more than 100-fold higher affinity were observed. These
data explain the rapid and efficient recognition of an APC and also show that TCRs can serially
engage a few cognate pMHCs in a large, self-MHC background. Cholesterol depletion was found
to reduce the effective two-dimensional affinities between TCRs and pMHCs.

Virus budding. Many viruses acquire a membrane envelope when budding off from the host cell
plasma membrane. Some viruses, including HIV and influenza, seem to do this by organizing a
lipid raft domain around their nucleocapsid that includes viral glycoproteins and excludes most
host cell surface proteins from the budding viral envelope. The Gag protein of HIV, the matrix
domain of which assembles with the Env glycoprotein in the plasma membrane, becomes
detergent-resistant while driving the budding process; furthermore, budding is cholesterol- and
sphingolipid-dependent. If labelled cholera toxin is applied to HIV-expressing cells, Gag, GM1 and
the virus proteins co-patch in distinct clusters that segregate away from clusters of non-raft
transferrin receptors. These data suggested that the assembly of the virus envelope at the host
cell plasma membrane involves the clustering of rafts. In support of this hypothesis, the lipidome
of purified HIV particles showed that sphingolipids, cholesterol, plasmenyl
phosphatidylethanolamine (PtdEtn), PtdSer and saturated PtdChos were enriched in the HIV
membrane relative to total host cell membranes.
